Ordinal Data Modeling is a comprehensive treatment of ordinal data models from both likelihood and Bayesian perspectives. Written for graduate students and researchers in the statistical and social sciences, this book describes a coherent framework for understanding binary and ordinal regression models, item response models, graded response models, and ROC analyses, and for exposing the close connection between these models. A unique feature of this text is its emphasis on applications. All models developed in the book are motivated by real datasets, and considerable attention is devoted to the description of diagnostic plots and residual analyses. Software and datasets used for all analyses described in the text are available on websites listed in the preface.
Our time is characterized by an explosive growth in the use of ever more complicated and sophisticated (computer) models. These models rely on dynamical systems theory for the interpretation of their results and on probability theory for the quantification of their uncertainties. A conscientious and intelligent use of these models requires that both these theories are properly understood. This book aims to provide such understanding. It gives a unifying treatment of dynamical systems theory and probability theory. It covers the basic concepts and statements of these theories, their interrelations, and their applications to scientific reasoning and physics. The book stresses the underlying concepts and mathematical structures but is written in a simple and illuminating manner without sacrificing too much mathematical rigor. The book is aimed at students, post-docs, and researchers in the applied sciences who aspire to better understand the conceptual and mathematical underpinnings of the models that they use. Despite the peculiarities of any applied science, dynamics and probability are the common and indispensable tools in any modeling effort. The book is self-contained, with many technical aspects covered in appendices, but does require some basic knowledge in analysis, linear algebra, and physics. Peter Muller, now a professor emeritus at the University of Hawaii, has worked extensively on ocean and climate models and the foundations of complex system theories.
This book presents the latest findings on statistical inference in multivariate, multilinear and mixed linear models, providing a holistic presentation of the subject. It contains pioneering and carefully selected review contributions by experts in the field and guides the reader through topics related to estimation and testing of multivariate and mixed linear model parameters. Starting with the theory of multivariate distributions, covering identification and testing of covariance structures and means under various multivariate models, it goes on to discuss estimation in mixed linear models and their transformations. The results presented originate from the work of the research group Multivariate and Mixed Linear Models and their meetings held at the Mathematical Research and Conference Center in Bedlewo, Poland, over the last 10 years. Featuring an extensive bibliography of related publications, the book is intended for PhD students and researchers in modern statistical science who are interested in multivariate and mixed linear models.
This revised textbook motivates and illustrates the techniques of applied probability by applications in electrical engineering and computer science (EECS). The author presents information processing and communication systems that use algorithms based on probabilistic models and techniques, including web searches, digital links, speech recognition, GPS, route planning, recommendation systems, classification, and estimation. He then explains how these applications work and, along the way, provides the readers with the understanding of the key concepts and methods of applied probability. Python labs enable the readers to experiment and consolidate their understanding. The book includes homework, solutions, and Jupyter notebooks. This edition includes new topics such as Boosting, Multi-armed bandits, statistical tests, social networks, queuing networks, and neural networks. For ancillaries related to this book, including examples of Python demos and also Python labs used in Berkeley, please email Mary James at [email protected]. This is an open access book.
This book provides a comprehensive methodology to measure systemic risk in many of its facets and dimensions based on state-of-the-art risk assessment methods. Systemic risk has gained attention in the public eye since the collapse of Lehman Brothers in 2008. The bankruptcy of the fourth-biggest bank in the USA raised questions whether banks that are allowed to become "too big to fail" and "too systemic to fail" should carry higher capital surcharges on their size and systemic importance. The Global Financial Crisis of 2008-2009 was followed by the Sovereign Debt Crisis in the euro area that saw the first Eurozone government de facto defaulting on its debt and prompted actions at international level to stem further domino and cascade effects to other Eurozone governments and banks. Against this backdrop, a careful measurement of systemic risk is of utmost importance for the new capital regulation to be successful and for sovereign risk to remain in check. Most importantly, the book introduces a number of systemic fragility indicators for banks and sovereigns that can help to assess systemic risk and the impact of macroprudential and microprudential policies.
Success through Statistics: Applying Metacognitive Skills to Social Science Research encourages students to recognize and cultivate self-efficacy, self-monitoring, resilience, and other metacognitive and executive function skills to overcome internal and external obstacles related to the study of statistics. The text covers the concepts introduced in a foundational statistics course while simultaneously sharpening students' metacognitive skills to inspire new belief in themselves and nurture academic success. The opening chapters develop the metacognitive framework for the statistical concepts presented throughout. Later chapters familiarize readers with statistical research methods and designs and types of measurement and data. Students form a strong understanding of basic statistical concepts and learn how to develop and test a hypothesis. Dedicated chapters discuss normal distributions and measures of variability, simple statistics with two variables, correlations and the chi-square test of independence, analysis of variance, and multiple correlation and linear regression. The text concludes with a chapter about nonparametric tests. Applied learning exercises throughout reinforce the material and immerse students in the metacognitive framework. Innovative and approachable, Success through Statistics is an ideal text for foundational courses in the discipline.
This edited collection brings together internationally recognized experts in a range of areas of statistical science to honor the contributions of the distinguished statistician, Barry C. Arnold. A pioneering scholar and professor of statistics at the University of California, Riverside, Dr. Arnold has made exceptional advancements in different areas of probability, statistics, and biostatistics, especially in the areas of distribution theory, order statistics, and statistical inference. As a tribute to his work, this book presents novel developments in the field, as well as practical applications and potential future directions in research and industry. It will be of interest to graduate students and researchers in probability, statistics, and biostatistics, as well as practitioners and technicians in the social sciences, economics, engineering, and medical sciences.
Networks of queues arise frequently as models for a wide variety of congestion phenomena. Discrete event simulation is often the only available means for studying the behavior of complex networks, and many such simulations are non-Markovian in the sense that the underlying stochastic process cannot be represented as a continuous time Markov chain with countable state space. Based on representation of the underlying stochastic process of the simulation as a generalized semi-Markov process, this book develops probabilistic and statistical methods for discrete event simulation of networks of queues. The emphasis is on the use of underlying regenerative stochastic process structure for the design of simulation experiments and the analysis of simulation output. The most obvious methodological advantage of simulation is that in principle it is applicable to stochastic systems of arbitrary complexity. In practice, however, it is often a decidedly nontrivial matter to obtain from a simulation information that is both useful and accurate, and to obtain it in an efficient manner. These difficulties arise primarily from the inherent variability in a stochastic system, and it is necessary to seek theoretically sound and computationally efficient methods for carrying out the simulation. Apart from implementation considerations, important concerns for simulation relate to efficient methods for generating sample paths of the underlying stochastic process, the design of simulation experiments, and the analysis of simulation output.
Presents an important and unique introduction to random walk theory. Random walk is a stochastic process that has proven to be a useful model in understanding discrete-state discrete-time processes across a wide spectrum of scientific disciplines. Elements of Random Walk and Diffusion Processes provides an interdisciplinary approach by including numerous practical examples and exercises with real-world applications in operations research, economics, engineering, and physics. Featuring an introduction to powerful and general techniques that are used in the application of physical and dynamic processes, the book presents the connections between diffusion equations and random motion. Standard methods and applications of Brownian motion are addressed in addition to Levy motion, which has become popular in random searches in a variety of fields. The book also covers fractional calculus and introduces percolation theory and its relationship to diffusion processes. With a strong emphasis on the relationship between random walk theory and diffusion processes, Elements of Random Walk and Diffusion Processes features:
* Basic concepts in probability, an overview of stochastic and fractional processes, and elements of graph theory
* Numerous practical applications of random walk across various disciplines, including how to model stock prices and gambling, describe the statistical properties of genetic drift, and simplify the random movement of molecules in liquids and gases
* Examples of the real-world applicability of random walk such as node movement and node failure in wireless networking, the size of the Web in computer science, and polymers in physics
* Plentiful examples and exercises throughout that illustrate the solution of many practical problems
Elements of Random Walk and Diffusion Processes is an ideal reference for researchers and professionals involved in operations research, economics, engineering, mathematics, and physics.
The book is also an excellent textbook for upper-undergraduate and graduate level courses in probability and stochastic processes, stochastic models, random motion and Brownian theory, random walk theory, and diffusion process techniques.
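As a flavor of the material such a course covers, the discrete-state, discrete-time process described above can be simulated in a few lines. This is a minimal illustrative sketch of a one-dimensional symmetric random walk (the function name and step convention are this example's own, not taken from the book):

```python
import random

def simple_random_walk(n_steps, seed=0):
    """Simulate a one-dimensional symmetric random walk.

    At each step the walker moves +1 or -1 with equal probability;
    the returned list holds the position after each step, starting at 0.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    position = 0
    path = [0]
    for _ in range(n_steps):
        position += rng.choice((-1, 1))
        path.append(position)
    return path

path = simple_random_walk(1000)
```

Plotting `path` against the step index gives the familiar jagged trajectory; averaging many independent paths illustrates the zero-mean, variance-proportional-to-time behavior that links random walks to diffusion.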
This book presents innovations in the mathematical foundations of financial analysis and numerical methods for finance and applications to the modeling of risk. The topics selected include measures of risk, credit contagion, insider trading, information in finance, stochastic control and its applications to portfolio choices and liquidation, models of liquidity, pricing, and hedging. The models presented are based on the use of Brownian motion, Levy processes and jump diffusions. Moreover, fractional Brownian motion and ambit processes are also introduced at various levels. The chosen blend of topics gives an overview of the frontiers of mathematics for finance. New results, new methods and new models are all introduced in different forms according to the subject. Additionally, the existing literature on the topic is reviewed. The diversity of the topics makes the book suitable for graduate students, researchers and practitioners in the areas of financial modeling and quantitative finance. The chapters will also be of interest to experts in the financial market interested in new methods and products. This volume presents the results of the European ESF research networking program Advanced Mathematical Methods for Finance.
Classical Methods of Statistics is a guidebook combining theory and practical methods. It is especially conceived for graduate students and scientists who are interested in the applications of statistical methods to plasma physics. Thus it also provides concise information on experimental aspects of fusion-oriented plasma physics. In view of the first three basic chapters, it can be fruitfully used by students majoring in probability theory and statistics. The first part deals with the mathematical foundation and framework of the subject. Some attention is given to the historical background. Exercises are added to help readers understand the underlying concepts. In the second part, two major case studies are presented which exemplify the areas of discriminant analysis and multivariate profile analysis, respectively. To introduce these case studies, an outline is provided of the context of magnetic plasma fusion research. In the third part an overview is given of statistical software; separate attention is devoted to SAS and S-PLUS. The final chapter presents several datasets and gives a description of their physical setting. Most of these datasets were assembled at the ASDEX Upgrade Tokamak. All of them are accompanied by exercises in the form of guided (minor) case studies. The book concludes with translations of key concepts into several languages.
This book includes a wide selection of the papers presented at the 48th Scientific Meeting of the Italian Statistical Society (SIS2016), held in Salerno on 8-10 June 2016. Covering a wide variety of topics ranging from modern data sources and survey design issues to measuring sustainable development, it provides a comprehensive overview of the current Italian scientific research in the fields of open data and big data in public administration and official statistics, survey sampling, ordinal and symbolic data, statistical models and methods for network data, time series forecasting, spatial analysis, environmental statistics, economic and financial data analysis, statistics in the education system, and sustainable development. Intended for researchers interested in theoretical and empirical issues, this volume provides interesting starting points for further research.
This book is the second edition of Facet Theory and the Mapping Sentence: Evolving Philosophy, Use and Application (2014). It consolidates the qualitative and quantitative research positions of facet theory and delves deeper into their qualitative application in psychology, the social and behavioural sciences, and the humanities. In their traditional quantitative guise, facet theory and its mapping sentence incorporate multi-dimensional statistics. They are also a way of thinking systematically and thoroughly about the world. The book is particularly concerned with the development of the declarative mapping sentence as a tool and an approach to qualitative research. The evolution of the facet theory approach is presented along with many examples of its use in a wide variety of research domains. Since the first edition, the major advance in facet theory has been the formalization of the use of the declarative mapping sentence, and this is given a prominent position in the new edition. The book will be compelling reading for students at all levels and for academics and research professionals from the humanities, social sciences and behavioural sciences.
Mathematical and Statistical Estimation Approaches in Epidemiology compiles theoretical and practical contributions of experts in the analysis of infectious disease epidemics in a single volume. Recent collections have focused on the analysis and simulation of deterministic and stochastic models whose aim is to identify and rank epidemiological and social mechanisms responsible for disease transmission. The contributions in this volume focus on the connections between models and disease data, with emphasis on the application of mathematical and statistical approaches that quantify model and data uncertainty. The book is aimed at public health experts, applied mathematicians and scientists in the life and social sciences, particularly graduate or advanced undergraduate students, who are interested not only in building and connecting models to data but also in applying and developing methods that quantify uncertainty in the context of infectious diseases. Chowell and Brauer open this volume with an overview of the classical disease transmission models of Kermack-McKendrick, including extensions that account for increased levels of epidemiological heterogeneity. Their theoretical tour is followed by the introduction of a simple methodology for the estimation of the basic reproduction number, R0. The use of this methodology is illustrated using regional data for the 1918-1919 and 1968 influenza pandemics.
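One common textbook approach to estimating the basic reproduction number (not necessarily the authors' own method) reads R0 off the early exponential growth rate of case counts: for an SIR-type model with mean infectious period D days, R0 ≈ 1 + r·D, where r is the slope of log-cases against time. A minimal sketch with synthetic data (the case series and infectious period below are invented for illustration):

```python
import math

def growth_rate(cases):
    """Least-squares slope of log(cases) against time in days.

    Early in an epidemic, counts grow roughly like exp(r*t),
    so this slope estimates the exponential growth rate r.
    """
    n = len(cases)
    xs = range(n)
    ys = [math.log(c) for c in cases]
    mx = sum(xs) / n
    my = sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

# Synthetic early-epidemic counts growing by ~20% per day.
cases = [10 * 1.2 ** t for t in range(10)]
r = growth_rate(cases)

D = 3.0           # assumed mean infectious period (days)
R0 = 1 + r * D    # SIR-type approximation
```

With these synthetic data r recovers log(1.2) ≈ 0.18 per day, giving R0 ≈ 1.55; real case data would require handling reporting noise and choosing the exponential-growth window carefully.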
This book presents a framework for developing process querying methods, as well as a comprehensive collection of state-of-the-art methods. Process querying combines concepts from Big Data and Process Modeling and Analysis with Business Process Intelligence and Process Analytics to study techniques for retrieving and manipulating models of real-world and envisioned processes to organize and extract process-related information for subsequent systematic use. The book comprises sixteen contributed chapters distributed over four parts and two auxiliary chapters. The auxiliary chapters by the editor provide an introduction to the area of process querying and a summary of the presented methods, techniques, and applications for process querying. The introductory chapter also examines a process querying framework. The contributed chapters present various process querying methods, including discussions on how they instantiate the framework components, thus supporting the comparison of the methods. The four parts reflect the distinctive features of the methods they include. The first three are devoted to querying event logs generated by IT systems that support business processes at organizations, querying process designs captured in process models, and methods that address querying both event logs and process models. The methods in these three parts usually define a language for specifying process queries. The fourth part discusses methods that operate over inputs other than event logs and process models, e.g., streams of process events, or do not develop dedicated languages for specifying queries, e.g., methods for assessing process model similarity. This book is mainly intended for researchers. All the chapters in this book are contributed by active researchers in the research disciplines of business process management, process mining, and process querying.
They describe state-of-the-art methods for process querying, discuss use cases of process querying, and suggest directions for future work for advancing the field. Yet, also other groups like business or data scientists and other professionals, lecturers, graduate students, and tool vendors will find relevant information for their distinctive needs. Chapter "Celonis PQL: A Query Language for Process Mining" is available open access under a Creative Commons Attribution 4.0 International License via link.springer.com.
The interaction of various ideas from different researchers provides a main impetus to mathematical progress. An important way to make communication possible is through international conferences on more or less specialized topics. The existence of several centers for research in probability and statistics in the eastern part of central Europe - somewhat vaguely described as the Pannonian area - led to the idea of organizing Pannonian Symposia on Mathematical Statistics (PSMS). The second such symposium was held at Bad Tatzmannsdorf, Burgenland (Austria), from 14 to 20 June 1981. About 100 researchers from 13 countries participated in that event and about 70 papers were delivered. Most of the papers dealt with one of the following topics: nonparametric estimation theory, asymptotic theory of estimation, invariance principles, limit theorems and applications. Full versions of selected papers, all presenting new results, are included in this volume. The editors take this opportunity to thank the following institutions for their assistance in making the conference possible: the Provincial Government of Burgenland, the Austrian Ministry for Research and Science, the Burgenland Chamber of Commerce, the Control Data Corporation, the Austrian Society for Statistics and Informatics, the Landeshypothekenbank Burgenland, the Volksbank Oberwart, and the Community and Kurbad AG of Bad Tatzmannsdorf. We are also greatly indebted to all those persons who helped in editing this volume.
"'Et moi, ..., si j'avais su comment en revenir, je n'y serais point alle.' ('And I, ..., had I known how to come back, I would never have gone.') Jules Verne
'One service mathematics has rendered the human race. It has put common sense back where it belongs, on the topmost shelf next to the dusty canister labelled discarded nonsense.' Eric T. Bell
'The series is divergent; therefore we may be able to do something with it.' O. Heaviside
Mathematics is a tool for thought. A highly necessary tool in a world where both feedback and nonlinearities abound. Similarly, all kinds of parts of mathematics serve as tools for other parts and for other sciences. Applying a simple rewriting rule to the quote on the right above one finds such statements as: 'One service topology has rendered mathematical physics ...'; 'One service logic has rendered computer science ...'; 'One service category theory has rendered mathematics ...'. All arguably true. And all statements obtainable this way form part of the raison d'etre of this series."
Medical Risk Prediction Models: With Ties to Machine Learning is a hands-on book for clinicians, epidemiologists, and professional statisticians who need to make or evaluate a statistical prediction model based on data. The subject of the book is the patient's individualized probability of a medical event within a given time horizon. Gerds and Kattan describe the mathematical details of making and evaluating a statistical prediction model in a highly pedagogical manner while avoiding mathematical notation. Read this book when you are in doubt about whether a Cox regression model predicts better than a random survival forest. Features: All you need to know to correctly make an online risk calculator from scratch Discrimination, calibration, and predictive performance with censored data and competing risks R-code and illustrative examples Interpretation of prediction performance via benchmarks Comparison and combination of rival modeling strategies via cross-validation Thomas A. Gerds is a professor at the Biostatistics Unit at the University of Copenhagen and is affiliated with the Danish Heart Foundation. He is the author of several R-packages on CRAN and has taught statistics courses to non-statisticians for many years. Michael W. Kattan is a highly cited author and Chair of the Department of Quantitative Health Sciences at Cleveland Clinic. He is a Fellow of the American Statistical Association and has received two awards from the Society for Medical Decision Making: the Eugene L. Saenger Award for Distinguished Service, and the John M. Eisenberg Award for Practical Application of Medical Decision-Making Research.
The research and its outcomes presented here focus on spatial sampling of agricultural resources. The authors introduce sampling designs and methods for producing accurate estimates of crop production for harvests across different regions and countries. With the help of real and simulated examples performed with the open-source software R, readers will learn about the different phases of spatial data collection. The agricultural data analyzed in this book help policymakers and market stakeholders to monitor the production of agricultural goods and its effects on environment and food safety.
This is the first textbook that allows readers who may be unfamiliar with matrices to understand a variety of multivariate analysis procedures in matrix forms. By explaining which models underlie particular procedures and what objective function is optimized to fit the model to the data, it enables readers to rapidly comprehend multivariate data analysis. Arranged so that readers can intuitively grasp the purposes for which multivariate analysis procedures are used, the book also offers clear explanations of those purposes, with numerical examples preceding the mathematical descriptions. Supporting the modern matrix formulations by highlighting singular value decomposition among theorems in matrix algebra, this book is useful for undergraduate students who have already learned introductory statistics, as well as for graduate students and researchers who are not familiar with matrix-intensive formulations of multivariate data analysis. The book begins by explaining fundamental matrix operations and the matrix expressions of elementary statistics. Then, it offers an introduction to popular multivariate procedures, with each chapter featuring increasingly advanced levels of matrix algebra. Further, the book includes six chapters on advanced procedures, covering advanced matrix operations and recently proposed multivariate procedures, such as sparse estimation, together with a clear explication of the differences between principal components and factor analyses solutions. In a nutshell, this book allows readers to gain an understanding of the latest developments in multivariate data science.
This book discusses supply chain management, focusing on developments within modelling the dynamic behaviour of the supply chain. Aimed at postgraduate students, researchers and practitioners, this book provides an in-depth knowledge of the dynamics of supply chains. Business trends such as the globalisation process and the increase of competition across many industrial sectors have forced companies to concentrate on their core competences and to outsource those activities in which they do not excel. As a consequence, companies no longer produce and distribute their goods in isolation, but as part of a supply chain or supply network, i.e. a set of interrelated companies who ultimately deliver the goods and services to the final customer. Despite the prevalence of supply chains as the primary form of production and distribution, their performance can be seriously hampered by the complex dynamics resulting from the collaboration and coordination (or lack thereof) among their members. This book provides the reader with modelling tools to understand, analyse and improve the dynamic behaviour of supply chains. It assembles seminal works on supply chain models and recent developments on the topic in order to provide a comprehensive, unified vision of the field for researchers and practitioners who wish to grasp the challenges of supply chain management. Aside from presenting the main elements, equations and performance indicators governing the dynamics of a supply chain, the book addresses issues such as the effect of timely and accurately sharing information across members, the influence of restrictions on the productive capacities of their members, or the impact of the variability of the lead times, among others. Furthermore, more complex supply chain structures such as non-serial supply networks or closed-loop supply chains are modelled and discussed.
Relevant managerial insights regarding the causes of supply chain underperformance, as well as avenues to improve their efficiency can be extracted from the resulting models.
This BriefBook is a much extended glossary or a much condensed handbook, depending on the way one looks at it. In encyclopedic format, it covers subjects in statistics, computing, analysis, and related fields, resulting in a book that is both an introduction and a reference for scientists and engineers, especially experimental physicists dealing with data analysis.
Graphs are used to understand the relationship between a regression model and the data to which it is fitted. The authors develop new, highly informative graphs for the analysis of regression data and for the detection of model inadequacies. As well as illustrating new procedures, the authors develop the theory of the models used, particularly for generalized linear models. The book provides statisticians and scientists with a new set of tools for data analysis. Software to produce the plots is available on the authors' website.
For many practical problems, observations are not independent. In this book, the limit behaviour of an important kind of dependent random variables, the so-called mixing random variables, is studied. Many profound results are given, covering recent developments in this subject such as basic properties of mixing variables, powerful probability and moment inequalities, weak convergence and strong convergence (approximation), and the limit behaviour of some statistics with a mixing sample; many useful tools are provided along the way. Audience: This volume will be of interest to researchers and graduate students in the field of probability and statistics whose work involves dependent data (variables).
A non-calculus based introduction for students studying statistics, business, engineering, health sciences, social sciences, and education. It presents a thorough coverage of statistical techniques and includes numerous examples largely drawn from actual research studies. Little mathematical background is required and explanations of important concepts are based on providing intuition using illustrative figures and numerical examples. The first part shows how statistical methods are used in diverse fields in answering important questions, while part two covers descriptive statistics and considers the organisation and summarisation of data. Parts three to five cover probability, statistical inference, and more advanced statistical techniques.