This volume is composed of peer-reviewed papers that have developed from the First Conference of the International Society for Nonparametric Statistics (ISNPS). This inaugural conference took place in Chalkidiki, Greece, June 15-19, 2012. It was organized with the co-sponsorship of the IMS, the ISI, and other organizations. M.G. Akritas, S.N. Lahiri, and D.N. Politis are the first executive committee members of ISNPS, and the editors of this volume. ISNPS has a distinguished Advisory Committee that includes Professors R. Beran, P. Bickel, R. Carroll, D. Cook, P. Hall, R. Johnson, B. Lindsay, E. Parzen, P. Robinson, M. Rosenblatt, G. Roussas, T. Subba Rao, and G. Wahba. The Charting Committee of ISNPS consists of more than 50 prominent researchers from all over the world. The chapters in this volume bring forth recent advances and trends in several areas of nonparametric statistics. In this way, the volume facilitates the exchange of research ideas, promotes collaboration among researchers from all over the world, and contributes to the further development of the field. The conference program included over 250 talks, including special invited talks, plenary talks, and contributed talks on all areas of nonparametric statistics. Out of these talks, some of the most pertinent ones have been refereed and developed into chapters that share both research and developments in the field.
Recent years have seen significant advances in the use of risk analysis in many government agencies and private corporations. These advances are reflected both in the state of practice of risk analysis and in the status of governmental requirements and industry standards. Because current risk and reliability models are often used to support regulatory decisions, it is critical that the inference methods used in these models be robust and technically sound. The goal of Bayesian Inference for Probabilistic Risk Assessment is to provide a Bayesian foundation for framing probabilistic problems and performing inference on these problems. It is aimed at scientists and engineers who perform or review risk analyses, and it provides an analytical structure for combining data and information from various sources to generate estimates of the parameters of uncertainty distributions used in risk and reliability models. Inference in the book employs a modern computational approach known as Markov chain Monte Carlo (MCMC). MCMC methods were described in the early 1950s in research into Monte Carlo sampling at Los Alamos. Recently, with the advance of computing power and improved analysis algorithms, MCMC is increasingly being used for a wide range of Bayesian inference problems in a variety of disciplines. MCMC is effectively (although not literally) numerical (Monte Carlo) integration by way of Markov chains. Inference is performed by sampling from a specially constructed Markov chain, based upon the inference problem, whose stationary distribution is the target posterior, until convergence to that posterior is achieved. The MCMC approach may be implemented using custom-written routines or existing general-purpose commercial or open-source software. This book uses an open-source program called OpenBUGS (the open-source successor to WinBUGS) to solve the inference problems that are described. A powerful feature of OpenBUGS is its automatic selection of an appropriate MCMC sampling scheme for a given problem. The approach taken in this book is to provide analysis "building blocks" that can be modified, combined, or used as-is to solve a variety of challenging problems. The MCMC approach is implemented via textual scripts similar to a macro-type programming language. Accompanying each script is a graphical Bayesian network illustrating the elements of the script and the overall inference problem being solved. The book also covers the important topic of MCMC convergence.
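To make the sampling idea concrete, here is a minimal random-walk Metropolis sampler in R. This is an illustrative sketch only, not one of the book's OpenBUGS scripts; the standard-normal target and the proposal width are assumptions chosen for the example.

    # Minimal random-walk Metropolis sampler (illustrative sketch).
    # log_post: an unnormalized log-posterior; a standard normal here for concreteness.
    log_post <- function(theta) dnorm(theta, mean = 0, sd = 1, log = TRUE)

    n_iter <- 10000
    draws  <- numeric(n_iter)
    theta  <- 0                                # starting value
    for (i in seq_len(n_iter)) {
      proposal <- theta + rnorm(1, sd = 0.5)   # symmetric random-walk proposal
      # accept with probability min(1, posterior ratio)
      if (log(runif(1)) < log_post(proposal) - log_post(theta)) theta <- proposal
      draws[i] <- theta
    }
    hist(draws[-(1:1000)])                     # drop burn-in before summarizing

After convergence, the retained draws approximate the posterior distribution, which is exactly the property exploited for parameter estimation in risk models.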
Since the early eighties, Ali Suleyman Ustunel has been one of the main contributors to the field of Malliavin calculus. In a workshop held in Paris in June 2010, several prominent researchers gave exciting talks in honor of his 60th birthday. The present volume includes scientific contributions from this workshop.
In this thesis, the author develops numerical techniques for tracking and characterising the convoluted nodal lines in three-dimensional space, analysing their geometry on the small scale, as well as their global fractality and topological complexity, including knotting, on the large scale. The work is highly visual, and illustrated with many beautiful diagrams revealing this unanticipated aspect of the physics of waves. Linear superpositions of waves create interference patterns, which means in some places they strengthen one another, while in others they completely cancel each other out. This latter phenomenon occurs on 'vortex lines' in three dimensions. In general wave superpositions modelling, e.g., chaotic cavity modes, these vortex lines form dense tangles that have never been visualised on the large scale before, and cannot be analysed mathematically by any known techniques.
This book provides a contemporary treatment of quantitative economics, with a focus on data science. The book introduces the reader to R and RStudio, and uses Hadley Wickham's tidyverse packages for different parts of the data analysis workflow. After a gentle introduction to R code, the reader's R skills are gradually honed, with the help of "your turn" exercises. At the heart of data science is data, and the book equips the reader to import and wrangle data (including network data). Very early on, the reader will begin using the popular ggplot2 package for visualizing data, even making basic maps. The use of R in understanding functions, simulating difference equations, and carrying out matrix operations is also covered. The book uses Monte Carlo simulation to understand probability and statistical inference, and the bootstrap is introduced. Causal inference is illuminated using simulation, data graphs, and R code for applications with real economic examples, covering experiments, matching, regression discontinuity, difference-in-differences, and instrumental variables. The interplay of growth-related data and models is presented before the book introduces the reader to time series data analysis with graphs, simulation, and examples. Lastly, two computationally intensive methods, generalized additive models and random forests (an important and versatile machine learning method), are introduced intuitively with applications. The book will be of great interest to economists, whether students, teachers, or researchers, who want to learn R. It will help economics students gain an intuitive appreciation of applied economics and enjoy engaging with the material actively, while also equipping them with key data science skills.
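For a flavour of the tidyverse workflow the blurb describes, here is a first wrangle-and-plot step in R using the built-in mtcars data; this is an illustrative sketch, not an excerpt from the book.

    # Wrangle with dplyr, then visualize with ggplot2.
    library(dplyr)
    library(ggplot2)

    mtcars %>%
      mutate(cyl = factor(cyl)) %>%             # treat cylinder count as a category
      ggplot(aes(x = wt, y = mpg, colour = cyl)) +
      geom_point() +
      labs(x = "Weight (1000 lbs)", y = "Miles per gallon")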
Written for professionals looking to build data science and analytics capabilities within their organizations, as well as those who wish to expand their knowledge and advance their careers in the data space, this book shows how to build a fit-for-purpose data science capability in a manner that avoids the most common pitfalls. Most data strategy works 'top-down' by providing technical solutions to perceived organizational needs; this book instead uses emergent design, an evolutionary approach that increases the chances of successful outcomes while minimising upfront investment.
This book presents selected peer-reviewed contributions from the International Work-Conference on Time Series, ITISE 2017, held in Granada, Spain, September 18-20, 2017. It discusses topics in time series analysis and forecasting, including advanced mathematical methodology, computational intelligence methods for time series, dimensionality reduction and similarity measures, econometric models, energy time series forecasting, forecasting in real problems, online learning in time series as well as high-dimensional and complex/big data time series. The series of ITISE conferences provides a forum for scientists, engineers, educators and students to discuss the latest ideas and implementations in the foundations, theory, models and applications in the field of time series analysis and forecasting. It focuses on interdisciplinary and multidisciplinary research encompassing computer science, mathematics, statistics and econometrics.
Presents a unique study of Integrative Problem-Solving (IPS). The consideration of 'Decadence' is essential in the scientific study of environmental and other problems and their rigorous solution, because the broad context within which the problems emerge can affect their solution. Stochastic reasoning underlies the conceptual and methodological framework of IPS, and its formulation has a mathematical life of its own that accounts for the multidisciplinarity of real world problems, the multisourced uncertainties characterizing their solution, and the different thinking modes of the people involved. Only by interpolating between the full range of disciplines (including stochastic mathematics, physical science, neuropsychology, philosophy, and sociology) and the associated thinking modes can scientists arrive at a satisfactory account of problem-solving, and be able to distinguish between a technically complete problem-solution, and a solution that has social impact.
Almost all published results on multivariate t-distributions (spanning the last 50 years) are collected in this comprehensive reference. The book begins with theoretical probabilistic results and then presents statistical aspects. Generalizations and applications are dealt with in the final chapters, including material on estimation and regression models of special value for practitioners in statistics and economics. A comprehensive bibliography of over 350 references is included.
This book introduces the basic methodologies for successful data analytics. Matrix optimization and approximation are explained in detail and extensively applied to dimensionality reduction by principal component analysis and multidimensional scaling. Diffusion maps and spectral clustering are derived as powerful tools. The methodological overlap between data science and machine learning is emphasized by demonstrating how data science is used for classification as well as supervised and unsupervised learning.
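As a small taste of the dimensionality-reduction methods mentioned above, principal component analysis takes one line in R; this sketch uses the built-in iris data and is not code from the book.

    # PCA on the four iris measurements, standardized first.
    pca <- prcomp(iris[, 1:4], center = TRUE, scale. = TRUE)
    summary(pca)                      # proportion of variance per component
    scores <- pca$x[, 1:2]            # project observations onto the first two PCs
    plot(scores, col = iris$Species)  # the three species largely separate in PC space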
This book presents the proceedings of the international conference Particle Systems and Partial Differential Equations I, which took place at the Centre of Mathematics of the University of Minho, Braga, Portugal, from the 5th to the 7th of December, 2012. The purpose of the conference was to bring together world leaders to discuss their topics of expertise and to present some of their latest research developments in those fields. Among the participants were researchers in probability, partial differential equations and kinetic theory. The aim of the meeting was to present to a varied public the subject of interacting particle systems, its motivation from the viewpoint of physics and its relation with partial differential equations or kinetic theory, and to stimulate discussions and possibly new collaborations among researchers with different backgrounds. The book contains lecture notes written by Francois Golse on the derivation of hydrodynamic equations (compressible and incompressible Euler and Navier-Stokes) from the Boltzmann equation, and several short papers written by some of the participants in the conference. Among the topics covered by the short papers are hydrodynamic limits; fluctuations; phase transitions; motions of shocks and antishocks in exclusion processes; large number asymptotics for systems with self-consistent coupling; quasi-variational inequalities; unique continuation properties for PDEs; and others. The book will benefit probabilists, analysts and mathematicians who are interested in statistical physics, stochastic processes, partial differential equations and kinetic theory, along with physicists.
This book discusses a link between statistical theory and quantum theory based on the concept of epistemic processes. The latter are processes, such as statistical investigations or quantum mechanical measurements, that can be used to obtain knowledge about something. Various topics in quantum theory are addressed, including the construction of a Hilbert space from reasonable assumptions and an interpretation of quantum states. Separate derivations of the Born formula and the one-dimensional Schroedinger equation are given. In concrete terms, a Hilbert space can be constructed under some technical assumptions associated with situations where there are two conceptual variables that can be seen as maximally accessible. Then to every accessible conceptual variable there corresponds an operator on this Hilbert space, and if the variables take a finite number of values, the eigenspaces/eigenvectors of these operators correspond to specific questions in nature together with sharp answers to these questions. This paves a new way to the foundations of quantum theory. The resulting interpretation of quantum mechanics is related to Herve Zwirn's recent Convivial Solipsism, but it also has some relations to Quantum Bayesianism and to Rovelli's relational quantum mechanics. Niels Bohr's concept of complementarity plays an important role. Philosophical implications of this approach to quantum theory are discussed, including consequences for macroscopic settings. The book will benefit a broad readership, including physicists and statisticians interested in the foundations of their disciplines, philosophers of science and graduate students, and anyone with a reasonably good background in mathematics and an open mind.
This book is a tribute to Professor Pedro Gil, who created the Department of Statistics, OR and TM at the University of Oviedo and served as President of the Spanish Society of Statistics and OR (SEIO). In more than eighty original contributions, it illustrates the extent to which Mathematics can help manage uncertainty, a factor that is inherent to real life. Today it goes without saying that, in order to model experiments and systems and to analyze related outcomes and data, it is necessary to consider formal ideas and develop scientific approaches and techniques for dealing with uncertainty. Mathematics is crucial in this endeavor, as this book demonstrates. As Professor Pedro Gil highlighted twenty years ago, there are several well-known mathematical branches for this purpose, including Mathematics of chance (Probability and Statistics), Mathematics of communication (Information Theory), and Mathematics of imprecision (Fuzzy Sets Theory and others). These branches often intertwine, since different sources of uncertainty can coexist, and they are not exhaustive. While most of the papers presented here address the three aforementioned fields, some hail from other Mathematical disciplines such as Operations Research; others, in turn, put the spotlight on real-world studies and applications. The intended audience of this book is mainly statisticians, mathematicians and computer scientists, but practitioners in these areas will certainly also find the book a very interesting read.
The revised and expanded edition of this textbook presents the concepts and applications of random processes with the same illuminating simplicity as its first edition, but with the notable addition of substantial modern material on biological modeling. While still treating many important problems in fields such as engineering and mathematical physics, the book also focuses on the highly relevant topics of cancerous mutations, influenza evolution, drug resistance, and immune response. The models used elegantly apply various classical stochastic models presented earlier in the text, and exercises are included throughout to reinforce essential concepts. The second edition of Classical and Spatial Stochastic Processes is suitable as a textbook for courses in stochastic processes at the advanced-undergraduate and graduate levels, or as a self-study resource for researchers and practitioners in mathematics, engineering, physics, and mathematical biology.

Reviews of the first edition:

"An appetizing textbook for a first course in stochastic processes. It guides the reader in a very clever manner from classical ideas to some of the most interesting modern results. ... All essential facts are presented with clear proofs, illustrated by beautiful examples. ... The book is well organized, has informative chapter summaries, and presents interesting exercises. The clear proofs are concentrated at the ends of the chapters making it easy to find the results. The style is a good balance of mathematical rigorosity and user-friendly explanation." - Biometric Journal

"This small book is well-written and well-organized. ... Only simple results are treated ... but at the same time many ideas needed for more complicated cases are hidden and in fact very close. The second part is a really elementary introduction to the area of spatial processes. ... All sections are easily readable and it is rather tentative for the reviewer to learn them more deeply by organizing a course based on this book. The reader can be really surprised seeing how simple the lectures on these complicated topics can be. At the same time such important questions as phase transitions and their properties for some models and the estimates for certain critical values are discussed rigorously. ... This is indeed a first course on stochastic processes and also a masterful introduction to some modern chapters of the theory." - Zentralblatt Math
The research articles in this volume cover timely quantitative psychology topics, including new methods in item response theory, computerized adaptive testing, cognitive diagnostic modeling, and psychological scaling. Topics within general quantitative methodology include structural equation modeling, factor analysis, causal modeling, mediation, missing data methods, and longitudinal data analysis. These methods will appeal, in particular, to researchers in the social sciences. The 80th annual meeting took place in Beijing, China, between the 12th and 16th of July, 2015. Previous volumes to showcase work from the Psychometric Society's Meeting are New Developments in Quantitative Psychology: Presentations from the 77th Annual Psychometric Society Meeting (Springer, 2013), Quantitative Psychology Research: The 78th Annual Meeting of the Psychometric Society (Springer, 2015), and Quantitative Psychology Research: The 79th Annual Meeting of the Psychometric Society, Wisconsin, USA, 2014 (Springer, 2015).
This book offers a practical guide to Agent Based economic modeling, adopting a "learning by doing" approach to help the reader master the fundamental tools needed to create and analyze Agent Based models. After providing the reader with a basic "toolkit" for Agent Based modeling, it presents and discusses didactic models of real financial and economic systems in detail. While stressing the main features and advantages of the bottom-up perspective inherent to this approach, the book also highlights the logic and practical steps that characterize the model building procedure. A detailed description of the underlying codes, developed using R and C, is also provided. In addition, each didactic model is accompanied by exercises and applications designed to promote active learning on the part of the reader. Following the same approach, the book also presents several complementary tools required for the analysis and validation of the models, such as sensitivity experiments, calibration exercises, economic network analysis and statistical distribution analysis. By the end of the book, the reader will have gained a deeper understanding of the Agent Based methodology and be prepared to use the fundamental techniques required to start developing their own economic models. Accordingly, "Economics with Heterogeneous Interacting Agents" will be of particular interest to graduate and postgraduate students, as well as to academic institutions and lecturers interested in including an overview of the AB approach to economic modeling in their courses.
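To convey the bottom-up logic of the approach, here is a toy exchange economy in R in which agents meet pairwise and transfer one unit of wealth at random; this is a minimal sketch under ad hoc assumptions, not one of the book's didactic models.

    # Toy agent-based model: random pairwise wealth transfers.
    set.seed(1)
    n_agents <- 100
    wealth   <- rep(10, n_agents)      # identical endowments at the start
    for (step in seq_len(10000)) {
      pair <- sample(n_agents, 2)      # two agents meet at random
      if (wealth[pair[1]] > 0) {       # the giver must be solvent
        wealth[pair[1]] <- wealth[pair[1]] - 1
        wealth[pair[2]] <- wealth[pair[2]] + 1
      }
    }
    hist(wealth)   # a skewed distribution emerges from symmetric micro rules

Even this crude model shows the hallmark of the methodology: aggregate regularities emerging from simple interacting agents.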
This monograph highlights the connection between the theoretical work done by research statisticians and the impact that work has on various industries. Drawing on decades of experience as an industry consultant, the author details how his contributions have had a lasting impact on the field of statistics as a whole. Aspiring statisticians and data scientists will be motivated to find practical applications for their knowledge, as they see how such work can yield breakthroughs in their field. Each chapter highlights a consulting position the author held that resulted in a significant contribution to statistical theory. Topics covered include tracking processes with change points, estimating common parameters, crossing fields with absorption points, military operations research, sampling surveys, stochastic visibility in random fields, reliability analysis, applied probability, and more. Notable advancements within each of these topics are presented by analyzing the problems facing various industries, and how solving those problems contributed to the development of the field. The Career of a Research Statistician is ideal for researchers, graduate students, or industry professionals working in statistics. It will be particularly useful for up-and-coming statisticians interested in the promising connection between academia and industry.
This volume presents the latest advances and trends in nonparametric statistics, and gathers selected and peer-reviewed contributions from the 3rd Conference of the International Society for Nonparametric Statistics (ISNPS), held in Avignon, France on June 11-16, 2016. It covers a broad range of nonparametric statistical methods, from density estimation, survey sampling, resampling methods, kernel methods and extreme values, to statistical learning and classification, both in the standard i.i.d. case and for dependent data, including big data. The International Society for Nonparametric Statistics is uniquely global, and its international conferences are intended to foster the exchange of ideas and the latest advances among researchers from around the world, in cooperation with established statistical societies such as the Institute of Mathematical Statistics, the Bernoulli Society and the International Statistical Institute. The 3rd ISNPS conference in Avignon attracted more than 400 researchers from around the globe, and contributed to the further development and dissemination of nonparametric statistics knowledge.
Aside from distribution theory, projections and the singular value decomposition (SVD) are the two most important concepts for understanding the basic mechanism of multivariate analysis. The former underlies the least squares estimation in regression analysis, which is essentially a projection of one subspace onto another, and the latter underlies principal component analysis, which seeks to find a subspace that captures the largest variability in the original space. This book is about projections and SVD. A thorough discussion of generalized inverse (g-inverse) matrices is also given because it is closely related to the former. The book provides systematic and in-depth accounts of these concepts from a unified viewpoint of linear transformations on finite-dimensional vector spaces. More specifically, it shows that projection matrices (projectors) and g-inverse matrices can be defined in various ways so that a vector space is decomposed into a direct-sum of (disjoint) subspaces. Projection Matrices, Generalized Inverse Matrices, and Singular Value Decomposition will be useful for researchers, practitioners, and students in applied mathematics, statistics, engineering, behaviormetrics, and other fields.
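A quick numerical check of the connection in R (an illustration, not an excerpt from the book): the left singular vectors of a matrix yield the orthogonal projector onto its column space.

    # Projection onto the column space of X via the SVD.
    set.seed(1)
    X <- matrix(rnorm(12), nrow = 4, ncol = 3)
    U <- svd(X)$u            # orthonormal basis for col(X)
    P <- U %*% t(U)          # projection matrix onto col(X)
    all.equal(P %*% P, P)    # idempotent, as any projector must be
    all.equal(P %*% X, X)    # X is unchanged by projection onto its own column space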
This textbook is the result of the enhancement of several courses on non-equilibrium statistics, stochastic processes, stochastic differential equations, anomalous diffusion and disorder. The target audience includes students of physics, mathematics, biology, chemistry, and engineering at undergraduate and graduate level with a grasp of the basic elements of mathematics and physics of the fourth year of a typical undergraduate course. Less familiar physical and mathematical concepts are described in sections and specific exercises throughout the text, as well as in appendices. Physical-mathematical motivation is the main driving force for the development of this text. It presents the academic topics of probability theory and stochastic processes as well as new educational aspects in the presentation of non-equilibrium statistical theory and stochastic differential equations. In particular, it discusses the problem of irreversibility in that context and the Fokker-Planck dynamics. An introduction to fluctuations around metastable and unstable points is given. It also describes the relaxation theory of non-stationary Markov systems that are periodic in time. The theory of finite and infinite transport in disordered networks, with a discussion of anomalous diffusion, is introduced. Further, it provides the basis for establishing the relationship between quantum aspects of the theory of linear response and the calculation of diffusion coefficients in amorphous systems.
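For readers wondering how the stochastic differential equations covered here are simulated in practice, an Euler-Maruyama discretization of an Ornstein-Uhlenbeck process takes a few lines of R; the parameter values are arbitrary and the sketch is not taken from the book.

    # Euler-Maruyama scheme for dX = -theta * X dt + sigma dW.
    set.seed(42)
    theta <- 1; sigma <- 0.5
    dt <- 0.01; n <- 5000
    x <- numeric(n)
    x[1] <- 2                          # start away from equilibrium
    for (i in 2:n) {
      x[i] <- x[i - 1] - theta * x[i - 1] * dt + sigma * sqrt(dt) * rnorm(1)
    }
    plot(seq_len(n) * dt, x, type = "l", xlab = "time", ylab = "X(t)")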
Baseball fans are often passionate about statistics, but true numbers fanatics want to go beyond the 'baseball card' stats and make comparisons through other objective means. "Sabermetrics" uses algebra to expand on statistics and measure a player's value to his team and how he ranks among players of different eras. The mathematical models in this book, a follow-up to "Understanding Sabermetrics" (2008), define the measures, supply examples, and provide practice problems for readers.
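As an example of the kind of algebraic measure sabermetrics starts from (a standard formula, not one of this book's models), slugging percentage weights each hit by the bases it earns; here it is as an R function:

    # Slugging percentage: total bases divided by at-bats.
    slugging <- function(singles, doubles, triples, homers, at_bats) {
      (singles + 2 * doubles + 3 * triples + 4 * homers) / at_bats
    }
    slugging(singles = 120, doubles = 30, triples = 5, homers = 25, at_bats = 550)
    # [1] 0.5363636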
This book provides an overview of the application of statistical methods to problems in metrology, with emphasis on modelling measurement processes and quantifying their associated uncertainties. It covers everything from fundamentals to more advanced special topics, each illustrated with case studies from the authors' work in the Nuclear Security Enterprise (NSE). The material provides readers with a solid understanding of how to apply the techniques to metrology studies in a wide variety of contexts. The volume offers particular attention to uncertainty in decision making, design of experiments (DOEx) and curve fitting, along with special topics such as statistical process control (SPC), assessment of binary measurement systems, and new results on sample size selection in metrology studies. The methodologies presented are supported with R scripts where appropriate, and the code has been made available for readers to use in their own applications. Designed to promote collaboration between statistics and metrology, this book will be of use to practitioners of metrology as well as students and researchers in statistics and engineering disciplines.
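As a tiny taste of the curve-fitting material, here is a straight-line calibration fit in R with an uncertainty statement for the slope; the numbers are invented for illustration and the snippet is not from the book's NSE case studies.

    # Fit a linear calibration curve and quantify slope uncertainty.
    reference <- c(0, 5, 10, 15, 20)            # known standard values
    response  <- c(0.1, 5.2, 9.8, 15.3, 19.9)   # instrument readings
    fit <- lm(response ~ reference)
    summary(fit)$coefficients                   # estimates with standard errors
    confint(fit, "reference", level = 0.95)     # 95% interval for the slope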
In real-life decision-making situations it is necessary to make decisions with incomplete information, often with uncertain results. In "Decision-Making Under Uncertainty," Dr. Chacko applies his years of statistical research and experience to the analysis of twenty-four real-life decision-making situations, both those with few data points (e.g., the Cuban Missile Crisis) and those with many data points (e.g., aspirin for heart attack prevention). These situations encompass decision-making in a variety of business, social and political, physical and biological, and military environments. Though different, all of these have one characteristic in common: their outcomes are uncertain, unknown, and unknowable. Chacko demonstrates how the decision-maker can reduce uncertainty by choosing probable outcomes using the statistical methods he introduces. This detailed volume develops standard statistical concepts (t, chi-squared, the normal distribution, ANOVA) as well as less familiar concepts (logical probability, subjective probability, Bayesian inference, Penalty for Non-Fulfillment, Bluff-Threats Matrix, etc.). Chacko also offers a thorough discussion of the underlying theoretical principles. The end of each chapter contains a set of questions, three quarters of which focus on concepts, formulation, conclusions, resource commitments, and caveats; only one quarter involve computations. Ideal for the practitioner, the work is also designed to serve as the primary text for graduate or advanced undergraduate courses in statistics and decision science.
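To illustrate one of the standard tools the book develops (a generic R example, not one of its twenty-four case studies), a two-sample t-test compares simulated treatment and control outcomes:

    # Welch two-sample t-test on simulated data.
    set.seed(7)
    control   <- rnorm(30, mean = 100, sd = 15)
    treatment <- rnorm(30, mean = 110, sd = 15)
    t.test(treatment, control)   # a small p-value points to a real difference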
This book presents the R software environment as a key tool for oceanographic computations and provides a rationale for using R over the more widely-used tools of the field such as MATLAB. Kelley provides a general introduction to R before introducing the 'oce' package. This package greatly simplifies oceanographic analysis by handling the details of discipline-specific file formats, calculations, and plots. Designed for real-world application and developed with open-source protocols, oce supports a broad range of practical work. Generic functions take care of general operations such as subsetting and plotting data, while specialized functions address more specific tasks such as tidal decomposition, hydrographic analysis, and ADCP coordinate transformation. In addition, the package makes it easy to document work, because its functions automatically update processing logs stored within its data objects. Kelley teaches key R functions using classic examples from the history of oceanography, specifically the work of Alfred Redfield, Gordon Riley, J. Tuzo Wilson, and Walter Munk. Acknowledging the pervasive popularity of MATLAB, the book provides advice to users who would like to switch to R. Including a suite of real-life applications and over 100 exercises and solutions, the treatment is ideal for oceanographers, technicians, and students who want to add R to their list of tools for oceanographic analysis.
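A first session with the package might look like the following; this is a minimal sketch that assumes oce's documented built-in CTD sample object, and it is not an excerpt from the book.

    # Load oce and inspect its bundled CTD cast.
    library(oce)
    data(ctd)       # sample CTD station shipped with the package
    summary(ctd)    # metadata plus the processing log stored in the object
    plot(ctd)       # default multi-panel hydrographic plot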
Statistical Decision Problems presents a quick and concise introduction to the theory of risk, deviation and error measures that play a key role in statistical decision problems. It introduces state-of-the-art practical decision making through twenty-one case studies from real-life applications. The case studies cover a broad range of topics, and the authors include links to source code and data, a very helpful tool for the reader. At its core, the text demonstrates how to use different factors to formulate statistical decision problems arising in various risk management applications, such as optimal hedging, portfolio optimization, cash flow matching, classification, and more. The presentation is organized into three parts: selected concepts of statistical decision theory, statistical decision problems, and case studies with Portfolio Safeguard. The text is primarily aimed at practitioners in the areas of risk management, decision making, and statistics. However, the inclusion of a fair bit of mathematical rigor renders this monograph an excellent introduction to the theory of general error, deviation, and risk measures for graduate students. It can be used as supplementary reading for graduate courses including statistical analysis, data mining, stochastic programming, and financial engineering, to name a few. The high level of detail may prove useful to applied mathematicians, engineers, and statisticians interested in modeling and managing risk in various applications.
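As a stylized instance of the portfolio problems treated in the case studies (a closed-form sketch with invented numbers, not a Portfolio Safeguard run), the minimum-variance weights for two assets follow directly from their covariance:

    # Minimum-variance weights for a two-asset portfolio (closed form).
    sigma1 <- 0.20; sigma2 <- 0.10; rho <- 0.3   # assumed volatilities and correlation
    cov12  <- rho * sigma1 * sigma2
    w1 <- (sigma2^2 - cov12) / (sigma1^2 + sigma2^2 - 2 * cov12)
    c(w1 = w1, w2 = 1 - w1)                      # weights sum to one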
You may like...
Time Series Analysis - With Applications… by Jonathan D. Cryer, Kung-Sik Chan (Hardcover, R2,549)
Quantitative statistical techniques by A. Swanepoel, F.L. Vivier, … (Paperback)
Introductory Statistics Achieve access… by Stephen Kokoska (Mixed media product, R2,284)
The Practice of Statistics for Business… by David S Moore, George P. McCabe, … (Mixed media product, R2,284)
Numbers, Hypotheses & Conclusions - A… by Colin Tredoux, Kevin Durrheim (Paperback)
Advances in Quantum Monte Carlo by Shigenori Tanaka, Stuart M. Rothstein, … (Hardcover, R5,411)