Statistical Decision Problems presents a quick and concise introduction to the theory of risk, deviation and error measures that play a key role in statistical decision problems. It introduces state-of-the-art practical decision making through twenty-one case studies from real-life applications. The case studies cover a broad range of topics, and the authors include links to source code and data, a very helpful resource for the reader. At its core, the text demonstrates how to use different factors to formulate statistical decision problems arising in various risk management applications, such as optimal hedging, portfolio optimization, cash flow matching, classification, and more. The presentation is organized into three parts: selected concepts of statistical decision theory, statistical decision problems, and case studies with Portfolio Safeguard. The text is primarily aimed at practitioners in the areas of risk management, decision making, and statistics. However, the inclusion of a fair bit of mathematical rigor renders this monograph an excellent introduction to the theory of general error, deviation, and risk measures for graduate students. It can be used as supplementary reading for graduate courses including statistical analysis, data mining, stochastic programming, and financial engineering, to name a few. The high level of detail may prove useful to applied mathematicians, engineers, and statisticians interested in modeling and managing risk in various applications.
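As a taste of the kind of risk measure the book formalizes, here is a minimal base-R sketch of sample CVaR (expected shortfall); the function name, the simulated losses, and the level alpha are illustrative assumptions, not taken from the book.

```r
# Sample CVaR (expected shortfall) at level alpha for a vector of losses:
# the mean of the losses at or beyond the alpha-quantile (Value-at-Risk).
sample_cvar <- function(losses, alpha = 0.95) {
  var_alpha <- quantile(losses, probs = alpha, names = FALSE)  # VaR at level alpha
  mean(losses[losses >= var_alpha])                            # average tail loss
}

set.seed(1)
losses <- rnorm(10000)            # illustrative simulated portfolio losses
sample_cvar(losses, alpha = 0.95)
```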
This book covers applied statistics for the social sciences with upper-level undergraduate students in mind. The chapters are based on lecture notes from an introductory statistics course the author has taught for a number of years. The book integrates statistics into the research process, with early chapters covering basic philosophical issues underpinning the process of scientific research. These include the concepts of deductive reasoning and the falsifiability of hypotheses, the development of a research question and hypotheses, and the process of data collection and measurement. Probability theory is then covered extensively with a focus on its role in laying the foundation for statistical reasoning and inference. After illustrating the Central Limit Theorem, later chapters address the core statistical methods used in social science research, including various z and t tests and confidence intervals, nonparametric chi-square tests, one-way analysis of variance, correlation, simple regression, and multiple regression, with a discussion of the key issues involved in thinking about causal processes. Concepts and topics are illustrated using both real and simulated data. The penultimate chapter presents rules and suggestions for the successful presentation of statistics in tabular and graphic formats, and the final chapter offers suggestions for subsequent reading and study.
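The Central Limit Theorem the book builds on can be illustrated with a short simulation; this base-R sketch (sample size, distribution, and replication count chosen purely for illustration) shows sample means of a skewed population approaching normality.

```r
# CLT by simulation: means of samples from a skewed (exponential) population
# look approximately normal even at moderate sample sizes.
set.seed(42)
sample_means <- replicate(5000, mean(rexp(30, rate = 1)))   # 5000 means of n = 30
hist(sample_means, breaks = 40, main = "Sampling distribution of the mean")
qqnorm(sample_means); qqline(sample_means)                  # close to a straight line
```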
by S. Geisser.- Fisher, R.A. (1922) On the Mathematical Foundations of Theoretical Statistics.- by T.W. Anderson.- Hotelling, H. (1931) The Generalization of Student's Ratio.- by E.L. Lehmann.- Neyman, J. and Pearson, E.S. (1933) On the Problem of the Most Efficient Tests of Statistical Hypotheses.- by D.A.S. Fraser.- by D.A.S. Fraser.- by R.E. Barlow.- de Finetti, B. (1937) Foresight: Its Logical Laws, Its Subjective Sources.- by M.R. Leadbetter.- Cramér, H. (1942) On Harmonic Analysis in Certain Functional Spaces.- by R.L. Smith.- Gnedenko, B.V. (1943) On the Limiting Distribution of the Maximum Term in a Random Series.- by P.K. Pathak.- Rao, C.R. (1945) Information and the Accuracy Attainable in the Estimation of Statistical Parameters.- by B.K. Ghosh.- Wald, A. (1945) Sequential Tests of Statistical Hypotheses.- by P.K. Sen.- Hoeffding, W. (1948) A Class of Statistics with Asymptotically Normal Distribution.- by L. Weiss.- Wald, A. (1949) Statistical Decision Functions.- by D.V. Lindley.- by D.V. Lindley.- by I.J. Good.- Robbins, H.E. (1955) An Empirical Bayes Approach to Statistics.- by H.P. Wynn.- Kiefer, J.C. (1959) Optimum Experimental Designs.- by B. Efron.- by B. Efron.- by J.F. Bjørnstad.- Birnbaum, A. (1962) On the Foundations of Statistical Inference.- by W.U. DuMouchel.- Edwards, W., Lindman, H., and Savage, L.J. (1963) Bayesian Statistical Inference for Psychological Research.- by N. Reid.- Fraser, D.A.S. (1966) Structural Probability and a Generalization.- by J. de Leeuw.- Akaike, H. (1973) Information Theory and an Extension of the Maximum Likelihood Principle.
This textbook is the result of the enhancement of several courses on non-equilibrium statistics, stochastic processes, stochastic differential equations, anomalous diffusion and disorder. The target audience includes students of physics, mathematics, biology, chemistry, and engineering at undergraduate and graduate level with a grasp of the mathematics and physics typically covered by the fourth year of an undergraduate course. Less familiar physical and mathematical concepts are described in sections and specific exercises throughout the text, as well as in appendices. Physical-mathematical motivation is the main driving force for the development of this text. It presents the academic topics of probability theory and stochastic processes as well as new educational aspects in the presentation of non-equilibrium statistical theory and stochastic differential equations. In particular it discusses the problem of irreversibility in that context and Fokker-Planck dynamics. An introduction to fluctuations around metastable and unstable points is given. It also describes the relaxation theory of non-stationary Markov systems that are periodic in time. The theory of finite and infinite transport in disordered networks is introduced, with a discussion of anomalous diffusion. Further, it provides the basis for establishing the relationship between quantum aspects of the theory of linear response and the calculation of diffusion coefficients in amorphous systems.
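As a flavor of the stochastic-differential-equation material, here is a hedged base-R sketch of Euler-Maruyama integration of an Ornstein-Uhlenbeck process, whose transition density obeys a Fokker-Planck equation; all parameter values are illustrative.

```r
# Euler-Maruyama integration of dX = -theta * X dt + sigma dW.
set.seed(7)
theta <- 1; sigma <- 0.5; dt <- 0.01; n <- 5000
x <- numeric(n); x[1] <- 2                       # start away from equilibrium
for (i in 2:n) {
  x[i] <- x[i - 1] - theta * x[i - 1] * dt + sigma * sqrt(dt) * rnorm(1)
}
var(tail(x, 1000))   # should approach the stationary value sigma^2 / (2 * theta) = 0.125
```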
With an emphasis on models and techniques, this textbook introduces many of the fundamental concepts of stochastic modeling that are now a vital component of almost every scientific investigation. In particular, emphasis is placed on laying the foundation for solving problems in reliability, insurance, finance, and credit risk. The material has been carefully selected to cover the basic concepts and techniques on each topic, making this an ideal introductory gateway to more advanced learning. With exercises and solutions to selected problems accompanying each chapter, this textbook is for a wide audience including advanced undergraduate and beginning-level graduate students, researchers, and practitioners in mathematics, statistics, engineering, and economics.
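One classic model from the insurance side of this material is the compound Poisson sum for aggregate claims; the following base-R sketch simulates it with made-up parameter values.

```r
# Aggregate claims as a compound Poisson sum: N ~ Poisson(lambda) claims,
# each with an exponential severity of mean mean_claim.
set.seed(3)
lambda <- 10; mean_claim <- 100
total_claims <- replicate(10000,
  sum(rexp(rpois(1, lambda), rate = 1 / mean_claim)))
mean(total_claims)   # close to lambda * mean_claim = 1000
```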
This book provides a rigorous mathematical treatment of the non-linear stochastic filtering problem using modern methods. Particular emphasis is placed on the theoretical analysis of numerical methods for the solution of the filtering problem via particle methods. The book should provide sufficient background to enable study of the recent literature. While no prior knowledge of stochastic filtering is required, readers are assumed to be familiar with measure theory, probability theory and the basics of stochastic processes. Most of the technical results that are required are stated and proved in the appendices. Exercises and solutions are included.
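To give a concrete sense of the particle methods the book analyzes, here is a minimal bootstrap particle filter for a scalar linear-Gaussian model in base R; the model and all parameters are illustrative, and the linear-Gaussian case is chosen only so the output could be cross-checked against a Kalman filter.

```r
# Bootstrap particle filter for x_t = a * x_{t-1} + N(0, q),  y_t = x_t + N(0, r).
set.seed(11)
a <- 0.9; q <- 1; r <- 1; Tn <- 100; N <- 1000

# Simulate a state path and noisy observations.
x <- numeric(Tn); y <- numeric(Tn)
x[1] <- rnorm(1); y[1] <- x[1] + rnorm(1, sd = sqrt(r))
for (t in 2:Tn) {
  x[t] <- a * x[t - 1] + rnorm(1, sd = sqrt(q))
  y[t] <- x[t] + rnorm(1, sd = sqrt(r))
}

# Filter: propagate, weight by the observation likelihood, resample.
particles <- rnorm(N); est <- numeric(Tn)
for (t in 1:Tn) {
  particles <- a * particles + rnorm(N, sd = sqrt(q))           # propagate
  w <- dnorm(y[t], mean = particles, sd = sqrt(r))              # importance weights
  particles <- sample(particles, N, replace = TRUE, prob = w)   # resample
  est[t] <- mean(particles)                                     # filtered mean
}
```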
The papers in this volume are based on lectures given at the IMA Workshop on Grid Generation and Adaptive Algorithms held April 28 - May 2, 1997. Grid generation is a common feature of many computational tasks which require the discretization and representation of space and surfaces. The papers in this volume discuss how the geometric complexity of the physical object or the non-uniform nature of the solution variable makes it impossible to use a uniform grid. Since an efficient grid requires knowledge of the computed solution, many of the papers in this volume address how to construct grids that are adaptively computed with the solution. This volume will be of interest to computational scientists and mathematicians working in a broad variety of applications including fluid mechanics, solid mechanics, materials science, chemistry, and physics. Papers treat residual-based error estimation and adaptivity, repartitioning and load balancing for adaptive meshes, data structures and local refinement methods for conservation laws, adaptivity for hp-finite element methods, the resolution of boundary layers in high Reynolds number flow, adaptive methods for elastostatic contact problems, the full domain partition approach to parallel adaptive refinement, the adaptive solution of phase change problems, and quality indicators for triangular meshes.
Sensor Data Fusion is the process of combining incomplete and imperfect pieces of mutually complementary sensor information in such a way that a better understanding of an underlying real-world phenomenon is achieved. Typically, this insight is either unobtainable otherwise, or the fusion result exceeds what can be produced from a single sensor output in accuracy, reliability, or cost. This book provides an introduction to Sensor Data Fusion as an information technology and as a branch of engineering science and informatics. Part I presents a coherent methodological framework, thus providing the prerequisites for discussing selected applications in Part II of the book. The presentation mirrors the author's views on the subject and emphasizes his own contributions to the development of particular aspects. With some delay, Sensor Data Fusion is likely to develop along lines similar to the evolution of another modern key technology whose origin lies in the military domain, the Internet. It is the author's firm conviction that until now, scientists and engineers have only scratched the surface of the vast range of opportunities for research, engineering, and product development that still waits to be explored: the Internet of the Sensors.
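The core idea, that a fused estimate beats any single sensor, can be made concrete with a small sketch; the following base-R function performs inverse-variance fusion of two independent unbiased readings (the readings, variances, and function name are made up for illustration).

```r
# Fuse two independent, unbiased sensor readings of the same quantity by
# inverse-variance weighting; the fused variance is below either input variance.
fuse <- function(z1, var1, z2, var2) {
  w1 <- 1 / var1; w2 <- 1 / var2
  c(estimate = (w1 * z1 + w2 * z2) / (w1 + w2),
    variance = 1 / (w1 + w2))
}
fuse(z1 = 10.2, var1 = 4, z2 = 9.6, var2 = 1)   # fused variance 0.8 < min(4, 1)
```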
This book provides a contemporary treatment of quantitative economics, with a focus on data science. The book introduces the reader to R and RStudio, and uses Hadley Wickham's tidyverse packages for different parts of the data analysis workflow. After a gentle introduction to R code, the reader's R skills are gradually honed, with the help of "your turn" exercises. At the heart of data science is data, and the book equips the reader to import and wrangle data (including network data). Very early on, the reader will begin using the popular ggplot2 package for visualizing data, even making basic maps. The use of R in understanding functions, simulating difference equations, and carrying out matrix operations is also covered. The book uses Monte Carlo simulation to understand probability and statistical inference, and the bootstrap is introduced. Causal inference is illuminated using simulation, data graphs, and R code for applications with real economic examples, covering experiments, matching, regression discontinuity, difference-in-differences, and instrumental variables. The interplay of growth-related data and models is presented before the book introduces the reader to time series data analysis with graphs, simulation, and examples. Lastly, two computationally intensive methods, generalized additive models and random forests (an important and versatile machine learning method), are introduced intuitively with applications. The book will be of great interest to economists, whether students, teachers, or researchers, who want to learn R. It will help economics students gain an intuitive appreciation of applied economics and enjoy engaging with the material actively, while also equipping them with key data science skills.
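The bootstrap the book introduces can be sketched in a few lines of base R; the simulated income data and replication count below are illustrative assumptions, not examples from the book.

```r
# Percentile bootstrap confidence interval for the mean of skewed "economic" data.
set.seed(123)
income <- rlnorm(200, meanlog = 10, sdlog = 0.6)            # simulated incomes
boot_means <- replicate(5000, mean(sample(income, replace = TRUE)))
quantile(boot_means, c(0.025, 0.975))                       # 95% percentile CI
```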
This volume presents the latest advances and trends in nonparametric statistics, and gathers selected and peer-reviewed contributions from the 3rd Conference of the International Society for Nonparametric Statistics (ISNPS), held in Avignon, France on June 11-16, 2016. It covers a broad range of nonparametric statistical methods, from density estimation, survey sampling, resampling methods, kernel methods and extreme values, to statistical learning and classification, both in the standard i.i.d. case and for dependent data, including big data. The International Society for Nonparametric Statistics is uniquely global, and its international conferences are intended to foster the exchange of ideas and the latest advances among researchers from around the world, in cooperation with established statistical societies such as the Institute of Mathematical Statistics, the Bernoulli Society and the International Statistical Institute. The 3rd ISNPS conference in Avignon attracted more than 400 researchers from around the globe, and contributed to the further development and dissemination of nonparametric statistics knowledge.
This book is devoted to the scientific legacy of Professor Victor Ambartsumian, one of the most distinguished scientists of the last century. He obtained essential results not only in astrophysics, but also in mathematics and theoretical physics; one can recall his fundamental results concerning the Sturm-Liouville inverse problem, quantum field theory, the structure of atomic nuclei, etc. Nevertheless, his revolutionary ideas in astrophysics and the corresponding results are more widely known and have predetermined the further development of this science. His concepts of activity phenomena and the evolution of objects, in particular the determination of the age of our Galaxy, the idea of ongoing star formation in stellar associations, and the activity of galactic nuclei, proved exceptionally fruitful. These directions are being elaborated at many astronomical centers all over the world.
This book presents a philosophical approach to probability and probabilistic thinking, considering the underpinnings of probabilistic reasoning and modeling, which effectively underlie everything in data science. The ultimate goal is to call into question many standard tenets and lay the philosophical and probabilistic groundwork and infrastructure for statistical modeling. It is the first book devoted to the philosophy of data aimed at working scientists and calls for a new consideration in the practice of probability and statistics to eliminate what has been referred to as the "Cult of Statistical Significance." The book explains the philosophy of these ideas and not the mathematics, though there are a handful of mathematical examples. The topics are logically laid out, starting with basic philosophy as related to probability, statistics, and science, stepping through the key probabilistic ideas and concepts, and ending with statistical models. Its jargon-free approach asserts that standard methods, such as out-of-the-box regression, cannot help in discovering cause. This new way of looking at uncertainty ties together disparate fields (probability, physics, biology, the "soft" sciences, computer science) because each aims at discovering the causes of effects. It broadens the understanding beyond frequentist and Bayesian methods to propose a Third Way of modeling.
This second edition sees the light three years after the first one: too short a time to feel seriously concerned to redesign the entire book, but sufficient to be challenged by the prospect of sharpening our investigation of the workings of econometric dynamic models and to be inclined to change the title of the new edition by dropping the "Topics in" of the former edition. After considerable soul searching we agreed to include several results related to topics already covered, as well as additional sections devoted to new and sophisticated techniques, which hinge mostly on the latest research work on linear matrix polynomials by the second author. This explains the growth of chapter one and the deeper insight into representation theorems in the last chapter of the book. The role of the second chapter is to provide a bridge between the mathematical techniques in the backstage and the econometric profiles in the forefront of dynamic modelling. For this purpose, we decided to add a new section where the reader can find the stochastic rationale of vector autoregressive specifications in econometrics. The third (and last) chapter improves on that of the first edition by reaping the fruits of the thorough analytic equipment previously drawn up.
This book provides an overview of the current state-of-the-art of nonlinear time series analysis, richly illustrated with examples, pseudocode algorithms and real-world applications. Avoiding a "theorem-proof" format, it shows concrete applications on a variety of empirical time series. The book can be used in graduate courses in nonlinear time series, and also includes interesting material for more advanced readers. Though it is largely self-contained, readers require an understanding of basic linear time series concepts, Markov chains and Monte Carlo simulation methods. The book covers time-domain and frequency-domain methods for the analysis of both univariate and multivariate (vector) time series. It makes a clear distinction between parametric models on the one hand, and semi- and nonparametric models/methods on the other. This offers the reader the option of concentrating exclusively on one of these nonlinear time series analysis methods. To make the book as user friendly as possible, major supporting concepts and specialized tables are appended at the end of every chapter. In addition, each chapter concludes with a set of key terms and concepts, as well as a summary of the main findings. Lastly, the book offers numerous theoretical and empirical exercises, with answers provided by the author in an extensive solutions manual.
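As an example of the parametric nonlinear models such a book treats, this base-R sketch simulates a two-regime self-exciting threshold autoregressive (SETAR) process; the coefficients and threshold are illustrative.

```r
# Simulate a two-regime SETAR model: the autoregressive coefficient switches
# depending on whether the previous value is below or above the threshold 0.
set.seed(2024)
n <- 500; x <- numeric(n)
for (t in 2:n) {
  x[t] <- if (x[t - 1] <= 0) 0.6 * x[t - 1] + rnorm(1)    # lower regime
          else              -0.5 * x[t - 1] + rnorm(1)    # upper regime
}
plot.ts(x, main = "SETAR(2; 1, 1) sample path")
```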
The statistics profession is at a unique point in history. The need for valid statistical tools is greater than ever; data sets are massive, often comprising hundreds of thousands of measurements for a single subject. The field is ready to move towards clear, objective benchmarks under which tools can be evaluated. Targeted learning allows (1) the full generalization and utilization of cross-validation as an estimator selection tool, so that the subjective choices formerly made by humans are now made by the machine, and (2) the targeting of the fit of the probability distribution of the data toward the target parameter representing the scientific question of interest.
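Point (1) can be made concrete: the sketch below uses 5-fold cross-validation in base R to select a polynomial degree, letting the data rather than the analyst make the choice. The data-generating process and candidate set are illustrative, and this is only the cross-validation idea that targeted learning generalizes, not the targeted learning methodology itself.

```r
# Estimator selection by 5-fold cross-validation: pick the polynomial degree
# with the smallest out-of-fold mean squared error.
set.seed(5)
n <- 200; x <- runif(n, -2, 2); y <- sin(x) + rnorm(n, sd = 0.3)
folds <- sample(rep(1:5, length.out = n))
cv_mse <- sapply(1:6, function(degree) {
  mean(sapply(1:5, function(k) {
    fit <- lm(y ~ poly(x, degree), subset = folds != k)
    mean((y[folds == k] - predict(fit, data.frame(x = x[folds == k])))^2)
  }))
})
which.min(cv_mse)   # the degree selected by the data, not by the analyst
```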
This monograph highlights the connection between the theoretical work done by research statisticians and the impact that work has on various industries. Drawing on decades of experience as an industry consultant, the author details how his contributions have had a lasting impact on the field of statistics as a whole. Aspiring statisticians and data scientists will be motivated to find practical applications for their knowledge, as they see how such work can yield breakthroughs in their field. Each chapter highlights a consulting position the author held that resulted in a significant contribution to statistical theory. Topics covered include tracking processes with change points, estimating common parameters, crossing fields with absorption points, military operations research, sampling surveys, stochastic visibility in random fields, reliability analysis, applied probability, and more. Notable advancements within each of these topics are presented by analyzing the problems facing various industries, and how solving those problems contributed to the development of the field. The Career of a Research Statistician is ideal for researchers, graduate students, or industry professionals working in statistics. It will be particularly useful for up-and-coming statisticians interested in the promising connection between academia and industry.
This book proposes an efficient methodology that estimates energy system uncertainty and predicts Remaining Useful Life (RUL) accurately, with significantly reduced RUL prediction uncertainty. Renewable and non-renewable sources of energy are being used to supply the demands of societies worldwide. These sources are mainly thermo-chemo-electro-mechanical systems that are subject to uncertainty in future loading conditions, material properties, process noise, and other design parameters. The book informs the reader of existing and new ideas that will be implemented in RUL prediction of energy systems in the future. The book provides case studies, illustrations, graphs, and charts. Its chapters consider engineering, reliability, prognostics and health management, probabilistic multibody dynamical analysis, peridynamic and finite-element modelling, computer science, and mathematics.
This book describes recent trends in growth curve modelling research in various subject areas, both theoretical and applied. It explains and explores the growth curve model as a valuable tool for gaining insights into several research topics of interest to academics and practitioners alike. The book's primary goal is to disseminate applications of the growth curve model to real-world problems, and to address related theoretical issues. The book will be of interest to a broad readership: for applied statisticians, it illustrates the importance of growth curve modelling as applied to actual field data; for more theoretically inclined statisticians, it highlights a number of theoretical issues that warrant further investigation.
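As a minimal example of a growth curve analysis of the kind the book collects, the following R sketch fits a logistic growth curve with nls() and the self-starting SSlogis model; the data are simulated for illustration and are not from the book.

```r
# Fit a logistic growth curve, size = Asym / (1 + exp((xmid - time) / scal)).
set.seed(9)
time <- 1:20
size <- 100 / (1 + exp((10 - time) / 2)) + rnorm(20, sd = 3)   # true Asym = 100
fit <- nls(size ~ SSlogis(time, Asym, xmid, scal))
coef(fit)   # estimated asymptote, inflection point, and scale
```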
The purpose of this book is to present a comprehensive account of the different definitions of stochastic integration for fractional Brownian motion (fBm), and to give applications of the resulting theory. Particular emphasis is placed on studying the relations between the different approaches. Readers are assumed to be familiar with probability theory and stochastic analysis, although the mathematical techniques used in the book are thoroughly exposed and some of the necessary prerequisites, such as classical white noise theory and fractional calculus, are recalled in the appendices. This book will be a valuable reference for graduate students and researchers in mathematics, biology, meteorology, physics, engineering and finance.
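For a first computational handle on fBm, here is a hedged base-R sketch that simulates it exactly on a grid via the Cholesky factor of its covariance function; the grid size and Hurst parameter are illustrative choices.

```r
# Exact simulation of fractional Brownian motion on a grid using
# Cov(B_s, B_t) = (s^(2H) + t^(2H) - |s - t|^(2H)) / 2.
set.seed(17)
H <- 0.7; n <- 100; times <- (1:n) / n
G <- outer(times, times,
           function(s, u) 0.5 * (s^(2 * H) + u^(2 * H) - abs(s - u)^(2 * H)))
fbm <- t(chol(G)) %*% rnorm(n)   # lower-triangular factor times white noise
plot(times, fbm, type = "l", main = "Fractional Brownian motion, H = 0.7")
```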
Machine learning is concerned with the analysis of large data sets and multiple variables. It is also often more sensitive than traditional statistical methods for analyzing small data sets. The first and second volumes reviewed subjects like optimal scaling, neural networks, factor analysis, partial least squares, discriminant analysis, canonical analysis, fuzzy modeling, various clustering models, support vector machines, Bayesian networks, discrete wavelet analysis, association rule learning, anomaly detection, and correspondence analysis. This third volume addresses more advanced methods and includes subjects like evolutionary programming, stochastic methods, complex sampling, optional binning, Newton's methods, decision trees, and other subjects. Both the theoretical bases and the step-by-step analyses are described for the benefit of non-mathematical readers. Each chapter can be studied without the need to consult other chapters. Traditional statistical tests are sometimes precursors to machine learning methods, and they are also sometimes used as contrast tests. To those wishing to obtain more knowledge of them, we recommend additionally studying (1) Statistics Applied to Clinical Studies 5th Edition 2012, (2) SPSS for Starters Part One and Two 2012, and (3) Statistical Analysis of Clinical Data on a Pocket Calculator Part One and Two 2012, written by the same authors and published by Springer, New York.
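As one concrete instance of the tree methods such a volume covers, the following R sketch fits a classification tree with the rpart package (assumed installed) on a built-in data set; it is a generic illustration, not code from the book.

```r
# Classification tree on the built-in iris data.
library(rpart)
fit <- rpart(Species ~ ., data = iris, method = "class")
print(fit)                                          # the fitted splitting rules
table(predict(fit, type = "class"), iris$Species)   # in-sample confusion matrix
```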
The subject of the book is advanced statistical analyses for quantitative research synthesis (meta-analysis), and selected practical issues relating to research synthesis that are not covered in detail in the many existing introductory books on research synthesis (or meta-analysis). Complex statistical issues are arising more frequently as the primary research that is summarized in quantitative syntheses itself becomes more complex, and as researchers who are conducting meta-analyses become more ambitious in the questions they wish to address. Also, as researchers have gained more experience in conducting research syntheses, several key issues have persisted and now appear fundamental to the enterprise of summarizing research. Specifically, the book describes multivariate analyses for several indices commonly used in meta-analysis (e.g., correlations, effect sizes, proportions and/or odds ratios), outlines how to do power analysis for meta-analysis (again for each of the different kinds of study outcome indices), and examines issues around research quality and research design and their roles in synthesis. For each of the statistical topics, the book examines the different possible statistical models (i.e., fixed, random, and mixed models) that could be adopted by a researcher. In dealing with the issues of study quality and research design it covers a number of specific topics that are of broad concern to research synthesists. In many fields a current issue is how to make sense of results when studies using several different designs appear in a research literature (e.g., Morris & Deshon, 1997, 2002). In education and other social sciences a critical aspect of this issue is how one might incorporate qualitative (e.g., case study) research within a synthesis. In medicine, related issues concern whether and how to summarize observational studies, and whether they should be combined with randomized controlled trials (or even if they should be combined at all). Each topic includes a worked example (e.g., for the statistical analyses) and/or a detailed description of a published research synthesis that deals with the practical (non-statistical) issues covered.
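The fixed-effect model mentioned above reduces to inverse-variance weighting, which can be sketched in a few lines of base R; the effect sizes and variances below are hypothetical.

```r
# Fixed-effect (inverse-variance) meta-analysis: pool study effect sizes y
# with within-study variances v.
y <- c(0.30, 0.12, 0.45, 0.21)   # hypothetical study effect sizes
v <- c(0.04, 0.02, 0.09, 0.03)   # their sampling variances
w <- 1 / v                       # inverse-variance weights
pooled <- sum(w * y) / sum(w)
se <- sqrt(1 / sum(w))
c(pooled = pooled, lower = pooled - 1.96 * se, upper = pooled + 1.96 * se)
```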
This compilation focuses on the theory and conceptualisation of statistics and probability in the early years and the development of young children's (ages 3-10) understanding of data and chance. It provides a comprehensive overview of cutting-edge international research on the development of young learners' reasoning about data and chance in formal, informal, and non-formal educational contexts. The authors share insights into young children's statistical and probabilistic reasoning and provide early childhood educators and researchers with a wealth of illustrative examples, suggestions, and practical strategies on how to address the challenges arising from the introduction of statistical and probabilistic concepts in pre-school and school curricula. This collection will inform practices in research and teaching by providing a detailed account of current best practices, challenges, and issues, and of future trends and directions in early statistical and probabilistic learning worldwide. Further, it will contribute to future research and theory building by addressing theoretical, epistemological, and methodological considerations regarding the design of probability and statistics learning environments for young children.
Aims and Scope This book is both an introductory textbook and a research monograph on modeling the statistical structure of natural images. In very simple terms, "natural images" are photographs of the typical environment where we live. In this book, their statistical structure is described using a number of statistical models whose parameters are estimated from image samples. Our main motivation for exploring natural image statistics is computational modeling of biological visual systems. A theoretical framework which is gaining more and more support considers the properties of the visual system to be reflections of the statistical structure of natural images because of evolutionary adaptation processes. Another motivation for natural image statistics research is in computer science and engineering, where it helps in the development of better image processing and computer vision methods. While research on natural image statistics has been growing rapidly since the mid-1990s, no attempt has been made to cover the field in a single book, providing a unified view of the different models and approaches. This book attempts to do just that. Furthermore, our aim is to provide an accessible introduction to the field for students in related disciplines.
This book presents the R software environment as a key tool for oceanographic computations and provides a rationale for using R over the more widely used tools of the field such as MATLAB. Kelley provides a general introduction to R before introducing the 'oce' package. This package greatly simplifies oceanographic analysis by handling the details of discipline-specific file formats, calculations, and plots. Designed for real-world application and developed with open-source protocols, oce supports a broad range of practical work. Generic functions take care of general operations such as subsetting and plotting data, while specialized functions address more specific tasks such as tidal decomposition, hydrographic analysis, and ADCP coordinate transformation. In addition, the package makes it easy to document work, because its functions automatically update processing logs stored within its data objects. Kelley teaches key R functions using classic examples from the history of oceanography, specifically the work of Alfred Redfield, Gordon Riley, J. Tuzo Wilson, and Walter Munk. Acknowledging the pervasive popularity of MATLAB, the book provides advice to users who would like to switch to R. Including a suite of real-life applications and over 100 exercises and solutions, the treatment is ideal for oceanographers, technicians, and students who want to add R to their list of tools for oceanographic analysis.
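A minimal session of the kind the book teaches might look like the following; it assumes the oce package is installed and uses its bundled ctd sample profile.

```r
# Load oce, inspect its bundled CTD station, and draw the standard plot.
library(oce)
data(ctd)      # a sample CTD profile shipped with the package
summary(ctd)   # metadata plus the object's processing log
plot(ctd)      # default multi-panel CTD plot
```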