Seasonal patterns have been found in a remarkable range of health conditions, including birth defects, respiratory infections and cardiovascular disease. Accurately estimating the size and timing of seasonal peaks in disease incidence is an aid to understanding the causes and possibly to developing interventions. With global warming increasing the intensity of seasonal weather patterns around the world, a review of the methods for estimating seasonal effects on health is timely. This is the first book on statistical methods for seasonal data written for a health audience. It describes methods for a range of outcomes (including continuous, count and binomial data) and demonstrates appropriate techniques for summarising and modelling these data. It has a practical focus and uses interesting examples to motivate and illustrate the methods. The statistical procedures and example data sets are available in an R package called season.
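The blurb's core technique, estimating the size and timing of a seasonal peak, can be illustrated with harmonic (cosinor) regression. The sketch below is a generic pure-Python illustration, not code from the book's R package season; the monthly counts, baseline and amplitude are made-up assumptions.

```python
import math

# Hypothetical monthly disease counts with a peak at month index 1:
# baseline 100, seasonal amplitude 30 (illustrative numbers only).
months = list(range(12))
counts = [100 + 30 * math.cos(2 * math.pi * (t - 1) / 12) for t in months]

# Harmonic (cosinor) regression: over one full, evenly spaced cycle the
# least-squares coefficients of cos and sin have closed forms.
omega = 2 * math.pi / 12
a = (2 / 12) * sum(y * math.cos(omega * t) for t, y in zip(months, counts))
b = (2 / 12) * sum(y * math.sin(omega * t) for t, y in zip(months, counts))

amplitude = math.hypot(a, b)                 # size of the seasonal peak
phase = math.atan2(b, a) % (2 * math.pi)     # timing of the peak (radians)
peak_month = round(phase / omega) % 12

print(amplitude, peak_month)  # roughly 30.0, peak at month index 1
```

Because the cos and sin regressors are exactly orthogonal over a full cycle, the fit recovers the amplitude and peak month exactly here; with noisy real data the same formulas give least-squares estimates.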
A thought-provoking look at statistical learning theory and its role in understanding human learning and inductive reasoning A joint endeavor from leading researchers in the fields of philosophy and electrical engineering, "An Elementary Introduction to Statistical Learning Theory" is a comprehensive and accessible primer on the rapidly evolving fields of statistical pattern recognition and statistical learning theory. Explaining these areas at a level and in a way that is not often found in other books on the topic, the authors present the basic theory behind contemporary machine learning and uniquely utilize its foundations as a framework for philosophical thinking about inductive inference. Promoting the fundamental goal of statistical learning, knowing what is achievable and what is not, this book demonstrates the value of a systematic methodology when used along with the needed techniques for evaluating the performance of a learning system. First, an introduction to machine learning is presented that includes brief discussions of applications such as image recognition, speech recognition, medical diagnostics, and statistical arbitrage. To enhance accessibility, two chapters on relevant aspects of probability theory are provided. Subsequent chapters feature coverage of topics such as the pattern recognition problem, optimal Bayes decision rule, the nearest neighbor rule, kernel rules, neural networks, support vector machines, and boosting. Appendices throughout the book explore the relationship between the discussed material and related topics from mathematics, philosophy, psychology, and statistics, drawing insightful connections between problems in these areas and statistical learning theory. All chapters conclude with a summary section, a set of practice questions, and a reference section that supplies historical notes and additional resources for further study.
"An Elementary Introduction to Statistical Learning Theory" is an excellent book for courses on statistical learning theory, pattern recognition, and machine learning at the upper-undergraduate and graduate levels. It also serves as an introductory reference for researchers and practitioners in the fields of engineering, computer science, philosophy, and cognitive science who would like to further their knowledge of the topic.
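Among the rules the blurb lists, the nearest neighbor rule is simple enough to sketch in a few lines. The data points and labels below are illustrative inventions, not an example from the book.

```python
# A minimal nearest-neighbor (1-NN) classifier on made-up 2-D points.
def nearest_neighbor(train, query):
    """Return the label of the training point closest to `query`."""
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    # min over (point, label) pairs by squared distance to the query
    return min(train, key=lambda item: dist2(item[0], query))[1]

train = [((0.0, 0.0), "A"), ((0.2, 0.1), "A"),
         ((1.0, 1.0), "B"), ((0.9, 1.2), "B")]

print(nearest_neighbor(train, (0.1, 0.0)))  # "A"
print(nearest_neighbor(train, (1.1, 0.9)))  # "B"
```

The classifier stores the training set verbatim and predicts by proximity alone, which is exactly why its error-rate guarantees relative to the optimal Bayes rule are a classic topic in statistical learning theory.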
On May 27-31, 1985, a series of symposia was held at The University of Western Ontario, London, Canada, to celebrate the 70th birthday of Professor V. M. Joshi. These symposia were chosen to reflect Professor Joshi's research interests as well as areas of expertise in statistical science among faculty in the Departments of Statistical and Actuarial Sciences, Economics, Epidemiology and Biostatistics, and Philosophy. From these symposia, the six volumes which comprise the "Joshi Festschrift" have arisen. The 117 articles in this work reflect the broad interests and high quality of research of those who attended our conference. We would like to thank all of the contributors for their superb cooperation in helping us to complete this project. Our deepest gratitude must go to the three people who have spent so much of their time in the past year typing these volumes: Jackie Bell, Lise Constant, and Sandy Tarnowski. This work has been printed from "camera ready" copy produced by our Vax 785 computer and QMS Lasergraphix printers, using the text processing software TEX. At the initiation of this project, we were neophytes in the use of this system. Thank you, Jackie, Lise, and Sandy, for having the persistence and dedication needed to complete this undertaking.
This book introduces data-driven remaining useful life prognosis techniques, and shows how to utilize the condition monitoring data to predict the remaining useful life of stochastic degrading systems and to schedule maintenance and logistics plans. It is also the first book that describes the basic data-driven remaining useful life prognosis theory systematically and in detail. The emphasis of the book is on the stochastic models, methods and applications employed in remaining useful life prognosis. It includes a wealth of degradation monitoring experiment data, practical prognosis methods for remaining useful life in various cases, and a series of applications incorporated into prognostic information in decision-making, such as maintenance-related decisions and ordering spare parts. It also highlights the latest advances in data-driven remaining useful life prognosis techniques, especially in the contexts of adaptive prognosis for linear stochastic degrading systems, nonlinear degradation modeling based prognosis, residual storage life prognosis, and prognostic information-based decision-making.
Criticism is the habitus of the contemplative intellect, whereby we try to recognize with probability the genuine quality of a literary work by using appropriate aids and rules. In so doing, certain general and particular points must be considered. The art of interpretation or hermeneutics is the habitus of the contemplative intellect of probing into the sense of a somewhat special text by using logical rules and suitable means. Note: Hermeneutics differs from criticism as the part does from the whole. Antonius Gvilielmus Amo Afer (1727) There is no such thing as absolute truth. At best it is a subjective criterion, but one based upon valuation. Unfortunately, too many people place their fate in the hands of the subjective without properly evaluating it. Arnold A. Kaufmann and Madan M. Gupta The development of cost benefit analysis and the theory of fuzzy decision was divided into two inter-dependent structures of identification and measurement theory on one hand and fuzzy value theory on the other. Each of them has sub-theories that constitute a complete logical system.
This book presents the first part of a planned two-volume series devoted to a systematic exposition of some recent developments in the theory of discrete-time Markov control processes (MCPs). Interest is mainly confined to MCPs with Borel state and control (or action) spaces, and possibly unbounded costs and noncompact control constraint sets. MCPs are a class of stochastic control problems, also known as Markov decision processes, controlled Markov processes, or stochastic dynamic programs; sometimes, particularly when the state space is a countable set, they are also called Markov decision (or controlled Markov) chains. Regardless of the name used, MCPs appear in many fields, for example, engineering, economics, operations research, statistics, renewable and nonrenewable resource management, (control of) epidemics, etc. However, most of the literature (say, at least 90%) is concentrated on MCPs for which (a) the state space is a countable set, and/or (b) the costs-per-stage are bounded, and/or (c) the control constraint sets are compact. But curiously enough, the most widely used control model in engineering and economics--namely the LQ (Linear system/Quadratic cost) model--satisfies none of these conditions. Moreover, when dealing with "partially observable" systems, a standard approach is to transform them into equivalent "completely observable" systems in a larger state space (in fact, a space of probability measures), which is uncountable even if the original state process is finite-valued.
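The simplest instance of the MCPs described above is a finite-state, finite-action Markov decision process solved by value iteration. The toy model below, with two states, deterministic transitions and invented rewards, is an illustrative sketch, not an example from the book (whose focus is on Borel spaces and unbounded costs).

```python
# Value iteration on a toy two-state Markov decision process.
GAMMA = 0.9
# transitions[s][a] = (next_state, reward); deterministic for simplicity,
# with illustrative reward numbers.
transitions = {
    0: {"stay": (0, 0.0), "go": (1, 1.0)},
    1: {"stay": (1, 2.0), "go": (0, 0.0)},
}

V = {0: 0.0, 1: 0.0}
for _ in range(500):  # the Bellman operator is a contraction: geometric convergence
    V = {s: max(r + GAMMA * V[s2] for (s2, r) in acts.values())
         for s, acts in transitions.items()}

# Greedy policy with respect to the converged value function.
policy = {s: max(acts, key=lambda a: acts[a][1] + GAMMA * V[acts[a][0]])
          for s, acts in transitions.items()}
print(V, policy)  # V[1] -> 2 / (1 - 0.9) = 20, V[0] -> 1 + 0.9 * 20 = 19
```

Here the optimal policy is to move to state 1 and stay there, collecting the discounted reward stream 2/(1 - 0.9) = 20; the countable-state, bounded-cost setting this sketch lives in is exactly the special case the book moves beyond.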
This book explains harmonisation techniques that can be used in survey research to align national systems of categories and definitions in such a way that comparison is possible across countries and cultures. It provides an introduction to instruments for collecting internationally comparable data of interest to survey researchers. It shows how seven key demographic and socio-economic variables can be harmonised and employed in European comparative surveys. The seven key variables discussed in detail are: education, occupation, income, activity status, private household, ethnicity, and family. These demographic and socio-economic variables are background variables that no survey can do without. They frequently have the greatest explanatory capacity to analyse social structures, and are a mirror image of the way societies are organised nationally. This becomes readily apparent when one attempts, for example, to compare national education systems. Moreover, a comparison of the national definitions of concepts such as "private household" reveals several different historically and culturally shaped underlying concepts. Indeed, some European countries do not even have a word for "private household". Hence such national definitions and categories cannot simply be translated from one culture to another. They must be harmonised.
When I wrote the book Quantitative Sociodynamics, it was an early attempt to make methods from statistical physics and complex systems theory fruitful for the modeling and understanding of social phenomena. Unfortunately, the first edition appeared at a quite prohibitive price. This was one reason to make these chapters available again by a new edition. The other reason is that, in the meantime, many of the methods discussed in this book are more and more used in a variety of different fields. Among the ideas worked out in this book are: a statistical theory of binary social interactions,(1) a mathematical formulation of social field theory, which is the basis of social force models,(2) a microscopic foundation of evolutionary game theory, based on what is known today as 'proportional imitation rule', a stochastic treatment of interactions in evolutionary game theory, and a model for the self-organization of behavioral conventions in a coordination game.(3) It, therefore, appeared reasonable to make this book available again, but at a more affordable price. To keep its original character, the translation of this book, which (1) D. Helbing, Interrelations between stochastic equations for systems with pair interactions. Physica A 181, 29-52 (1992); D. Helbing, Boltzmann-like and Boltzmann-Fokker-Planck equations as a foundation of behavioral models. Physica A 196, 546-573 (1993). (2) D. Helbing, Boltzmann-like and Boltzmann-Fokker-Planck equations as a foundation of behavioral models. Physica A 196, 546-573 (1993); D.
This book chronicles Donald Burkholder's thirty-five year study of martingales and its consequences. Here are some of the highlights.
"Mathematics of Uncertainty" provides the basic ideas and foundations of uncertainty, covering the fields of mathematics in which uncertainty, variability, imprecision and fuzziness of data are of importance. This introductory book describes the basic ideas of the mathematical fields of uncertainty from simple interpolation to wavelets, from error propagation to fuzzy sets and neural networks. The book presents the treatment of problems of interpolation and approximation, as well as observation fuzziness which can essentially influence the preciseness and reliability of statements on functional relationships. The notions of randomness and probability are examined as a model for the variability of observation and measurement results. Besides these basic ideas the book also presents methods of qualitative data analysis such as cluster analysis and classification, and of evaluation of functional relationships such as regression analysis and quantitative fuzzy data analysis.
"Decision Systems and Non-stochastic Randomness" is the first systematic presentation and mathematical formalization (including existence theorems) of the statistical regularities of non-stochastic randomness. The results presented in this book extend the capabilities of probability theory by providing mathematical techniques that allow for the description of uncertain events that do not fit standard stochastic models. The book demonstrates how non-stochastic regularities can be incorporated into decision theory and information theory, offering an alternative to the subjective probability approach to uncertainty and the unified approach to the measurement of information. This book is intended for statisticians, mathematicians, engineers, economists or other researchers interested in non-stochastic modeling and decision theory.
This book deals with methods to evaluate scientific productivity. In the book statistical methods, deterministic and stochastic models and numerous indexes are discussed that will help the reader to understand the nonlinear science dynamics and to be able to develop or construct systems for appropriate evaluation of research productivity and management of research groups and organizations. The dynamics of science structures and systems is complex, and the evaluation of research productivity requires a combination of qualitative and quantitative methods and measures. The book has three parts. The first part is devoted to mathematical models describing the importance of science for economic growth and systems for the evaluation of research organizations of different size. The second part contains descriptions and discussions of numerous indexes for the evaluation of the productivity of researchers and groups of researchers of different size (up to the comparison of research productivities of research communities of nations). Part three contains discussions of non-Gaussian laws connected to scientific productivity and presents various deterministic and stochastic models of science dynamics and research productivity. The book shows that many famous fat tail distributions as well as many deterministic and stochastic models and processes, which are well known from physics, theory of extreme events or population dynamics, occur also in the description of dynamics of scientific systems and in the description of the characteristics of research productivity. This is not a surprise as scientific systems are nonlinear, open and dissipative.
In the theory of random processes the term 'ergodicity' has a wide variety of meanings. In the theory of stationary processes ergodicity is often identified with metric transitivity. In the theory of Markov processes, the word ergodic is applied to theorems of both the existence of transition probability limits and on the convergence of mean value ratios of these transition probabilities. In addition, there are also 'ergodic theorems' on the convergence of distributions of shifted random processes. In this monograph, the term 'ergodic' is understood in its original sense, i.e. the one it had when it was first adopted by the theory of random processes from statistical mechanics and Boltzmann's theory of gases. In this book, an ergodic theorem refers to any statement about the existence of a mean value with respect to trajectories of a random process taken with respect to time. The author takes the view that problems of the existence of time means, and their equality to phase means, are interesting without any assumptions about the distribution of the random process.
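The monograph's central object, a time mean taken along the trajectory of a random process and its agreement with the phase (stationary) mean, can be demonstrated on the smallest possible example. The two-state Markov chain and transition probabilities below are illustrative assumptions, not the book's material.

```python
import random

# Time mean vs. phase mean for a two-state Markov chain.
# P[s][0] is the probability of moving to state 0 from state s.
P = {0: [0.9, 0.1], 1: [0.2, 0.8]}
# Stationary (phase) mean of the indicator of state 0:
# pi_0 = 0.2 / (0.1 + 0.2) = 2/3.

random.seed(0)
state, visits_to_0, n = 0, 0, 200_000
for _ in range(n):
    visits_to_0 += (state == 0)
    state = 0 if random.random() < P[state][0] else 1

time_mean = visits_to_0 / n
print(time_mean)  # close to the phase mean 2/3
```

For this ergodic chain the time average along one long trajectory converges to the stationary probability, which is precisely the kind of statement, existence of the time mean and its equality to the phase mean, that the book studies without stationarity assumptions.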
This volume presents an eclectic mix of original research articles in areas covering the analysis of ordered data, stochastic modeling and biostatistics. These areas were featured in a conference held at the University of Texas at Dallas from March 7 to 9, 2014 in honor of Professor H. N. Nagaraja's 60th birthday and his distinguished contributions to statistics. The articles were written by leading experts who were invited to contribute to the volume from among the conference participants. The volume is intended for all researchers with an interest in order statistics, distribution theory, analysis of censored data, stochastic modeling, time series analysis, and statistical methods for the health sciences, including statistical genetics.
This book presents the latest findings on network theory and agent-based modeling of economic and financial phenomena. In this context, the economy is depicted as a complex system consisting of heterogeneous agents that interact through evolving networks; the aggregate behavior of the economy arises out of billions of small-scale interactions that take place via countless economic agents. The book focuses on analytical modeling, and on the econometric and statistical analysis of the properties emerging from microscopic interactions. In particular, it highlights the latest empirical and theoretical advances, helping readers understand economic and financial networks, as well as new work on modeling behavior using rich, agent-based frameworks. Innovatively, the book combines observational and theoretical insights in the form of networks and agent-based models, both of which have proved to be extremely valuable in understanding non-linear and evolving complex systems. Given its scope, the book will capture the interest of graduate students and researchers from various disciplines (e.g. economics, computer science, physics, and applied mathematics) whose work involves the domain of complexity theory.
This book of problems has been designed to accompany an undergraduate course in probability. The only prerequisite is basic algebra and calculus. Each chapter is divided into three parts: Problems, Hints, and Solutions. To make the book self-contained all problem sections include expository material. Definitions and statements of important results are interlaced with relevant problems. The problems have been selected to motivate abstract definitions by concrete examples and to lead in manageable steps towards general results, as well as to provide exercises based on the issues and techniques introduced in each chapter. The book is intended as a challenge to involve students as active participants in the course.
Intended for both researchers and practitioners, this book will be a valuable resource for studying and applying recent robust statistical methods. It contains up-to-date research results in the theory of robust statistics, treats computational aspects and algorithms, and shows interesting and new applications.
The interaction between mathematicians, statisticians and econometricians working in actuarial sciences and finance is producing numerous meaningful scientific results. This volume introduces new ideas, in the form of four-page papers, presented at the international conference Mathematical and Statistical Methods for Actuarial Sciences and Finance (MAF), held at Universidad Carlos III de Madrid (Spain), 4th-6th April 2018. The book covers a wide variety of subjects in actuarial science and financial fields, all discussed in the context of the cooperation between the three quantitative approaches. The topics include: actuarial models; analysis of high frequency financial data; behavioural finance; carbon and green finance; credit risk methods and models; dynamic optimization in finance; financial econometrics; forecasting of dynamical actuarial and financial phenomena; fund performance evaluation; insurance portfolio risk analysis; interest rate models; longevity risk; machine learning and soft-computing in finance; management in insurance business; models and methods for financial time series analysis; models for financial derivatives; multivariate techniques for financial markets analysis; optimization in insurance; pricing; probability in actuarial sciences, insurance and finance; real world finance; risk management; solvency analysis; sovereign risk; static and dynamic portfolio selection and management; trading systems. This book is a valuable resource for academics, PhD students, practitioners, professionals and researchers, and is also of interest to other readers with quantitative background knowledge.
A Levy process is a continuous-time analogue of a random walk, and as such, is at the cradle of modern theories of stochastic processes. Martingales, Markov processes, and diffusions are extensions and generalizations of these processes. In the past, representatives of the Levy class were considered most useful for applications to either Brownian motion or the Poisson process. Nowadays the need for modeling jumps, bursts, extremes and other irregular behavior of phenomena in nature and society has led to a renaissance of the theory of general Levy processes. Researchers and practitioners in fields as diverse as physics, meteorology, statistics, insurance, and finance have rediscovered the simplicity of Levy processes and their enormous flexibility in modeling tails, dependence and path behavior. This volume, with an excellent introductory preface, describes the state-of-the-art of this rapidly evolving subject with special emphasis on the non-Brownian world. Leading experts present surveys of recent developments, or focus on some most promising applications. Despite its special character, every topic is aimed at the non-specialist, keen on learning about the new exciting face of a rather aged class of processes. An extensive bibliography at the end of each article makes this an invaluable comprehensive reference text. For the researcher and graduate student, every article contains open problems and points out directions for future research. The accessible nature of the work makes this an ideal introductory text for graduate seminars in applied probability, stochastic processes, physics, finance, and telecommunications, and a unique guide to the world of Levy processes.
Statistical inferential methods are widely used in the study of various physical, biological, social, and other phenomena. Parametric estimation is one such method. Although there are many books which consider problems of statistical point estimation, this volume is the first to be devoted solely to the problem of unbiased estimation. It contains three chapters dealing, respectively, with the theory of point statistical estimation, techniques for constructing unbiased estimators, and applications of unbiased estimation theory. These chapters are followed by a comprehensive appendix which classifies and lists, in the form of tables, all known results relating to unbiased estimators of parameters for univariate distributions. About one thousand minimum variance unbiased estimators are listed. The volume also contains numerous examples and exercises. This volume will serve as a handbook on point unbiased estimation for researchers whose work involves statistics. It can also be recommended as a supplementary text for graduate students.
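The idea of unbiasedness that the volume is devoted to can be shown concretely with the sample variance, whose n - 1 divisor (Bessel's correction) makes it unbiased. The tiny two-point population below is an illustrative construction, not one of the book's tabulated estimators, but the enumeration is exact rather than simulated.

```python
from itertools import product

# Exact check of unbiasedness over all equally likely i.i.d. samples of
# size 2 from the two-point population {0, 2} (mean 1, variance 1).
population = [0, 2]
sigma2 = 1.0

def s2_unbiased(x):           # divisor n - 1 (Bessel's correction)
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / (len(x) - 1)

def s2_biased(x):             # divisor n
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / len(x)

samples = list(product(population, repeat=2))   # 4 equally likely samples
e_unbiased = sum(s2_unbiased(s) for s in samples) / len(samples)
e_biased = sum(s2_biased(s) for s in samples) / len(samples)
print(e_unbiased, e_biased)  # 1.0 (equals sigma2) vs. 0.5
```

Averaging the estimator over every possible sample reproduces the population variance exactly for the n - 1 version, while the naive n divisor underestimates it, which is the defining property of an unbiased estimator.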
This textbook is an approachable introduction to statistical analysis using matrix algebra. Prior knowledge of matrix algebra is not necessary. Advanced topics are easy to follow through analyses that were performed on an open-source spreadsheet using a few built-in functions. These topics include ordinary linear regression, as well as maximum likelihood estimation, matrix decompositions, nonparametric smoothers and penalized cubic splines. Each data set (1) contains a limited number of observations to encourage readers to do the calculations themselves, and (2) tells a coherent story based on statistical significance and confidence intervals. In this way, students will learn how the numbers were generated and how they can be used to make cogent arguments about everyday matters. This textbook is designed for use in upper level undergraduate courses or first year graduate courses. The first chapter introduces students to linear equations, then covers matrix algebra, focusing on three essential operations: sum of squares, the determinant, and the inverse. These operations are explained in everyday language, and their calculations are demonstrated using concrete examples. The remaining chapters build on these operations, progressing from simple linear regression to mediational models with bootstrapped standard errors.
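The three matrix operations the first chapter highlights, sums of squares, the determinant, and the inverse, are exactly what is needed to fit an ordinary linear regression. The sketch below is a pure-Python illustration with invented data that fit the line exactly; it is not taken from the book's spreadsheet examples.

```python
# Ordinary linear regression via beta = (X'X)^{-1} X'y for X = [1, x].
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [5.0, 8.0, 11.0, 14.0, 17.0]       # made-up data: y = 2 + 3x exactly

n = len(x)
# X'X reduces to sums and sums of squares of the regressor.
sxx = [[n, sum(x)],
       [sum(x), sum(v * v for v in x)]]
sxy = [sum(y), sum(a * b for a, b in zip(x, y))]      # X'y

det = sxx[0][0] * sxx[1][1] - sxx[0][1] * sxx[1][0]   # determinant of X'X
inv = [[ sxx[1][1] / det, -sxx[0][1] / det],          # 2x2 inverse
       [-sxx[1][0] / det,  sxx[0][0] / det]]

beta = [inv[0][0] * sxy[0] + inv[0][1] * sxy[1],      # intercept
        inv[1][0] * sxy[0] + inv[1][1] * sxy[1]]      # slope
print(beta)  # close to [2.0, 3.0]
```

Working the 2x2 case by hand like this, sum of squares, then determinant, then inverse, mirrors the progression the textbook describes before it moves on to larger models.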
This volume contains the Proceedings of the Advanced Symposium on Multivariate Modeling and Data Analysis held at the 64th Annual Meeting of the Virginia Academy of Sciences (VAS)--American Statistical Association's Virginia Chapter at James Madison University in Harrisonburg, Virginia, during May 15-16, 1986. This symposium was sponsored by financial support from the Center for Advanced Studies at the University of Virginia to promote new and modern information-theoretic statistical modeling procedures and to blend these new techniques within the classical theory. Multivariate statistical analysis has come a long way and currently it is in an evolutionary stage in the era of high-speed computation and computer technology. The Advanced Symposium was the first to address the new innovative approaches in multivariate analysis to develop modern analytical and yet practical procedures to meet the needs of researchers and the societal need of statistics. Papers presented at the Symposium by eminent researchers in the field were geared not just for specialists in statistics, but an attempt has been made to achieve a well balanced and uniform coverage of different areas in multivariate modeling and data analysis. The areas covered included topics in the analysis of repeated measurements, cluster analysis, discriminant analysis, canonical correlations, distribution theory and testing, bivariate density estimation, factor analysis, principal component analysis, multidimensional scaling, multivariate linear models, nonparametric regression, etc.
This book presents a large variety of extensions of the methods of inclusion and exclusion. Both methods for generating and methods for proof of such inequalities are discussed. The inequalities are utilized for finding asymptotic values and for limit theorems. Applications vary from classical probability estimates to modern extreme value theory and combinatorial counting to random subset selection. Applications are given in prime number theory, growth of digits in different algorithms, and in statistics such as estimates of confidence levels of simultaneous interval estimation. The prerequisites include the basic concepts of probability theory and familiarity with combinatorial arguments.
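The simplest inequalities of the inclusion-exclusion family are the Bonferroni bounds built from the first two sums. The fair-die example below is a standard illustration chosen for this sketch, not a worked example from the book, and the arithmetic is exact thanks to rational numbers.

```python
from itertools import combinations
from fractions import Fraction

# Bonferroni bounds S1 - S2 <= P(union) <= S1 on a fair six-sided die.
events = [{2, 4, 6},    # A: even
          {4, 5, 6},    # B: at least four
          {1, 2}]       # C: one or two

def prob(s):
    return Fraction(len(s), 6)

S1 = sum(prob(E) for E in events)                            # first sum
S2 = sum(prob(E & F) for E, F in combinations(events, 2))    # second sum
p_union = prob(set().union(*events))

print(S1 - S2, p_union, S1)  # lower bound, exact value, upper bound
```

Here S1 = 4/3 already exceeds 1, so the upper bound is uninformative, while the lower bound S1 - S2 = 5/6 happens to be exact; adding further alternating sums is what full inclusion-exclusion, and the book's extensions of it, are about.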