The statistics profession is at a unique point in history. The need for valid statistical tools is greater than ever; data sets are massive, often with hundreds of thousands of measurements for a single subject. The field is ready to move toward clear, objective benchmarks under which tools can be evaluated. Targeted learning allows (1) the full generalization and utilization of cross-validation as an estimator selection tool, so that the subjective choices formerly made by humans are now made by the machine, and (2) targeting the fit of the probability distribution of the data toward the target parameter representing the scientific question of interest.
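As a rough illustration of point (1), here is a minimal sketch (not taken from the book) of V-fold cross-validation used to choose among candidate estimators by out-of-sample risk; the data and the two candidate regressions are hypothetical.

```r
# Minimal 5-fold cross-validation for estimator selection (hypothetical data).
set.seed(1)
n <- 200
x <- runif(n); y <- sin(2 * pi * x) + rnorm(n, sd = 0.3)
folds <- sample(rep(1:5, length.out = n))          # fold assignment

# Candidate estimators: each takes training data and returns a prediction function.
candidates <- list(
  linear = function(d) { f <- lm(y ~ x, data = d); function(nd) predict(f, nd) },
  cubic  = function(d) { f <- lm(y ~ poly(x, 3), data = d); function(nd) predict(f, nd) }
)

cv_risk <- sapply(candidates, function(make_fit) {
  mean(sapply(1:5, function(v) {
    train <- data.frame(x = x[folds != v], y = y[folds != v])
    test  <- data.frame(x = x[folds == v], y = y[folds == v])
    mean((test$y - make_fit(train)(test))^2)       # squared-error loss
  }))
})
cv_risk
names(which.min(cv_risk))                          # the machine's selection
```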
The purpose of this book is to present a comprehensive account of the different definitions of stochastic integration for fractional Brownian motion (fBm), and to give applications of the resulting theory. Particular emphasis is placed on studying the relations between the different approaches. Readers are assumed to be familiar with probability theory and stochastic analysis, although the mathematical techniques used in the book are explained in detail and some of the necessary prerequisites, such as classical white noise theory and fractional calculus, are recalled in the appendices. This book will be a valuable reference for graduate students and researchers in mathematics, biology, meteorology, physics, engineering, and finance.
This book proposes an efficient methodology that estimates energy system uncertainty and predicts Remaining Useful Life (RUL) accurately, with significantly reduced RUL prediction uncertainty. Renewable and non-renewable sources of energy are being used to supply the demands of societies worldwide. These sources are mainly thermo-chemo-electro-mechanical systems that are subject to uncertainty in future loading conditions, material properties, process noise, and other design parameters. The book informs the reader of existing and new ideas that will be implemented in RUL prediction of energy systems in the future. It provides case studies, illustrations, graphs, and charts. Its chapters consider engineering, reliability, prognostics and health management, probabilistic multibody dynamical analysis, peridynamic and finite-element modelling, computer science, and mathematics.
Machine learning is concerned with the analysis of large data sets and multiple variables. It is also often more sensitive than traditional statistical methods when analyzing small data sets. The first and second volumes reviewed subjects like optimal scaling, neural networks, factor analysis, partial least squares, discriminant analysis, canonical analysis, fuzzy modeling, various clustering models, support vector machines, Bayesian networks, discrete wavelet analysis, association rule learning, anomaly detection, and correspondence analysis. This third volume addresses more advanced methods, including evolutionary programming, stochastic methods, complex sampling, optimal binning, Newton's methods, decision trees, and other subjects. Both the theoretical bases and the step-by-step analyses are described for the benefit of non-mathematical readers. Each chapter can be studied without the need to consult other chapters. Traditional statistical tests are sometimes precursors to machine learning methods, and are also sometimes used as contrast tests. To those wishing to obtain more knowledge of them, we recommend additionally studying (1) Statistics Applied to Clinical Studies, 5th Edition, 2012, (2) SPSS for Starters, Parts One and Two, 2012, and (3) Statistical Analysis of Clinical Data on a Pocket Calculator, Parts One and Two, 2012, written by the same authors and published by Springer, New York.
The subject of the book is advanced statistical analyses for quantitative research synthesis (meta-analysis), together with selected practical issues relating to research synthesis that are not covered in detail in the many existing introductory books on the subject. Complex statistical issues are arising more frequently as the primary research summarized in quantitative syntheses itself becomes more complex, and as researchers who conduct meta-analyses become more ambitious in the questions they wish to address. Also, as researchers have gained more experience in conducting research syntheses, several key issues have persisted and now appear fundamental to the enterprise of summarizing research. Specifically, the book describes multivariate analyses for several indices commonly used in meta-analysis (e.g., correlations, effect sizes, proportions, and/or odds ratios), outlines how to do power analysis for meta-analysis (again for each of the different kinds of study outcome indices), and examines issues around research quality and research design and their roles in synthesis. For each of the statistical topics, the book examines the different possible statistical models (i.e., fixed, random, and mixed models) that could be adopted by a researcher. In dealing with the issues of study quality and research design, it covers a number of specific topics that are of broad concern to research synthesists. In many fields a current issue is how to make sense of results when studies using several different designs appear in a research literature (e.g., Morris & Deshon, 1997, 2002). In education and other social sciences a critical aspect of this issue is how one might incorporate qualitative (e.g., case study) research within a synthesis. In medicine, related issues concern whether and how to summarize observational studies, and whether they should be combined with randomized controlled trials (or even whether they should be combined at all). For each topic, a worked example (e.g., for the statistical analyses) and/or a detailed description of a published research synthesis that deals with the practical (non-statistical) issues is included.
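For orientation, here is a minimal sketch (not from the book) of the fixed-effect model the blurb alludes to: study effect sizes are pooled by inverse-variance weighting, and a Q statistic gauges whether a random-effects model would be more appropriate. The effect sizes and variances below are made up.

```r
# Fixed-effect meta-analysis by inverse-variance weighting (made-up data).
yi <- c(0.30, 0.12, 0.45, 0.26)   # study effect sizes (e.g., standardized mean differences)
vi <- c(0.04, 0.02, 0.09, 0.05)   # their sampling variances
wi <- 1 / vi                      # inverse-variance weights
pooled    <- sum(wi * yi) / sum(wi)
se_pooled <- sqrt(1 / sum(wi))
c(estimate = pooled,
  lower = pooled - 1.96 * se_pooled,
  upper = pooled + 1.96 * se_pooled)
# Q statistic for heterogeneity; a large Q suggests switching to a random-effects model.
Q <- sum(wi * (yi - pooled)^2)
pchisq(Q, df = length(yi) - 1, lower.tail = FALSE)
```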
Aims and Scope This book is both an introductory textbook and a research monograph on modeling the statistical structure of natural images. In very simple terms, "natural images" are photographs of the typical environment where we live. In this book, their statistical structure is described using a number of statistical models whose parameters are estimated from image samples. Our main motivation for exploring natural image statistics is computational modeling of biological visual systems. A theoretical framework which is gaining more and more support considers the properties of the visual system to be reflections of the statistical structure of natural images because of evolutionary adaptation processes. Another motivation for natural image statistics research is in computer science and engineering, where it helps in development of better image processing and computer vision methods. While research on natural image statistics has been growing rapidly since the mid-1990s, no attempt has been made to cover the field in a single book, providing a unified view of the different models and approaches. This book attempts to do just that. Furthermore, our aim is to provide an accessible introduction to the field for students in related disciplines.
This book describes recent trends in growth curve modelling research in various subject areas, both theoretical and applied. It explains and explores the growth curve model as a valuable tool for gaining insights into several research topics of interest to academics and practitioners alike. The book's primary goal is to disseminate applications of the growth curve model to real-world problems, and to address related theoretical issues. The book will be of interest to a broad readership: for applied statisticians, it illustrates the importance of growth curve modelling as applied to actual field data; for more theoretically inclined statisticians, it highlights a number of theoretical issues that warrant further investigation.
This book presents the R software environment as a key tool for oceanographic computations and provides a rationale for using R over the more widely-used tools of the field such as MATLAB. Kelley provides a general introduction to R before introducing the 'oce' package. This package greatly simplifies oceanographic analysis by handling the details of discipline-specific file formats, calculations, and plots. Designed for real-world application and developed with open-source protocols, oce supports a broad range of practical work. Generic functions take care of general operations such as subsetting and plotting data, while specialized functions address more specific tasks such as tidal decomposition, hydrographic analysis, and ADCP coordinate transformation. In addition, the package makes it easy to document work, because its functions automatically update processing logs stored within its data objects. Kelley teaches key R functions using classic examples from the history of oceanography, specifically the work of Alfred Redfield, Gordon Riley, J. Tuzo Wilson, and Walter Munk. Acknowledging the pervasive popularity of MATLAB, the book provides advice to users who would like to switch to R. Including a suite of real-life applications and over 100 exercises and solutions, the treatment is ideal for oceanographers, technicians, and students who want to add R to their list of tools for oceanographic analysis.
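As a taste of the generic-function design the blurb describes, the snippet below is a minimal sketch of an oce session; it assumes the oce package and its built-in ctd sample dataset are installed, and is not an excerpt from the book.

```r
# Minimal oce session: load a built-in CTD station, subset it, and plot it.
library(oce)
data(ctd)                               # sample CTD profile shipped with the package
summary(ctd)                            # metadata and processing log stored in the object
shallow <- subset(ctd, pressure < 30)   # generic subset() works on oce objects
plot(shallow)                           # generic plot() picks a CTD-appropriate layout
```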
Since the publication of the first edition of the present volume in 1980, the stochastic stability of differential equations has become a very popular subject of research in mathematics and engineering. To date, exact formulas for the Lyapunov exponent, the criteria for moment and almost sure stability, and for the existence of stationary and periodic solutions of stochastic differential equations have been widely used in the literature. In this updated volume readers will find important new results on the moment Lyapunov exponent, stability index, and some other fields, obtained after the publication of the first edition, and a significantly expanded bibliography. This volume provides a solid foundation for students in graduate courses in mathematics and its applications. It is also useful for researchers who would like to learn more about this subject, to start their research in this area, or to study the properties of concrete mechanical systems subjected to random perturbations.
Statistical Methods in Food and Consumer Research continues to be the only book to focus solely on the statistical techniques used in sensory testing of foods, pharmaceuticals, cosmetics, and other consumer products.
This monograph highlights the connection between the theoretical work done by research statisticians and the impact that work has on various industries. Drawing on decades of experience as an industry consultant, the author details how his contributions have had a lasting impact on the field of statistics as a whole. Aspiring statisticians and data scientists will be motivated to find practical applications for their knowledge, as they see how such work can yield breakthroughs in their field. Each chapter highlights a consulting position the author held that resulted in a significant contribution to statistical theory. Topics covered include tracking processes with change points, estimating common parameters, crossing fields with absorption points, military operations research, sampling surveys, stochastic visibility in random fields, reliability analysis, applied probability, and more. Notable advancements within each of these topics are presented by analyzing the problems facing various industries, and how solving those problems contributed to the development of the field. The Career of a Research Statistician is ideal for researchers, graduate students, or industry professionals working in statistics. It will be particularly useful for up-and-coming statisticians interested in the promising connection between academia and industry.
This book is an introduction to stochastic processes for physicists, biologists, and financial analysts. Using an informal approach, all the necessary mathematical tools and techniques are covered, including stochastic differential equations, mean values, probability distribution functions, stochastic integration, and numerical modeling. Numerous examples of practical applications of stochastic mathematics are considered in detail, ranging from physics to financial theory. A reader with basic knowledge of probability theory should have no difficulty in accessing the book's content.
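To make the numerical-modeling point concrete, here is a minimal sketch (not taken from the book) of the Euler-Maruyama scheme applied to geometric Brownian motion, dX = mu*X dt + sigma*X dW, with illustrative parameter values.

```r
# Euler-Maruyama simulation of geometric Brownian motion (illustrative parameters).
set.seed(42)
mu <- 0.05; sigma <- 0.2           # drift and volatility
T <- 1; n <- 1000; dt <- T / n     # time horizon and step size
x <- numeric(n + 1); x[1] <- 1     # initial value X_0 = 1
for (i in 1:n) {
  dW <- rnorm(1, mean = 0, sd = sqrt(dt))                 # Brownian increment
  x[i + 1] <- x[i] + mu * x[i] * dt + sigma * x[i] * dW   # Euler-Maruyama step
}
plot(seq(0, T, by = dt), x, type = "l",
     xlab = "t", ylab = expression(X[t]), main = "Euler-Maruyama path of GBM")
```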
This compilation focuses on the theory and conceptualisation of statistics and probability in the early years and the development of young children's (ages 3-10) understanding of data and chance. It provides a comprehensive overview of cutting-edge international research on the development of young learners' reasoning about data and chance in formal, informal, and non-formal educational contexts. The authors share insights into young children's statistical and probabilistic reasoning and provide early childhood educators and researchers with a wealth of illustrative examples, suggestions, and practical strategies on how to address the challenges arising from the introduction of statistical and probabilistic concepts in pre-school and school curricula. This collection will inform practices in research and teaching by providing a detailed account of current best practices, challenges, and issues, and of future trends and directions in early statistical and probabilistic learning worldwide. Further, it will contribute to future research and theory building by addressing theoretical, epistemological, and methodological considerations regarding the design of probability and statistics learning environments for young children.
This proceedings volume contains nine selected papers that were presented at the International Symposium in Statistics, 2012, held at Memorial University from July 16 to 18. These nine papers cover three different areas of longitudinal data analysis: four deal with longitudinal data subject to measurement errors, four with incomplete longitudinal data, and the last with inferences for longitudinal data subject to outliers. Unlike in the independence setup, inferences in measurement error, missing value, and/or outlier models are not adequately discussed in the longitudinal setup. The papers in the present volume provide details on successes and further challenges in these three areas of longitudinal data analysis. This volume is the first outlet for current research in these three important areas of the longitudinal setup. The nine papers, presented in three parts, clearly reveal the similarities and differences in the inference techniques used for the three different longitudinal setups. Because the research problems considered in this volume are encountered in many real-life studies in the biomedical, clinical, epidemiological, socioeconomic, econometric, and engineering fields, the volume should be useful to researchers, including graduate students, in these areas.
A wide variety of processes occur on multiple scales, either naturally or as a consequence of measurement. This book contains methodology for the analysis of data that arise from such multiscale processes. The book brings together a number of recent developments and makes them accessible to a wider audience. Taking a Bayesian approach allows for full accounting of uncertainty, and also addresses the delicate issue of uncertainty at multiple scales. The Bayesian approach also facilitates the use of knowledge from prior experience or data, and these methods can handle different amounts of prior knowledge at different scales, as often occurs in practice. The book is aimed at statisticians, applied mathematicians, and engineers working on problems dealing with multiscale processes in time and/or space, such as in engineering, finance, and environmetrics. The book will also be of interest to those working on multiscale computation research. The main prerequisites are knowledge of Bayesian statistics and basic Markov chain Monte Carlo methods. A number of real-world examples are thoroughly analyzed in order to demonstrate the methods and to assist the readers in applying these methods to their own work. To further assist readers, the authors are making source code (for R) available for many of the basic methods discussed herein.
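As a reminder of the basic MCMC prerequisite the blurb mentions, the sketch below (a generic illustration, not the authors' multiscale code) runs a random-walk Metropolis sampler for a standard normal target.

```r
# Random-walk Metropolis sampler targeting a standard normal density.
set.seed(7)
n_iter <- 5000
chain  <- numeric(n_iter)
chain[1] <- 0
log_target <- function(x) dnorm(x, log = TRUE)    # log density of N(0, 1)
for (i in 2:n_iter) {
  proposal  <- chain[i - 1] + rnorm(1, sd = 0.8)  # symmetric random-walk proposal
  log_alpha <- log_target(proposal) - log_target(chain[i - 1])
  chain[i]  <- if (log(runif(1)) < log_alpha) proposal else chain[i - 1]
}
mean(chain); sd(chain)   # should be near 0 and 1, respectively
```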
Volume 2 offers three in-depth articles covering significant areas in applied mathematics research. Chapters feature numerous illustrations, extensive background material and technical details, and abundant examples. The authors analyze nonlinear front propagation for a large class of semilinear partial differential equations using probabilistic methods; examine wave localization phenomena in one-dimensional random media; and offer an extensive introduction to certain model equations for nonlinear wave phenomena.
In honor of the work of Professor Shunji Osaki, Stochastic Reliability and Maintenance Modeling provides a comprehensive study of the legacy of, and ongoing research in, stochastic reliability and maintenance modeling. Covering associated application areas such as dependable computing, performance evaluation, software engineering, and communication engineering, distinguished researchers review and build on Professor Shunji Osaki's contributions over the last four decades. Fundamental yet significant research results are presented and discussed clearly, alongside new ideas and topics on stochastic reliability and maintenance modeling, to inspire future research. Across 15 chapters, readers gain the knowledge and understanding needed to apply reliability and maintenance theory to computer and communication systems. Stochastic Reliability and Maintenance Modeling is ideal for graduate students and researchers in reliability engineering, and for workers, managers, and engineers engaged in computer, maintenance, and management work.
The Equation of Knowledge: From Bayes' Rule to a Unified Philosophy of Science introduces readers to the Bayesian approach to science: teasing out the link between probability and knowledge. The author strives to make this book accessible to a very broad audience, suitable for professionals, students, and academics, as well as the enthusiastic amateur scientist/mathematician. The book also shows how Bayesianism sheds new light on nearly all areas of knowledge, from philosophy to mathematics, science and engineering, but also law, politics, and everyday decision-making. Bayesian thinking is an important topic for research, which has seen dramatic progress in recent years, and has a significant role to play in the understanding and development of AI and machine learning, among many other things. This book seeks to act as a tool for proselytising the benefits and limits of Bayesianism to a wider public. Features: presents the Bayesian approach as a unifying scientific method for a wide range of topics; suitable for a broad audience, including professionals, students, and academics; provides a more accessible, philosophical introduction to the subject than is offered elsewhere.
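For the flavor of the subject, here is a minimal worked example of Bayes' rule (my illustration, not the book's): a diagnostic test with 1% prevalence, 95% sensitivity, and 90% specificity.

```r
# Bayes' rule: P(disease | positive test) for an illustrative diagnostic test.
prior       <- 0.01   # prevalence P(D)
sensitivity <- 0.95   # P(+ | D)
specificity <- 0.90   # P(- | not D)
evidence  <- sensitivity * prior + (1 - specificity) * (1 - prior)  # P(+)
posterior <- sensitivity * prior / evidence                         # P(D | +)
posterior   # about 0.088: even after a positive result, disease remains unlikely
```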
An in-depth look at current issues, new research findings, and interdisciplinary exchange in survey methodology and processing. Survey Measurement and Process Quality extends the marriage of traditional survey issues and continuous quality improvement further than any other contemporary volume. It documents the current state of the field, reports new research findings, and promotes interdisciplinary exchange in questionnaire design, data collection, data processing, quality assessment, and the effects of errors on estimation and analysis. The book's five sections discuss a broad range of issues and topics in each of these five major areas.
Survey Measurement and Process Quality is an indispensable resource for survey practitioners and managers as well as an excellent supplemental text for undergraduate and graduate courses and special seminars.
This volume contains the proceedings of the XII Symposium of Probability and Stochastic Processes, which took place at Universidad Autonoma de Yucatan in Merida, Mexico, on November 16-20, 2015. This was the twelfth in a series of ongoing biannual meetings aimed at showcasing the research of Mexican probabilists and at promoting new collaborations between the participants. The book features articles drawn from different research areas in probability and stochastic processes, such as risk theory, limit theorems, stochastic partial differential equations, random trees, stochastic differential games, stochastic control, and coalescence. Two of the main manuscripts survey recent developments on stochastic control and on scaling limits of Markov-branching trees, written by Kazutoshi Yamasaki and Benedicte Haas, respectively. The research-oriented manuscripts provide new advances in active research fields in Mexico. The wide selection of topics makes the book accessible to advanced graduate students and researchers in probability and stochastic processes.
This book introduces the basic methodologies for successful data analytics. Matrix optimization and approximation are explained in detail and extensively applied to dimensionality reduction by principal component analysis and multidimensional scaling. Diffusion maps and spectral clustering are derived as powerful tools. The methodological overlap between data science and machine learning is emphasized by demonstrating how data science is used for classification as well as supervised and unsupervised learning.
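As a small illustration of the dimensionality-reduction theme, the snippet below (a generic sketch, not the book's code) applies principal component analysis to R's built-in iris measurements.

```r
# PCA for dimensionality reduction on the built-in iris data.
X <- scale(iris[, 1:4])            # center and scale the four measurements
pca <- prcomp(X)                   # principal component analysis
summary(pca)                       # variance explained by each component
scores <- pca$x[, 1:2]             # project onto the first two components
plot(scores, col = iris$Species,
     xlab = "PC1", ylab = "PC2", main = "Iris data in two principal components")
```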
This book provides an overview of the application of statistical methods to problems in metrology, with emphasis on modelling measurement processes and quantifying their associated uncertainties. It covers everything from fundamentals to more advanced special topics, each illustrated with case studies from the authors' work in the Nuclear Security Enterprise (NSE). The material provides readers with a solid understanding of how to apply the techniques to metrology studies in a wide variety of contexts. The volume pays particular attention to uncertainty in decision making, design of experiments (DOEx), and curve fitting, along with special topics such as statistical process control (SPC), assessment of binary measurement systems, and new results on sample size selection in metrology studies. The methodologies presented are supported with R scripts where appropriate, and the code has been made available for readers to use in their own applications. Designed to promote collaboration between statistics and metrology, this book will be of use to practitioners of metrology as well as students and researchers in statistics and engineering disciplines.
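In the spirit of the curve-fitting and uncertainty topics, here is a minimal sketch (mine, not the authors' NSE code) of a linear calibration fit with confidence and prediction intervals in R; the calibration data are made up.

```r
# Linear calibration curve with parameter and prediction uncertainty (made-up data).
conc   <- c(0, 1, 2, 4, 8, 16)                     # reference concentrations
signal <- c(0.02, 0.21, 0.39, 0.82, 1.57, 3.12)    # measured instrument response
fit <- lm(signal ~ conc)
confint(fit)                                       # 95% CIs for intercept and slope
new <- data.frame(conc = 5)
predict(fit, new, interval = "prediction")         # predicted signal with uncertainty
```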
This book presents statistical processes for health care delivery and covers new ideas, methods, and technologies used to improve health care organizations. It gathers the proceedings of the Third International Conference on Health Care Systems Engineering (HCSE 2017), which took place in Florence, Italy, from May 29 to 31, 2017. The conference provided a timely opportunity to address operations research and operations management issues in health care delivery systems. Scientists and practitioners discussed new ideas, methods, and technologies for improving the operations of health care systems, developed in close collaboration with clinicians. The topics cover a broad spectrum of concrete problems that pose challenges for researchers and practitioners alike: hospital drug logistics, operating theatre management, home care services, modeling, simulation, process mining, and data mining in patient care and health care organizations.
These proceedings from the 37th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (MaxEnt 2017), held in Sao Carlos, Brazil, aim to expand the available research on Bayesian methods and promote their application in the scientific community. They gather research from scholars in many different fields who use inductive statistics methods and focus on the foundations of the Bayesian paradigm, their comparison to objectivistic or frequentist statistics counterparts, and their appropriate applications. Interest in the foundations of inductive statistics has been growing with the increasing availability of Bayesian methodological alternatives, and scientists now face much more difficult choices in finding the optimal methods to apply to their problems. By carefully examining and discussing the relevant foundations, the scientific community can avoid applying Bayesian methods on a merely ad hoc basis. For over 35 years, the MaxEnt workshops have explored the use of Bayesian and Maximum Entropy methods in scientific and engineering application contexts. The workshops welcome contributions on all aspects of probabilistic inference, including novel techniques and applications, and work that sheds new light on the foundations of inference. Areas of application in these workshops include astronomy and astrophysics, chemistry, communications theory, cosmology, climate studies, earth science, fluid mechanics, genetics, geophysics, machine learning, materials science, medical imaging, nanoscience, source separation, thermodynamics (equilibrium and non-equilibrium), particle physics, plasma physics, quantum mechanics, robotics, and the social sciences. Bayesian computational techniques such as Markov chain Monte Carlo sampling are also regular topics, as are approximate inferential methods. Foundational issues involving probability theory and information theory, as well as novel applications of inference to illuminate the foundations of physical theories, are also of keen interest.