This book describes recent trends in growth curve modelling research in various subject areas, both theoretical and applied. It explains and explores the growth curve model as a valuable tool for gaining insights into several research topics of interest to academics and practitioners alike. The book's primary goal is to disseminate applications of the growth curve model to real-world problems, and to address related theoretical issues. The book will be of interest to a broad readership: for applied statisticians, it illustrates the importance of growth curve modelling as applied to actual field data; for more theoretically inclined statisticians, it highlights a number of theoretical issues that warrant further investigation.
This book provides an overview of the current state of the art in nonlinear time series analysis, richly illustrated with examples, pseudocode algorithms and real-world applications. Avoiding a "theorem-proof" format, it shows concrete applications on a variety of empirical time series. The book can be used in graduate courses in nonlinear time series while also offering interesting material for more advanced readers. Though it is largely self-contained, readers require an understanding of basic linear time series concepts, Markov chains and Monte Carlo simulation methods. The book covers time-domain and frequency-domain methods for the analysis of both univariate and multivariate (vector) time series. It makes a clear distinction between parametric models on the one hand, and semi- and nonparametric models/methods on the other. This offers the reader the option of concentrating exclusively on one of these nonlinear time series analysis methods. To make the book as user-friendly as possible, major supporting concepts and specialized tables are appended at the end of every chapter. In addition, each chapter concludes with a set of key terms and concepts, as well as a summary of the main findings. Lastly, the book offers numerous theoretical and empirical exercises, with answers provided by the author in an extensive solutions manual.
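As an illustration of the parametric side of that distinction, here is a minimal Python sketch of a self-exciting threshold autoregressive (SETAR) model, one of the standard nonlinear time series models; the coefficients and threshold below are illustrative assumptions, not values from the book.

```python
# Illustrative SETAR(2; 1, 1) simulation; all parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(42)

def simulate_setar(n, phi_low=0.6, phi_high=-0.5, threshold=0.0, sigma=1.0):
    """Self-exciting threshold AR(1): the AR coefficient switches according
    to whether the previous value is below or above the threshold."""
    x = np.zeros(n)
    for t in range(1, n):
        phi = phi_low if x[t - 1] <= threshold else phi_high
        x[t] = phi * x[t - 1] + sigma * rng.standard_normal()
    return x

series = simulate_setar(500)
print("sample mean %.3f, sample variance %.3f" % (series.mean(), series.var()))
```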
Hereditary systems (also known as systems with delay or after-effects) are widely used to model processes in physics, mechanics, control, economics and biology. An important element in their study is their stability. Stability conditions for difference equations with delay can be obtained using a Lyapunov functional.
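A standard illustrative case of that approach (a generic textbook argument, not necessarily the construction used in this book) is the scalar linear difference equation with a single delay:

```latex
% Generic example: stability of x_{n+1} = a x_n + b x_{n-k} via a
% Lyapunov functional (not taken from this book).
For the scalar delay difference equation
\[
  x_{n+1} = a\,x_n + b\,x_{n-k}, \qquad k \ge 1,
\]
take the Lyapunov functional
\[
  V_n = x_n^2 + |b| \sum_{j=n-k}^{n-1} x_j^2 .
\]
Using $(|a||x_n| + |b||x_{n-k}|)^2 \le (|a|+|b|)\,(|a|\,x_n^2 + |b|\,x_{n-k}^2)$,
\[
  \Delta V_n = V_{n+1} - V_n
  \le \bigl[\,|a|(|a|+|b|) + |b| - 1\,\bigr] x_n^2
    + |b|\,\bigl[(|a|+|b|) - 1\bigr]\, x_{n-k}^2 ,
\]
so $|a| + |b| < 1$ makes $\Delta V_n$ negative definite and the zero solution
asymptotically stable, for every delay $k$.
```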
This book covers applied statistics for the social sciences with upper-level undergraduate students in mind. The chapters are based on lecture notes from an introductory statistics course the author has taught for a number of years. The book integrates statistics into the research process, with early chapters covering basic philosophical issues underpinning the process of scientific research. These include the concepts of deductive reasoning and the falsifiability of hypotheses, the development of a research question and hypotheses, and the process of data collection and measurement. Probability theory is then covered extensively with a focus on its role in laying the foundation for statistical reasoning and inference. After illustrating the Central Limit Theorem, later chapters address the key, basic statistical methods used in social science research, including various z and t tests and confidence intervals, nonparametric chi square tests, one-way analysis of variance, correlation, simple regression, and multiple regression, with a discussion of the key issues involved in thinking about causal processes. Concepts and topics are illustrated using both real and simulated data. The penultimate chapter presents rules and suggestions for the successful presentation of statistics in tabular and graphic formats, and the final chapter offers suggestions for subsequent reading and study.
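As a hedged illustration of the Central Limit Theorem step mentioned above, the following Python sketch (not taken from the book) shows that sample means of a skewed population behave as the normal approximation predicts:

```python
# Illustrative CLT simulation: sample means of a skewed exponential population.
import numpy as np

rng = np.random.default_rng(0)
n, reps = 30, 10_000

# 10,000 sample means, each from n = 30 draws of an Exponential(mean = 2).
sample_means = rng.exponential(scale=2.0, size=(reps, n)).mean(axis=1)

# CLT: mean of means ~ 2, sd of means ~ 2 / sqrt(30).
print("mean of means %.3f (theory 2.000)" % sample_means.mean())
print("sd of means   %.3f (theory %.3f)" % (sample_means.std(), 2.0 / np.sqrt(n)))
```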
This book proposes the formulation of an efficient methodology that estimates energy system uncertainty and predicts Remaining Useful Life (RUL) accurately, with significantly reduced RUL prediction uncertainty. Renewable and non-renewable sources of energy are being used to supply the demands of societies worldwide. These sources are mainly thermo-chemo-electro-mechanical systems that are subject to uncertainty in future loading conditions, material properties, process noise, and other design parameters. The book informs the reader of existing and new ideas that will be implemented in RUL prediction of energy systems in the future. It provides case studies, illustrations, graphs, and charts. Its chapters consider engineering, reliability, prognostics and health management, probabilistic multibody dynamical analysis, peridynamic and finite-element modelling, computer science, and mathematics.
This book presents a philosophical approach to probability and probabilistic thinking, considering the underpinnings of probabilistic reasoning and modeling, which effectively underlie everything in data science. The ultimate goal is to call into question many standard tenets and lay the philosophical and probabilistic groundwork and infrastructure for statistical modeling. It is the first book devoted to the philosophy of data aimed at working scientists and calls for a new consideration in the practice of probability and statistics to eliminate what has been referred to as the "Cult of Statistical Significance." The book explains the philosophy of these ideas and not the mathematics, though there are a handful of mathematical examples. The topics are logically laid out, starting with basic philosophy as related to probability, statistics, and science, and stepping through the key probabilistic ideas and concepts, and ending with statistical models. Its jargon-free approach asserts that standard methods, such as out-of-the-box regression, cannot help in discovering cause. This new way of looking at uncertainty ties together disparate fields - probability, physics, biology, the "soft" sciences, computer science - because each aims at discovering cause (of effects). It broadens the understanding beyond frequentist and Bayesian methods to propose a Third Way of modeling.
This compilation focuses on the theory and conceptualisation of statistics and probability in the early years and the development of young children's (ages 3-10) understanding of data and chance. It provides a comprehensive overview of cutting-edge international research on the development of young learners' reasoning about data and chance in formal, informal, and non-formal educational contexts. The authors share insights into young children's statistical and probabilistic reasoning and provide early childhood educators and researchers with a wealth of illustrative examples, suggestions, and practical strategies on how to address the challenges arising from the introduction of statistical and probabilistic concepts in pre-school and school curricula. This collection will inform practices in research and teaching by providing a detailed account of current best practices, challenges, and issues, and of future trends and directions in early statistical and probabilistic learning worldwide. Further, it will contribute to future research and theory building by addressing theoretical, epistemological, and methodological considerations regarding the design of probability and statistics learning environments for young children.
Modern Actuarial Risk Theory contains what every actuary needs to know about non-life insurance mathematics. It starts with the standard material: utility theory, the individual and collective models, and basic ruin theory. Other topics are risk measures and premium principles, bonus-malus systems, ordering of risks, and credibility theory. It also contains chapters on Generalized Linear Models, applied to rating and IBNR problems. As to the level of the mathematics, the book would fit in a bachelor's or master's program in quantitative economics or mathematical statistics. This second and much expanded edition emphasizes the implementation of these techniques through the use of R. This free but incredibly powerful software is rapidly developing into the de facto standard for statistical computation, not just in academic circles but also in practice. With R, one can do simulations, find maximum likelihood estimators, compute distributions by inverting transforms, and much more.
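The book's own examples are written in R; the following Python sketch is merely an analogue of the kind of simulation it describes, namely aggregate claims in the collective (compound Poisson) risk model, with all parameter values assumed for illustration:

```python
# Illustrative collective-model simulation:
# aggregate yearly claims S = X_1 + ... + X_N, N ~ Poisson, X_i ~ Gamma.
import numpy as np

rng = np.random.default_rng(1)

def aggregate_claims(lam=100, shape=2.0, scale=500.0, reps=20_000):
    """Simulate `reps` realizations of the compound Poisson total claim."""
    counts = rng.poisson(lam, size=reps)
    return np.array([rng.gamma(shape, scale, size=k).sum() for k in counts])

S = aggregate_claims()
print("mean %.0f, sd %.0f, 99.5%% quantile %.0f"
      % (S.mean(), S.std(), np.quantile(S, 0.995)))
```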
Sensor Data Fusion is the process of combining incomplete and imperfect pieces of mutually complementary sensor information in such a way that a better understanding of an underlying real-world phenomenon is achieved. Typically, this insight is either unobtainable otherwise, or the fusion result exceeds what can be produced from a single sensor output in accuracy, reliability, or cost-effectiveness. This book provides an introduction to Sensor Data Fusion, as an information technology as well as a branch of engineering science and informatics. Part I presents a coherent methodological framework, thus providing the prerequisites for discussing selected applications in Part II of the book. The presentation mirrors the author's views on the subject and emphasizes his own contributions to the development of particular aspects. With some delay, Sensor Data Fusion is likely to develop along lines similar to the evolution of another modern key technology whose origin is in the military domain: the Internet. It is the author's firm conviction that until now, scientists and engineers have only scratched the surface of the vast range of opportunities for research, engineering, and product development that still waits to be explored: the Internet of the Sensors.
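As a minimal, hedged illustration of the basic idea (an elementary textbook example, not the author's methodological framework): fusing two independent, unbiased scalar sensor readings by inverse-variance weighting always yields a fused variance smaller than that of either sensor alone.

```python
# Inverse-variance fusion of two independent, unbiased scalar measurements.
def fuse(z1, var1, z2, var2):
    """Minimum-variance linear combination; the fused variance
    1/(1/var1 + 1/var2) is smaller than either input variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    return (w1 * z1 + w2 * z2) / (w1 + w2), 1.0 / (w1 + w2)

z, v = fuse(z1=10.2, var1=4.0, z2=9.6, var2=1.0)
print("fused estimate %.2f with variance %.2f (inputs: 4.0 and 1.0)" % (z, v))
```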
With an emphasis on models and techniques, this textbook introduces many of the fundamental concepts of stochastic modeling that are now a vital component of almost every scientific investigation. In particular, emphasis is placed on laying the foundation for solving problems in reliability, insurance, finance, and credit risk. The material has been carefully selected to cover the basic concepts and techniques on each topic, making this an ideal introductory gateway to more advanced learning. With exercises and solutions to selected problems accompanying each chapter, this textbook is for a wide audience, including advanced undergraduate and beginning-level graduate students, researchers, and practitioners in mathematics, statistics, engineering, and economics.
This book provides a rigorous mathematical treatment of the non-linear stochastic filtering problem using modern methods. Particular emphasis is placed on the theoretical analysis of numerical methods for the solution of the filtering problem via particle methods. The book should provide sufficient background to enable study of the recent literature. While no prior knowledge of stochastic filtering is required, readers are assumed to be familiar with measure theory, probability theory and the basics of stochastic processes. Most of the technical results that are required are stated and proved in the appendices. Exercises and solutions are included.
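For orientation, here is a minimal bootstrap particle filter in Python on a simple one-dimensional nonlinear model; this is the standard construction, whereas the book's treatment is far more general, and the model and noise levels below are assumptions chosen purely for illustration:

```python
# Bootstrap particle filter on a 1-D nonlinear state-space model (illustrative):
#   x_t = 0.5*x_{t-1} + 25*x_{t-1}/(1 + x_{t-1}^2) + process noise
#   y_t = x_t^2 / 20 + observation noise
import numpy as np

rng = np.random.default_rng(7)
T, N = 50, 1000                      # time steps, particles
sig_x, sig_y = 1.0, 1.0              # process / observation noise std

def f(x):                            # state transition mean
    return 0.5 * x + 25 * x / (1 + x ** 2)

# Simulate one trajectory and its observations.
x_true = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x_true[t] = f(x_true[t - 1]) + sig_x * rng.standard_normal()
    y[t] = x_true[t] ** 2 / 20 + sig_y * rng.standard_normal()

# Bootstrap filter: propagate, weight by likelihood, resample.
particles = rng.standard_normal(N)
est = np.zeros(T)
for t in range(1, T):
    particles = f(particles) + sig_x * rng.standard_normal(N)
    logw = -0.5 * ((y[t] - particles ** 2 / 20) / sig_y) ** 2
    w = np.exp(logw - logw.max()); w /= w.sum()
    est[t] = np.dot(w, particles)
    particles = rng.choice(particles, size=N, p=w)   # multinomial resampling

print("RMSE of filtered mean: %.2f" % np.sqrt(np.mean((est - x_true) ** 2)))
```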
This book is devoted to the scientific legacy of Professor Victor Ambartsumian, one of the distinguished scientists of the last century. He obtained essential results not only in astrophysics, but also in mathematics and theoretical physics: one can recall his fundamental results concerning the Sturm-Liouville inverse problem, quantum field theory, the structure of atomic nuclei, etc. Nevertheless, his revolutionary ideas in astrophysics and the corresponding results are more widely known and have predetermined the further development of this science. His concepts of activity phenomena and the evolution of cosmic objects, in particular the determination of the age of our Galaxy, the idea that stars are still forming today in stellar associations, and the study of the activity of galactic nuclei, proved exceptionally fruitful. These directions are being elaborated at many astronomical centers all over the world.
This second edition sees the light of day three years after the first: too short a time to feel seriously concerned about redesigning the entire book, but sufficient to be challenged by the prospect of sharpening our investigation of the working of econometric dynamic models, and to be inclined to change the title of the new edition by dropping the "Topics in" of the former edition. After considerable soul searching we agreed to include several results related to topics already covered, as well as additional sections devoted to new and sophisticated techniques, which hinge mostly on the latest research work on linear matrix polynomials by the second author. This explains the growth of chapter one and the deeper insight into representation theorems in the last chapter of the book. The role of the second chapter is to provide a bridge between the mathematical techniques in the backstage and the econometric profiles in the forefront of dynamic modelling. For this purpose, we decided to add a new section where the reader can find the stochastic rationale of vector autoregressive specifications in econometrics. The third (and last) chapter improves on that of the first edition by reaping the fruits of the thorough analytic equipment previously drawn up.
From the contents (each paper preceded by a specially written introduction):
- Fisher, R.A. (1922) On the Mathematical Foundations of Theoretical Statistics (introduction by S. Geisser)
- Hotelling, H. (1931) The Generalization of Student's Ratio (introduction by T.W. Anderson)
- Neyman, J. and Pearson, E.S. (1933) On the Problem of the Most Efficient Tests of Statistical Hypotheses (introduction by E.L. Lehmann)
- Two further introductions by D.A.S. Fraser
- de Finetti, B. (1937) Foresight: Its Logical Laws, Its Subjective Sources (introduction by R.E. Barlow)
- Cramér, H. (1942) On Harmonic Analysis in Certain Functional Spaces (introduction by M.R. Leadbetter)
- Gnedenko, B.V. (1943) On the Limiting Distribution of the Maximum Term in a Random Series (introduction by R.L. Smith)
- Rao, C.R. (1945) Information and the Accuracy Attainable in the Estimation of Statistical Parameters (introduction by P.K. Pathak)
- Wald, A. (1945) Sequential Tests of Statistical Hypotheses (introduction by B.K. Ghosh)
- Hoeffding, W. (1948) A Class of Statistics with Asymptotically Normal Distribution (introduction by P.K. Sen)
- Wald, A. (1949) Statistical Decision Functions (introduction by L. Weiss)
- Two further introductions by D.V. Lindley
- Robbins, H.E. (1955) An Empirical Bayes Approach to Statistics (introduction by I.J. Good)
- Kiefer, J.C. (1959) Optimum Experimental Designs (introduction by H.P. Wynn)
- Two further introductions by B. Efron
- Birnbaum, A. (1962) On the Foundations of Statistical Inference (introduction by J.F. Bjørnstad)
- Edwards, W., Lindman, H., and Savage, L.J. (1963) Bayesian Statistical Inference for Psychological Research (introduction by W.U. DuMouchel)
- Fraser, D.A.S. (1966) Structural Probability and a Generalization (introduction by N. Reid)
- Akaike, H. (1973) Information Theory and an Extension of the Maximum Likelihood Principle (introduction by J. de Leeuw)
The papers in this volume are based on lectures given at the IMA Workshop on Grid Generation and Adaptive Algorithms held during April 28 - May 2, 1997. Grid generation is a common feature of many computational tasks which require the discretization and representation of space and surfaces. The papers in this volume discuss how the geometric complexity of the physical object or the non-uniform nature of the solution variable make it impossible to use a uniform grid. Since an efficient grid requires knowledge of the computed solution, many of the papers in this volume treat how to construct grids that are adaptively computed with the solution. This volume will be of interest to computational scientists and mathematicians working in a broad variety of applications including fluid mechanics, solid mechanics, materials science, chemistry, and physics. Papers treat residual-based error estimation and adaptivity, repartitioning and load balancing for adaptive meshes, data structures and local refinement methods for conservation laws, adaptivity for hp-finite element methods, the resolution of boundary layers in high Reynolds number flow, adaptive methods for elastostatic contact problems, the full domain partition approach to parallel adaptive refinement, the adaptive solution of phase change problems, and quality indicators for triangular meshes.
This volume contains the proceedings of the XII Symposium of Probability and Stochastic Processes, which took place at Universidad Autonoma de Yucatan in Merida, Mexico, on November 16-20, 2015. This was the twelfth in a series of ongoing biannual meetings aimed at showcasing the research of Mexican probabilists and promoting new collaborations among the participants. The book features articles drawn from different research areas in probability and stochastic processes, such as risk theory, limit theorems, stochastic partial differential equations, random trees, stochastic differential games, stochastic control, and coalescence. Two of the main manuscripts survey recent developments on stochastic control and scaling limits of Markov-branching trees, written by Kazutoshi Yamasaki and Benedicte Haas, respectively. The research-oriented manuscripts provide new advances in active research fields in Mexico. The wide selection of topics makes the book accessible to advanced graduate students and researchers in probability and stochastic processes.
The statistics profession is at a unique point in history. The need for valid statistical tools is greater than ever; data sets are massive, often containing hundreds of thousands of measurements for a single subject. The field is ready to move towards clear, objective benchmarks under which tools can be evaluated. Targeted learning allows (1) the full generalization and utilization of cross-validation as an estimator-selection tool, so that subjective choices made by humans are now made by the machine, and (2) targeting the fitting of the probability distribution of the data toward the target parameter representing the scientific question of interest.
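A minimal sketch of point (1), in the spirit of a discrete super-learner selection step (illustrative only, assuming scikit-learn is available; this is not the authors' code):

```python
# Cross-validation as an objective estimator-selection tool: the machine,
# not the analyst, picks the candidate with the smallest CV risk.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, size=(300, 1))
y = np.sin(2 * X[:, 0]) + 0.3 * rng.standard_normal(300)   # nonlinear truth

candidates = {
    "linear": LinearRegression(),
    "forest": RandomForestRegressor(n_estimators=200, random_state=0),
}
risks = {name: -cross_val_score(m, X, y, cv=5,
                                scoring="neg_mean_squared_error").mean()
         for name, m in candidates.items()}
best = min(risks, key=risks.get)
print(risks, "-> selected:", best)
```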
Machine learning is concerned with the analysis of large data sets and multiple variables. It is also often more sensitive than traditional statistical methods in the analysis of small data sets. The first and second volumes reviewed subjects like optimal scaling, neural networks, factor analysis, partial least squares, discriminant analysis, canonical analysis, fuzzy modeling, various clustering models, support vector machines, Bayesian networks, discrete wavelet analysis, association rule learning, anomaly detection, and correspondence analysis. This third volume addresses more advanced methods and includes subjects like evolutionary programming, stochastic methods, complex sampling, optional binning, Newton's methods, decision trees, and other subjects. Both the theoretical bases and the step-by-step analyses are described for the benefit of non-mathematical readers. Each chapter can be studied without the need to consult other chapters. Traditional statistical tests are sometimes applied prior to machine learning methods, and they are also sometimes used as contrast tests. To those wishing to obtain more knowledge of them, we recommend additionally studying (1) Statistics Applied to Clinical Studies, 5th Edition, 2012, (2) SPSS for Starters, Parts One and Two, 2012, and (3) Statistical Analysis of Clinical Data on a Pocket Calculator, Parts One and Two, 2012, written by the same authors and published by Springer, New York.
This book presents statistical processes for health care delivery and covers new ideas, methods and technologies used to improve health care organizations. It gathers the proceedings of the Third International Conference on Health Care Systems Engineering (HCSE 2017), which took place in Florence, Italy from May 29 to 31, 2017. The Conference provided a timely opportunity to address operations research and operations management issues in health care delivery systems. Scientists and practitioners discussed new ideas, methods and technologies for improving the operations of health care systems, developed in close collaborations with clinicians. The topics cover a broad spectrum of concrete problems that pose challenges for researchers and practitioners alike: hospital drug logistics, operating theatre management, home care services, modeling, simulation, process mining and data mining in patient care and health care organizations.
This proceedings volume contains nine selected papers that were presented at the International Symposium in Statistics 2012, held at Memorial University from July 16 to 18. The nine papers cover three different areas of longitudinal data analysis: four deal with longitudinal data subject to measurement errors, four with incomplete longitudinal data, and the last with inference for longitudinal data subject to outliers. Unlike in the independence setup, inferences in measurement-error, missing-value, and/or outlier models are not adequately discussed in the longitudinal setup. The papers in the present volume provide details on successes and remaining challenges in these three areas of longitudinal data analysis. This volume is the first outlet for current research in these three important areas of the longitudinal setup. The nine papers, presented in three parts, clearly reveal the similarities and differences in the inference techniques used for the three different longitudinal setups. Because the research problems considered in this volume are encountered in many real-life studies in the biomedical, clinical, epidemiological, socioeconomic, econometric, and engineering fields, the volume should be useful to researchers, including graduate students, in these areas.
The subject of the book is advanced statistical analyses for quantitative research synthesis (meta-analysis), together with selected practical issues relating to research synthesis that are not covered in detail in the many existing introductory books on the topic. Complex statistical issues are arising more frequently as the primary research that is summarized in quantitative syntheses itself becomes more complex, and as researchers who conduct meta-analyses become more ambitious in the questions they wish to address. Also, as researchers have gained more experience in conducting research syntheses, several key issues have persisted and now appear fundamental to the enterprise of summarizing research. Specifically, the book describes multivariate analyses for several indices commonly used in meta-analysis (e.g., correlations, effect sizes, proportions and/or odds ratios), outlines how to do power analysis for meta-analysis (again for each of the different kinds of study outcome indices), and examines issues around research quality and research design and their roles in synthesis. For each of the statistical topics, it examines the different possible statistical models (i.e., fixed, random, and mixed models) that could be adopted by a researcher. In dealing with the issues of study quality and research design, it covers a number of specific topics that are of broad concern to research synthesists. In many fields a current issue is how to make sense of results when studies using several different designs appear in a research literature (e.g., Morris & Deshon, 1997, 2002). In education and the other social sciences, a critical aspect of this issue is how one might incorporate qualitative (e.g., case study) research within a synthesis. In medicine, related issues concern whether and how to summarize observational studies, and whether they should be combined with randomized controlled trials (or even whether they should be combined at all). For each topic, a worked example (e.g., for the statistical analyses) and/or a detailed description of a published research synthesis that deals with the practical (non-statistical) issues is included.
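As a worked example of the simplest of those model choices, the following Python sketch pools hypothetical effect sizes under the fixed-effect model using inverse-variance weights and computes Cochran's Q; all data values are invented for illustration:

```python
# Fixed-effect meta-analysis: inverse-variance pooling of study effect sizes.
import numpy as np

effects = np.array([0.30, 0.45, 0.12, 0.50])    # study effect sizes (assumed)
variances = np.array([0.04, 0.09, 0.02, 0.10])  # their sampling variances

w = 1.0 / variances
pooled = np.sum(w * effects) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))
print("pooled effect %.3f, 95%% CI (%.3f, %.3f)"
      % (pooled, pooled - 1.96 * se, pooled + 1.96 * se))

# Cochran's Q, the usual statistic for between-study heterogeneity.
Q = np.sum(w * (effects - pooled) ** 2)
print("Q = %.2f on %d df" % (Q, len(effects) - 1))
```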
The purpose of this book is to present a comprehensive account of the different definitions of stochastic integration for fBm, and to give applications of the resulting theory. Particular emphasis is placed on studying the relations between the different approaches. Readers are assumed to be familiar with probability theory and stochastic analysis, although the mathematical techniques used in the book are thoroughly explained and some of the necessary prerequisites, such as classical white noise theory and fractional calculus, are recalled in the appendices. This book will be a valuable reference for graduate students and researchers in mathematics, biology, meteorology, physics, engineering and finance.
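For readers who want a concrete object to experiment with, here is a short Python sketch (a standard simulation method, not specific to this book) that samples fBm exactly on a grid via Cholesky factorization of its covariance function:

```python
# Exact fBm simulation via Cholesky factorization of the covariance
# Cov(B_H(s), B_H(t)) = 0.5 * (s^{2H} + t^{2H} - |t - s|^{2H}).
import numpy as np

n, H, T = 200, 0.7, 1.0
t = np.linspace(T / n, T, n)
s, u = np.meshgrid(t, t)
cov = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))   # jitter for numerical safety

rng = np.random.default_rng(0)
paths = L @ rng.standard_normal((n, 1000))        # 1000 exact sample paths
print("Var[B_H(1)] ~ %.3f (theory: T^(2H) = 1.000)" % paths[-1].var())
```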
This book is an introduction to stochastic processes for physicists, biologists and financial analysts. Using an informal approach, all the necessary mathematical tools and techniques are covered, including stochastic differential equations, mean values, probability distribution functions, stochastic integration and numerical modeling. Numerous examples of practical applications of stochastic mathematics are considered in detail, ranging from physics to financial theory. A reader with a basic knowledge of probability theory should have no difficulty in accessing the book's content.
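A minimal example of the kind of numerical modeling the book covers: an Euler-Maruyama scheme for geometric Brownian motion, sketched in Python with illustrative parameters and checked against the known closed-form mean:

```python
# Euler-Maruyama for dX = mu*X dt + sigma*X dW (geometric Brownian motion).
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, x0, T, n = 0.1, 0.2, 1.0, 1.0, 1000
dt = T / n

x = np.full(10_000, x0)                 # 10,000 independent paths
for _ in range(n):
    dW = np.sqrt(dt) * rng.standard_normal(x.size)
    x += mu * x * dt + sigma * x * dW

# E[X_T] = x0 * exp(mu*T) for GBM; compare with the simulated mean.
print("simulated E[X_T] = %.4f, exact = %.4f" % (x.mean(), x0 * np.exp(mu * T)))
```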
These proceedings from the 37th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (MaxEnt 2017), held in São Carlos, Brazil, aim to expand the available research on Bayesian methods and promote their application in the scientific community. They gather research from scholars in many different fields who use inductive statistics methods and focus on the foundations of the Bayesian paradigm, their comparison to objectivistic or frequentist statistics counterparts, and their appropriate applications. Interest in the foundations of inductive statistics has been growing with the increasing availability of Bayesian methodological alternatives, and scientists now face much more difficult choices in finding the optimal methods to apply to their problems. By carefully examining and discussing the relevant foundations, the scientific community can avoid applying Bayesian methods on a merely ad hoc basis. For over 35 years, the MaxEnt workshops have explored the use of Bayesian and Maximum Entropy methods in scientific and engineering application contexts. The workshops welcome contributions on all aspects of probabilistic inference, including novel techniques and applications, and work that sheds new light on the foundations of inference. Areas of application in these workshops include astronomy and astrophysics, chemistry, communications theory, cosmology, climate studies, earth science, fluid mechanics, genetics, geophysics, machine learning, materials science, medical imaging, nanoscience, source separation, thermodynamics (equilibrium and non-equilibrium), particle physics, plasma physics, quantum mechanics, robotics, and the social sciences. Bayesian computational techniques such as Markov chain Monte Carlo sampling are also regular topics, as are approximate inferential methods. Foundational issues involving probability theory and information theory, as well as novel applications of inference to illuminate the foundations of physical theories, are also of keen interest.
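As a small, hedged illustration of the MCMC techniques mentioned above (a generic random-walk Metropolis sampler on an invented target, not drawn from the proceedings):

```python
# Random-walk Metropolis sampling of a two-component Gaussian mixture.
import numpy as np

rng = np.random.default_rng(5)

def log_target(x):
    """Unnormalized log-density of a symmetric mixture with modes at +/- 3."""
    return np.logaddexp(-0.5 * (x - 3) ** 2, -0.5 * (x + 3) ** 2)

x, chain = 0.0, []
for _ in range(50_000):
    prop = x + rng.normal(scale=2.5)             # symmetric random-walk proposal
    if np.log(rng.uniform()) < log_target(prop) - log_target(x):
        x = prop                                  # accept; else keep current state
    chain.append(x)

chain = np.array(chain[5_000:])                   # discard burn-in
print("posterior mean %.3f (symmetric target: ~0)" % chain.mean())
```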