Probability and Stochastic Modeling not only covers all the topics found in a traditional introductory probability course, but also emphasizes stochastic modeling, including Markov chains, birth-death processes, and reliability models. Unlike most undergraduate-level probability texts, the book also focuses on increasingly important areas, such as martingales, classification of dependency structures, and risk evaluation. Numerous examples, exercises, and models using real-world data demonstrate the practical possibilities and restrictions of different approaches and help students grasp general concepts and theoretical results. The text is suitable for majors in mathematics and statistics as well as majors in computer science, economics, finance, and physics. The author offers two explicit options for teaching the material, reflected in "routes" designated by special "roadside" markers. The first route contains basic, self-contained material for a one-semester course. The second provides a more complete exposition for a two-semester course or self-study.
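The birth-death processes mentioned in this blurb can be illustrated with a short simulation. The sketch below is not taken from the book; the transition probabilities and the state cap are arbitrary assumptions chosen purely for illustration.

```python
import random

def simulate_birth_death(birth, death, n_max, steps, start=0, seed=42):
    """Simulate a simple birth-death chain on the states 0..n_max.

    At each step the population increases by 1 with probability `birth`
    (unless at the cap), decreases by 1 with probability `death`
    (unless at zero), and otherwise stays where it is.
    """
    rng = random.Random(seed)
    state = start
    path = [state]
    for _ in range(steps):
        u = rng.random()
        if u < birth and state < n_max:
            state += 1
        elif u < birth + death and state > 0:
            state -= 1
        path.append(state)
    return path

# Slight upward drift (birth > death), capped at 10 individuals.
path = simulate_birth_death(birth=0.3, death=0.2, n_max=10, steps=1000)
```

Because each transition moves the state by at most one, the sample path is a skip-free chain, which is the defining feature of birth-death processes.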
The use of computational methods in statistics to tackle complex problems and high-dimensional data, like the widespread availability of computer technology, is nothing new. The range of applications, however, is unprecedented. As often occurs, new and complex data types require new strategies, demanding the development of novel statistical methods and suggesting stimulating mathematical problems. This book is addressed to researchers working at the forefront of the statistical analysis of complex systems and using computationally intensive statistical methods.
This textbook covers the fundamentals of statistical inference and statistical theory, including Bayesian and frequentist approaches, with as little emphasis on the underlying mathematics as possible. This book is about some of the basic principles of statistics that are necessary to understand and evaluate methods for analyzing complex data sets. The likelihood function is used for pure likelihood inference throughout the book. There is also coverage of severity and finite population sampling. The material was developed from an introductory statistical theory course taught by the author at the Johns Hopkins University's Department of Biostatistics. Students and instructors in public health programs will benefit from the likelihood modeling approach that is used throughout the text, which will also appeal to epidemiologists and psychometricians. After a brief introduction, there are chapters on estimation, hypothesis testing, and maximum likelihood modeling. The book concludes with sections on Bayesian computation and inference. An appendix contains unique coverage of the interpretation of probability, along with coverage of probability and mathematical concepts.
An indispensable guide to understanding and designing modern experiments. The tools and techniques of Design of Experiments (DOE) allow researchers to successfully collect, analyze, and interpret data across a wide array of disciplines. Statistical Analysis of Designed Experiments provides a modern and balanced treatment of DOE methodology with thorough coverage of the underlying theory and standard designs of experiments, guiding the reader through applications to research in various fields such as engineering, medicine, business, and the social sciences. The book supplies a foundation for the subject, beginning with basic concepts of DOE and a review of elementary normal theory statistical methods. Subsequent chapters present a uniform, model-based approach to DOE. Each design is presented in a comprehensive format and is accompanied by a motivating example, discussion of the applicability of the design, and a model for its analysis using statistical methods such as graphical plots, analysis of variance (ANOVA), confidence intervals, and hypothesis tests. Numerous theoretical and applied exercises are provided in each chapter, and answers to selected exercises are included at the end of the book. An appendix features three case studies that illustrate the challenges often encountered in real-world experiments, such as randomization, unbalanced data, and outliers. Minitab(R) software is used to perform analyses throughout the book, and an accompanying FTP site houses additional exercises and data sets. With its breadth of real-world examples and accessible treatment of both theory and applications, Statistical Analysis of Designed Experiments is a valuable book for experimental design courses at the upper-undergraduate and graduate levels. It is also an indispensable reference for practicing statisticians, engineers, and scientists who would like to further their knowledge of DOE.
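The one-way ANOVA mentioned in this blurb reduces to a simple ratio of between-group to within-group mean squares. The library-free sketch below is purely illustrative (the example data are made up) and is not drawn from the book or its Minitab analyses.

```python
def one_way_anova_f(groups):
    """Compute the one-way ANOVA F statistic for a list of groups."""
    k = len(groups)                              # number of groups
    n = sum(len(g) for g in groups)              # total sample size
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    # Between-group sum of squares, k - 1 degrees of freedom.
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    # Within-group sum of squares, n - k degrees of freedom.
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Made-up data: three groups, the third with a clearly different mean.
f_stat = one_way_anova_f([[1, 2, 3], [2, 3, 4], [7, 8, 9]])  # F = 31.0
```

A large F statistic, as here, indicates that the variation between group means dwarfs the variation within groups.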
In honor of the work of Professor Shunji Osaki, Stochastic Reliability and Maintenance Modeling provides a comprehensive study of the legacy of, and ongoing research in, stochastic reliability and maintenance modeling. Covering associated application areas such as dependable computing, performance evaluation, software engineering, and communication engineering, distinguished researchers review and build on Professor Osaki's contributions over the last four decades. Fundamental yet significant research results are presented and discussed clearly alongside new ideas and topics in stochastic reliability and maintenance modeling to inspire future research. Across 15 chapters, readers gain the knowledge and understanding to apply reliability and maintenance theory to computer and communication systems. Stochastic Reliability and Maintenance Modeling is ideal for graduate students and researchers in reliability engineering, and for workers, managers and engineers engaged in computer, maintenance and management work.
This is the standard textbook for courses on probability and statistics, now substantially updated. While helping students to develop their problem-solving skills, the author motivates students with practical applications from various areas of ECE that demonstrate the relevance of probability theory to engineering practice. Included are chapter overviews, summaries, checklists of important terms, annotated references, and a wide selection of fully worked-out real-world examples. In this edition, the Computer Methods sections have been updated and substantially enhanced and new problems have been added.
I have found many thousands more readers than I ever looked for. I have no right to say to these, You shall not find fault with my art, or fall asleep over my pages; but I ask you to believe that this person writing strives to tell the truth. If there is not that, there is nothing. William Makepeace Thackeray, The History of Pendennis. This is a monograph/textbook on the probabilistic aspects of gambling, intended for those already familiar with probability at the post-calculus, pre-measure-theory level. Gambling motivated much of the early development of probability theory (David 1962). Indeed, some of the earliest works on probability include Girolamo Cardano's [1501-1576] Liber de Ludo Aleae (The Book on Games of Chance, written c. 1565, published 1663), Christiaan Huygens's [1629-1695] "De ratiociniis in ludo aleae" ("On reckoning in games of chance," 1657), Jacob Bernoulli's [1654-1705] Ars Conjectandi (The Art of Conjecturing, written c. 1690, published 1713), Pierre Rémond de Montmort's [1678-1719] Essay d'analyse sur les jeux de hasard (Analytical Essay on Games of Chance, 1708, 1713), and Abraham De Moivre's [1667-1754] The Doctrine of Chances (1718, 1738, 1756). Gambling also had a major influence on 20th-century probability theory, as it provided the motivation for the concept of a martingale.
The concise yet authoritative presentation of key techniques for basic mixture experiments. Inspired by the author's bestselling advanced book on the topic, A Primer on Experiments with Mixtures provides an introductory presentation of the key principles behind experimenting with mixtures. Outlining useful techniques through an applied approach with examples from real research situations, the book supplies a comprehensive discussion of how to design and set up basic mixture experiments, then analyze the data and draw inferences from results. Drawing from his extensive experience teaching the topic at various levels, the author presents the mixture experiments in an easy-to-follow manner that is void of unnecessary formulas and theory. Succinct presentations explore key methods and techniques for carrying out basic mixture experiments, including:
* Designs and models for exploring the entire simplex factor space, with coverage of simplex-lattice and simplex-centroid designs, canonical polynomials, the plotting of individual residuals, and axial designs
* Multiple constraints on the component proportions in the form of lower and/or upper bounds, introducing L-Pseudocomponents, multicomponent constraints, and multiple lattice designs for major and minor component classifications
* Techniques for analyzing mixture data such as model reduction and screening components, as well as additional topics such as measuring the leverage of certain design points
* Models containing ratios of the components, Cox's mixture polynomials, and the fitting of a slack variable model
* A review of least squares and the analysis of variance for fitting data
Each chapter concludes with a summary, and appendices provide details on the technical aspects of the material. Throughout the book, exercise sets with selected answers allow readers to test their comprehension of the material, and References and Recommended Reading sections outline further resources for study of the presented topics.
A Primer on Experiments with Mixtures is an excellent book for one-semester courses on mixture designs and can also serve as a supplement for design of experiments courses at the upper-undergraduate and graduate levels. It is also a suitable reference for practitioners and researchers who have an interest in experiments with mixtures and would like to learn more about the related mixture designs and models.
Mathematical epidemiology of infectious diseases usually involves describing the flow of individuals between mutually exclusive infection states. One of the key parameters describing the transition from the susceptible to the infected class is the hazard of infection, often referred to as the force of infection. The force of infection reflects the degree of contact with potential for transmission between infected and susceptible individuals. The mathematical relation between the force of infection and effective contact patterns is generally assumed to be subject to the mass action principle, which yields the necessary information to estimate the basic reproduction number, another key parameter in infectious disease epidemiology. It is within this context that the Center for Statistics (CenStat, I-Biostat, Hasselt University) and the Centre for the Evaluation of Vaccination and the Centre for Health Economic Research and Modelling Infectious Diseases (CEV, CHERMID, Vaccine and Infectious Disease Institute, University of Antwerp) have collaborated over the past 15 years. This book demonstrates the past and current research activities of these institutes and can be considered a milestone in this collaboration. The book focuses on the application of modern statistical methods and models to estimate infectious disease parameters. We want to provide the readers with software guidance, such as R packages, and with data, as far as they can be made publicly available.
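Under the mass action principle described above, the force of infection and the basic reproduction number take especially simple forms for a basic SIR-type model. The sketch below is a generic illustration under that assumption, not code from the book (which works with R packages); the parameter values are made up.

```python
def force_of_infection(beta, infectious, population):
    """Mass-action force of infection: lambda = beta * I / N,
    where beta is the effective transmission rate."""
    return beta * infectious / population

def basic_reproduction_number(beta, gamma):
    """R0 for a simple SIR model: transmission rate over recovery rate."""
    return beta / gamma

lam = force_of_infection(beta=0.5, infectious=10, population=1000)  # 0.005
r0 = basic_reproduction_number(beta=0.5, gamma=0.25)                # 2.0
```

Here each susceptible individual is infected at rate lambda, and R0 > 1 signals that an epidemic can take off in a fully susceptible population.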
This book presents some recent developments in correlated data analysis. It utilizes the class of dispersion models as marginal components in the formulation of joint models for correlated data. This enables the book to handle a broader range of data types than those analyzed by traditional generalized linear models. One example is correlated angular data. This book provides a systematic treatment for the topic of estimating functions. Under this framework, both generalized estimating equations (GEE) and quadratic inference functions (QIF) are studied as special cases. In addition to marginal models and mixed-effects models, this book covers topics on joint regression analysis based on Gaussian copulas and generalized state space models for longitudinal data from long time series. Various real-world data examples, numerical illustrations and software usage tips are presented throughout the book. This book has evolved from lecture notes on longitudinal data analysis, and may be considered suitable as a textbook for a graduate course on correlated data analysis. This book is inclined more towards technical details regarding the underlying theory and methodology used in software-based applications. Therefore, the book will serve as a useful reference for those who want theoretical explanations to puzzles arising from data analyses or deeper understanding of underlying theory related to analyses.
Nature didn't design human beings to be statisticians, and in fact our minds are more naturally attuned to spotting the saber-toothed tiger than seeing the jungle he springs from. Yet scientific discovery in practice is often more jungle than tiger. Those of us who devote our scientific lives to the deep and satisfying subject of statistical inference usually do so in the face of a certain under-appreciation from the public, and also (though less so these days) from the wider scientific world. With this in mind, it feels very nice to be over-appreciated for a while, even at the expense of weathering a 70th birthday. (Are we certain that some terrible chronological error hasn't been made?) Carl Morris and Rob Tibshirani, the two colleagues I've worked most closely with, both fit my ideal profile of the statistician as a mathematical scientist working seamlessly across wide areas of theory and application. They seem to have chosen the papers here in the same catholic spirit, and then cajoled an all-star cast of statistical savants to comment on them.
Combines recent developments in resampling technology (including the bootstrap) with new methods for multiple testing that are easy to use, convenient to report and widely applicable. Software from SAS Institute is available to execute many of the methods and programming is straightforward for other applications. Explains how to summarize results using adjusted p -values which do not necessitate cumbersome table look-ups. Demonstrates how to incorporate logical constraints among hypotheses, further improving power.
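The resampling-based adjusted p-values this blurb refers to can be illustrated with a generic single-step max-T procedure. This is a hypothetical Python sketch under an assumed exchangeable null, not the book's SAS implementation; the null draws below are fabricated for demonstration.

```python
import random

def maxt_adjusted_pvalues(observed, null_draws):
    """Single-step max-T adjusted p-values from resampled null statistics.

    observed:   observed test statistics (larger = more extreme).
    null_draws: resampled statistic vectors under the complete null,
                e.g. from a bootstrap or permutation scheme.
    """
    # For each resample, record the maximum statistic across hypotheses;
    # comparing each observed statistic to this max controls the FWER.
    max_stats = [max(draw) for draw in null_draws]
    b = len(null_draws)
    return [sum(m >= t for m in max_stats) / b for t in observed]

# Made-up null: 2000 resampled vectors of three half-normal statistics.
rng = random.Random(0)
null_draws = [[abs(rng.gauss(0, 1)) for _ in range(3)] for _ in range(2000)]
adjusted = maxt_adjusted_pvalues([0.5, 2.0, 3.5], null_draws)
```

Because the adjustment uses the joint distribution of the statistics rather than a worst-case bound, it is typically less conservative than Bonferroni while still controlling the family-wise error rate.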
This volume contains the proceedings of the XII Symposium of Probability and Stochastic Processes, which took place at Universidad Autonoma de Yucatan in Merida, Mexico, on November 16-20, 2015. This was the twelfth in a series of ongoing biannual meetings aimed at showcasing the research of Mexican probabilists as well as promoting new collaborations among the participants. The book features articles drawn from different research areas in probability and stochastic processes, such as: risk theory, limit theorems, stochastic partial differential equations, random trees, stochastic differential games, stochastic control, and coalescence. Two of the main manuscripts survey recent developments on stochastic control and scaling limits of Markov-branching trees, written by Kazutoshi Yamasaki and Benedicte Haas, respectively. The research-oriented manuscripts provide new advances in active research fields in Mexico. The wide selection of topics makes the book accessible to advanced graduate students and researchers in probability and stochastic processes.
This monograph focuses on the construction of regression models with linear and non-linear inequality constraints from a theoretical point of view. Unlike previous publications, this volume analyses the properties of regression with inequality constraints, investigating the flexibility of inequality constraints and their ability to adapt in the presence of additional a priori information. The implementation of inequality constraints improves the accuracy of models and decreases the likelihood of errors. Based on the obtained theoretical results, a computational technique for estimation and prognostication problems is suggested. This approach lends itself to numerous applications in various practical problems, several of which are discussed in detail. The book is a useful resource for graduate and PhD students, as well as for researchers who specialize in applied statistics and optimization. It may also be useful to specialists in other branches of applied mathematics, technology, econometrics and finance.
Probabilistic Methods for Financial and Marketing Informatics aims to provide students with insights and a guide explaining how to apply probabilistic reasoning to business problems. Rather than dwelling on rigor, algorithms, and proofs of theorems, the authors concentrate on showing examples and using the software package Netica to represent and solve problems. The book contains unique coverage of probabilistic reasoning topics applied to business problems, including marketing, banking, operations management, and finance. It shares insights about when and why probabilistic methods can and cannot be used effectively. This book is recommended for all R&D professionals and students who are involved with industrial informatics, that is, applying the methodologies of computer science and engineering to business or industry information. This includes computer science and other professionals in the data management and data mining field whose interests are business and marketing information in general, and who want to apply AI and probabilistic methods to their problems in order to better predict how well a product or service will do in a particular market, for instance. Typical fields where this technology is used are in advertising, venture capital decision making, operational risk measurement in any industry, credit scoring, and investment science.
This book lays the foundations for a theory of almost periodic stochastic processes and their applications to various stochastic differential equations, functional differential equations with delay, partial differential equations, and difference equations. It is in part a sequel to the authors' recent work on almost periodic stochastic difference and differential equations, and has the particularity of being the first book entirely devoted to almost periodic random processes and their applications. The topics treated include the existence, uniqueness, and stability of solutions for abstract stochastic difference and differential equations.
Analyzing observed or measured data is an important step in applied sciences. The recent increase in computer capacity has resulted in a revolution both in data collection and data analysis. An increasing number of scientists, researchers and students are venturing into statistical data analysis; hence the need for more guidance in this field, which was previously dominated mainly by statisticians. This handbook fills the gap in the range of textbooks on data analysis. Written in a dictionary format, it will serve as a comprehensive reference book in a rapidly growing field. However, this book is more structured than an ordinary dictionary, where each entry is a separate, self-contained entity. The authors provide not only definitions and short descriptions, but also offer an overview of the different topics. Therefore, the handbook can also be used as a companion to textbooks for undergraduate or graduate courses. 1700 entries are given in alphabetical order grouped into 20 topics and each topic is organized in a hierarchical fashion. Additional specific entries on a topic can be easily found by following the cross-references in a top-down manner. Several figures and tables are provided to enhance the comprehension of the topics and a list of acronyms helps to locate the full terminologies. The bibliography offers suggestions for further reading.
This book provides an overview of the current state-of-the-art of nonlinear time series analysis, richly illustrated with examples, pseudocode algorithms and real-world applications. Avoiding a "theorem-proof" format, it shows concrete applications on a variety of empirical time series. The book can be used in graduate courses in nonlinear time series and at the same time also includes interesting material for more advanced readers. Though it is largely self-contained, readers require an understanding of basic linear time series concepts, Markov chains and Monte Carlo simulation methods. The book covers time-domain and frequency-domain methods for the analysis of both univariate and multivariate (vector) time series. It makes a clear distinction between parametric models on the one hand, and semi- and nonparametric models/methods on the other. This offers the reader the option of concentrating exclusively on one of these nonlinear time series analysis methods. To make the book as user friendly as possible, major supporting concepts and specialized tables are appended at the end of every chapter. In addition, each chapter concludes with a set of key terms and concepts, as well as a summary of the main findings. Lastly, the book offers numerous theoretical and empirical exercises, with answers provided by the author in an extensive solutions manual.
These proceedings from the 37th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (MaxEnt 2017), held in Sao Carlos, Brazil, aim to expand the available research on Bayesian methods and promote their application in the scientific community. They gather research from scholars in many different fields who use inductive statistics methods and focus on the foundations of the Bayesian paradigm, their comparison to objectivistic or frequentist statistics counterparts, and their appropriate applications. Interest in the foundations of inductive statistics has been growing with the increasing availability of Bayesian methodological alternatives, and scientists now face much more difficult choices in finding the optimal methods to apply to their problems. By carefully examining and discussing the relevant foundations, the scientific community can avoid applying Bayesian methods on a merely ad hoc basis. For over 35 years, the MaxEnt workshops have explored the use of Bayesian and Maximum Entropy methods in scientific and engineering application contexts. The workshops welcome contributions on all aspects of probabilistic inference, including novel techniques and applications, and work that sheds new light on the foundations of inference. Areas of application in these workshops include astronomy and astrophysics, chemistry, communications theory, cosmology, climate studies, earth science, fluid mechanics, genetics, geophysics, machine learning, materials science, medical imaging, nanoscience, source separation, thermodynamics (equilibrium and non-equilibrium), particle physics, plasma physics, quantum mechanics, robotics, and the social sciences. Bayesian computational techniques such as Markov chain Monte Carlo sampling are also regular topics, as are approximate inferential methods. 
Foundational issues involving probability theory and information theory, as well as novel applications of inference to illuminate the foundations of physical theories, are also of keen interest.
In April 2007, the Deutsche Forschungsgemeinschaft (DFG) approved the Priority Program 1324 "Mathematical Methods for Extracting Quantifiable Information from Complex Systems." This volume presents a comprehensive overview of the most important results obtained over the course of the program. Mathematical models of complex systems provide the foundation for further technological developments in science, engineering and computational finance. Motivated by the trend toward steadily increasing computer power, ever more realistic models have been developed in recent years. These models have also become increasingly complex, and their numerical treatment poses serious challenges. Recent developments in mathematics suggest that, in the long run, much more powerful numerical solution strategies could be derived if the interconnections between the different fields of research were systematically exploited at a conceptual level. Accordingly, a deeper understanding of the mathematical foundations as well as the development of new and efficient numerical algorithms were among the main goals of this Priority Program. The treatment of high-dimensional systems is clearly one of the most challenging tasks in applied mathematics today. Since the problem of high-dimensionality appears in many fields of application, the above-mentioned synergy and cross-fertilization effects were expected to make a great impact. To be truly successful, the following issues had to be kept in mind: theoretical research and practical applications had to be developed hand in hand; moreover, it has proven necessary to combine different fields of mathematics, such as numerical analysis and computational stochastics. To keep the whole program sufficiently focused, we concentrated on specific but related fields of application that share common characteristics and as such, they allowed us to use closely related approaches.
This Handbook covers latent variable models, a flexible class of models for multivariate data used to explore relationships among observed and latent variables.
This book covers the method of metric distances and its application in probability theory and other fields. The method is fundamental in the study of limit theorems and generally in assessing the quality of approximations to a given probabilistic model. The method of metric distances is developed to study stability problems and reduces to the selection of an ideal or the most appropriate metric for the problem under consideration and a comparison of probability metrics. After describing the basic structure of probability metrics and providing an analysis of the topologies in the space of probability measures generated by different types of probability metrics, the authors study stability problems by providing a characterization of the ideal metrics for a given problem and investigating the main relationships between different types of probability metrics. The presentation is provided in a general form, although specific cases are considered as they arise in the process of finding supplementary bounds or in applications to important special cases. Svetlozar T. Rachev is the Frey Family Foundation Chair of Quantitative Finance, Department of Applied Mathematics and Statistics, SUNY-Stony Brook and Chief Scientist of Finanlytica, USA. Lev B. Klebanov is a Professor in the Department of Probability and Mathematical Statistics, Charles University, Prague, Czech Republic. Stoyan V. Stoyanov is a Professor at EDHEC Business School and Head of Research, EDHEC-Risk Institute-Asia (Singapore). Frank J. Fabozzi is a Professor at EDHEC Business School.
Limit theorems and asymptotic results form a central topic in probability theory and mathematical statistics. New and non-classical limit theorems have been discovered for processes in random environments, especially in connection with random matrix theory and free probability. These questions and the techniques for answering them combine asymptotic enumerative combinatorics, particle systems and approximation theory, and are important for new approaches in geometric and metric number theory as well. Thus, the contributions in this book include a wide range of applications with surprising connections ranging from longest common subsequences for words, permutation groups, random matrices and free probability to entropy problems and metric number theory. The book is the product of a conference that took place in August 2011 in Bielefeld, Germany to celebrate the 60th birthday of Friedrich Gotze, a noted expert in this field.
Mean field approximation has been adopted to describe macroscopic phenomena from microscopic overviews, and its development is still in progress in fluid mechanics, gauge theory, plasma physics, quantum chemistry, mathematical oncology, and non-equilibrium thermodynamics. In spite of the wide range of scientific areas that are concerned with the mean field theory, a unified study of its mathematical structure has not been discussed explicitly in the open literature. The benefit of this point of view on nonlinear problems should have significant impact on future research, as will be seen from the underlying features of self-assembly or bottom-up self-organization, which are illustrated in a unified way. The aim of this book is to formulate the variational and hierarchical aspects of the equations that arise in the mean field theory from macroscopic profiles to microscopic principles, from dynamics to equilibrium, and from biological models to models that arise from chemistry and physics.
This book brings together important topics of current research in probabilistic graphical modeling, learning from data and probabilistic inference. Coverage includes such topics as the characterization of conditional independence, the learning of graphical models with latent variables, and extensions to the influence diagram formalism as well as important application fields, such as the control of vehicles, bioinformatics and medicine.