The purpose of this book is to present a comprehensive account of the different definitions of stochastic integration for fractional Brownian motion (fBm), and to give applications of the resulting theory. Particular emphasis is placed on studying the relations between the different approaches. Readers are assumed to be familiar with probability theory and stochastic analysis, although the mathematical techniques used in the book are thoroughly exposed and some of the necessary prerequisites, such as classical white noise theory and fractional calculus, are recalled in the appendices. This book will be a valuable reference for graduate students and researchers in mathematics, biology, meteorology, physics, engineering and finance.
Hereditary systems (or systems with either delay or after-effects) are widely used to model processes in physics, mechanics, control, economics and biology. An important element in their study is their stability. Stability conditions for difference equations with delay can be obtained using a Lyapunov functional.
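The kind of stability condition mentioned above can be illustrated with a small sketch. The equation and coefficients below are invented for illustration, not taken from the book: for a scalar delay difference equation x[n+1] = a*x[n] + b*x[n-k], the classical sufficient condition |a| + |b| < 1 (derivable via a Lyapunov functional) guarantees decay, which direct simulation makes visible.

```python
# Illustrative sketch (not from the book): simulate the scalar delay
# difference equation x[n+1] = a*x[n] + b*x[n-k] and observe that the
# solution decays when |a| + |b| < 1, a classical sufficient stability
# condition obtainable via a Lyapunov functional.

def simulate(a, b, k, x0=1.0, steps=200):
    """Simulate x[n+1] = a*x[n] + b*x[n-k] from a constant initial history."""
    x = [x0] * (k + 1)  # initial history x[-k], ..., x[0]
    for _ in range(steps):
        x.append(a * x[-1] + b * x[-(k + 1)])
    return x

stable = simulate(a=0.5, b=0.3, k=2)    # |a| + |b| = 0.8 < 1: decays
unstable = simulate(a=0.9, b=0.4, k=2)  # |a| + |b| = 1.3 > 1: may blow up

print(abs(stable[-1]))    # tends to 0
print(abs(unstable[-1]))  # grows without bound
```

The condition is only sufficient: violating it does not prove instability, but for these illustrative coefficients the growth is unmistakable.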
This book covers applied statistics for the social sciences with upper-level undergraduate students in mind. The chapters are based on lecture notes from an introductory statistics course the author has taught for a number of years. The book integrates statistics into the research process, with early chapters covering basic philosophical issues underpinning the process of scientific research. These include the concepts of deductive reasoning and the falsifiability of hypotheses, the development of a research question and hypotheses, and the process of data collection and measurement. Probability theory is then covered extensively with a focus on its role in laying the foundation for statistical reasoning and inference. After illustrating the Central Limit Theorem, later chapters address the key, basic statistical methods used in social science research, including various z and t tests and confidence intervals, nonparametric chi square tests, one-way analysis of variance, correlation, simple regression, and multiple regression, with a discussion of the key issues involved in thinking about causal processes. Concepts and topics are illustrated using both real and simulated data. The penultimate chapter presents rules and suggestions for the successful presentation of statistics in tabular and graphic formats, and the final chapter offers suggestions for subsequent reading and study.
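The Central Limit Theorem illustration mentioned above can be sketched generically (this is a standard demonstration, not an excerpt from the book): means of many independent uniform draws concentrate around the population mean, with spread shrinking like 1/sqrt(n).

```python
# Generic CLT illustration (not from the book): sample means of
# uniform(0, 1) draws concentrate near the population mean 0.5, with
# standard deviation close to sqrt(1/12) / sqrt(n).
import random
import statistics

random.seed(0)

n = 100      # draws per sample
reps = 2000  # number of sample means to compute

means = [statistics.fmean(random.random() for _ in range(n)) for _ in range(reps)]

center = statistics.fmean(means)  # close to the population mean 0.5
spread = statistics.stdev(means)  # close to sqrt(1/12) / sqrt(100) ~ 0.029
print(round(center, 3), round(spread, 3))
```

Plotting a histogram of `means` shows the familiar bell shape even though the underlying draws are uniform.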
The Bayesian network is one of the most important architectures for representing and reasoning with multivariate probability distributions. When used in conjunction with specialized informatics tools, it enables a range of real-world applications. Probabilistic Methods for BioInformatics explains the application of probability and statistics, in particular Bayesian networks, to genetics. This book provides background material on probability, statistics, and genetics, and then moves on to discuss Bayesian networks and applications to bioinformatics. Rather than getting bogged down in proofs and algorithms, probabilistic methods used for biological information and Bayesian networks are explained in an accessible way using applications and case studies. The many useful applications of Bayesian networks that have been developed in the past 10 years are discussed, forming a review of all the significant work in a field that will arguably become the most prevalent method in biological data analysis.
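The core idea behind the Bayesian networks described above can be shown in a minimal sketch. The network, variables, and probabilities here are a toy example invented for illustration, not taken from the book: the joint distribution factorizes into each node's conditional probability given its parents, which makes inference straightforward.

```python
# Toy Bayesian network invented for illustration: Gene -> Trait, so
# P(gene, trait) = P(gene) * P(trait | gene).
p_gene = {"variant": 0.1, "wild": 0.9}  # P(Gene)
p_trait_given_gene = {                  # P(Trait | Gene)
    "variant": {"present": 0.8, "absent": 0.2},
    "wild": {"present": 0.05, "absent": 0.95},
}

def joint(gene, trait):
    """Joint probability via the network factorization."""
    return p_gene[gene] * p_trait_given_gene[gene][trait]

# Bayesian inference: P(Gene = variant | Trait = present)
evidence = sum(joint(g, "present") for g in p_gene)
posterior = joint("variant", "present") / evidence
print(round(posterior, 3))  # -> 0.64
```

Even this two-node example shows the pattern that scales to large genetic networks: local conditional tables combine, via the chain rule, into a full joint distribution that supports queries over any subset of variables.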
This book presents the R software environment as a key tool for oceanographic computations and provides a rationale for using R over the more widely-used tools of the field such as MATLAB. Kelley provides a general introduction to R before introducing the 'oce' package. This package greatly simplifies oceanographic analysis by handling the details of discipline-specific file formats, calculations, and plots. Designed for real-world application and developed with open-source protocols, oce supports a broad range of practical work. Generic functions take care of general operations such as subsetting and plotting data, while specialized functions address more specific tasks such as tidal decomposition, hydrographic analysis, and ADCP coordinate transformation. In addition, the package makes it easy to document work, because its functions automatically update processing logs stored within its data objects. Kelley teaches key R functions using classic examples from the history of oceanography, specifically the work of Alfred Redfield, Gordon Riley, J. Tuzo Wilson, and Walter Munk. Acknowledging the pervasive popularity of MATLAB, the book provides advice to users who would like to switch to R. Including a suite of real-life applications and over 100 exercises and solutions, the treatment is ideal for oceanographers, technicians, and students who want to add R to their list of tools for oceanographic analysis.
Probability and Stochastic Modeling not only covers all the topics found in a traditional introductory probability course, but also emphasizes stochastic modeling, including Markov chains, birth-death processes, and reliability models. Unlike most undergraduate-level probability texts, the book also focuses on increasingly important areas, such as martingales, classification of dependency structures, and risk evaluation. Numerous examples, exercises, and models using real-world data demonstrate the practical possibilities and restrictions of different approaches and help students grasp general concepts and theoretical results. The text is suitable for majors in mathematics and statistics as well as majors in computer science, economics, finance, and physics. The author offers two explicit options for teaching the material, which is reflected in "routes" designated by special "roadside" markers. The first route contains basic, self-contained material for a one-semester course. The second provides a more complete exposition for a two-semester course or self-study.
This book provides a rigorous mathematical treatment of the non-linear stochastic filtering problem using modern methods. Particular emphasis is placed on the theoretical analysis of numerical methods for the solution of the filtering problem via particle methods. The book should provide sufficient background to enable study of the recent literature. While no prior knowledge of stochastic filtering is required, readers are assumed to be familiar with measure theory, probability theory and the basics of stochastic processes. Most of the technical results that are required are stated and proved in the appendices. Exercises and solutions are included.
With an emphasis on models and techniques, this textbook introduces many of the fundamental concepts of stochastic modeling that are now a vital component of almost every scientific investigation. In particular, emphasis is placed on laying the foundation for solving problems in reliability, insurance, finance, and credit risk. The material has been carefully selected to cover the basic concepts and techniques on each topic, making this an ideal introductory gateway to more advanced learning. With exercises and solutions to selected problems accompanying each chapter, this textbook is for a wide audience including advanced undergraduate and beginning-level graduate students, researchers, and practitioners in mathematics, statistics, engineering, and economics.
This second edition sees the light three years after the first one: too short a time to feel seriously concerned to redesign the entire book, but sufficient to be challenged by the prospect of sharpening our investigation on the working of econometric dynamic models and to be inclined to change the title of the new edition by dropping the "Topics in" of the former edition. After considerable soul searching we agreed to include several results related to topics already covered, as well as additional sections devoted to new and sophisticated techniques, which hinge mostly on the latest research work on linear matrix polynomials by the second author. This explains the growth of chapter one and the deeper insight into representation theorems in the last chapter of the book. The role of the second chapter is that of providing a bridge between the mathematical techniques in the backstage and the econometric profiles in the forefront of dynamic modelling. For this purpose, we decided to add a new section where the reader can find the stochastic rationale of vector autoregressive specifications in econometrics. The third (and last) chapter improves on that of the first edition by reaping the fruits of the thorough analytic equipment previously drawn up.
A wide variety of processes occur on multiple scales, either naturally or as a consequence of measurement. This book contains methodology for the analysis of data that arise from such multiscale processes. The book brings together a number of recent developments and makes them accessible to a wider audience. Taking a Bayesian approach allows for full accounting of uncertainty, and also addresses the delicate issue of uncertainty at multiple scales. The Bayesian approach also facilitates the use of knowledge from prior experience or data, and these methods can handle different amounts of prior knowledge at different scales, as often occurs in practice. The book is aimed at statisticians, applied mathematicians, and engineers working on problems dealing with multiscale processes in time and/or space, such as in engineering, finance, and environmetrics. The book will also be of interest to those working on multiscale computation research. The main prerequisites are knowledge of Bayesian statistics and basic Markov chain Monte Carlo methods. A number of real-world examples are thoroughly analyzed in order to demonstrate the methods and to assist the readers in applying these methods to their own work. To further assist readers, the authors are making source code (for R) available for many of the basic methods discussed herein.
Statistical Methods in Food and Consumer Research continues to be the only book to focus solely on the statistical techniques used in sensory testing of foods, pharmaceuticals, cosmetics, and other consumer products.
This book offers a practical guide to Agent Based economic modeling, adopting a "learning by doing" approach to help the reader master the fundamental tools needed to create and analyze Agent Based models. After providing the reader with a basic "toolkit" for Agent Based modeling, it presents and discusses didactic models of real financial and economic systems in detail. While stressing the main features and advantages of the bottom-up perspective inherent to this approach, the book also highlights the logic and practical steps that characterize the model building procedure. A detailed description of the underlying codes, developed using R and C, is also provided. In addition, each didactic model is accompanied by exercises and applications designed to promote active learning on the part of the reader. Following the same approach, the book also presents several complementary tools required for the analysis and validation of the models, such as sensitivity experiments, calibration exercises, and economic network and statistical distribution analyses. By the end of the book, the reader will have gained a deeper understanding of the Agent Based methodology and be prepared to use the fundamental techniques required to start developing their own economic models. Accordingly, "Economics with Heterogeneous Interacting Agents" will be of particular interest to graduate and postgraduate students, as well as to academic institutions and lecturers interested in including an overview of the AB approach to economic modeling in their courses.
An in-depth look at current issues, new research findings, and interdisciplinary exchange in survey methodology and processing. Survey Measurement and Process Quality extends the marriage of traditional survey issues and continuous quality improvement further than any other contemporary volume. It documents the current state of the field, reports new research findings, and promotes interdisciplinary exchange in questionnaire design, data collection, data processing, quality assessment, and effects of errors on estimation and analysis. The book's five sections discuss a broad range of issues and topics in each of five major areas.
Survey Measurement and Process Quality is an indispensable resource for survey practitioners and managers as well as an excellent supplemental text for undergraduate and graduate courses and special seminars.
This book describes recent trends in growth curve modelling research in various subject areas, both theoretical and applied. It explains and explores the growth curve model as a valuable tool for gaining insights into several research topics of interest to academics and practitioners alike. The book's primary goal is to disseminate applications of the growth curve model to real-world problems, and to address related theoretical issues. The book will be of interest to a broad readership: for applied statisticians, it illustrates the importance of growth curve modelling as applied to actual field data; for more theoretically inclined statisticians, it highlights a number of theoretical issues that warrant further investigation.
This textbook is the result of the enhancement of several courses on non-equilibrium statistics, stochastic processes, stochastic differential equations, anomalous diffusion and disorder. The target audience includes students of physics, mathematics, biology, chemistry, and engineering at undergraduate and graduate level with a grasp of the basic elements of mathematics and physics of the fourth year of a typical undergraduate course. Little-known physical and mathematical concepts are described in dedicated sections and specific exercises throughout the text, as well as in appendices. Physical-mathematical motivation is the main driving force for the development of this text. It presents the academic topics of probability theory and stochastic processes as well as new educational aspects in the presentation of non-equilibrium statistical theory and stochastic differential equations. In particular, it discusses the problem of irreversibility in that context and Fokker-Planck dynamics. An introduction to fluctuations around metastable and unstable points is given. It also describes the relaxation theory of non-stationary Markov systems that are periodic in time, and introduces the theory of finite and infinite transport in disordered networks, with a discussion of anomalous diffusion. Further, it provides the basis for establishing the relationship between quantum aspects of the theory of linear response and the calculation of diffusion coefficients in amorphous systems.
This proceedings volume contains nine selected papers that were presented at the International Symposium in Statistics 2012, held at Memorial University from July 16 to 18. The nine papers cover three different areas of longitudinal data analysis: four deal with longitudinal data subject to measurement errors, four with incomplete longitudinal data, and the last with inferences for longitudinal data subject to outliers. Unlike in the independence setup, inferences in measurement-error, missing-value, and/or outlier models are not adequately discussed in the longitudinal setup, and the papers in this volume provide details on successes and further challenges in these three areas. This volume is the first outlet for current research in these three important areas of the longitudinal setup. The nine papers, presented in three parts, clearly reveal the similarities and differences in the inference techniques used for the three longitudinal setups. Because the research problems considered in this volume are encountered in many real-life studies in the biomedical, clinical, epidemiological, socioeconomic, econometric, and engineering fields, the volume should be useful to researchers, including graduate students, in these areas.
This is a how-and-why-to-do-it book for students and scientists in all the behavioral sciences. It presents sophisticated statistical methods for analyzing continuous-time records of behavior, and integrates many recent developments in ethology, mathematical modelling, statistics, and technology. These new methods are explicitly designed to handle sequential or simultaneous acts where neither the duration nor the sequence of the acts is predetermined, which is often the case if the time scale on which behavior is studied is relatively short. The authors show how to analyze behavioral data starting with a basic model, the continuous time Markov chain. They then indicate how and when this model can be generalized and demonstrate the suitability of their approach for detecting, for example, the effects of different experimental treatments or of gradual changes in the social or physical environment. Competitive interactions such as predator-prey or host-parasite are also good subjects for this type of analysis. There are eight chapters and many worked examples, leading the reader through the mathematical processes and their applications. Students and researchers in all fields of behavioural science will find this book incomparably useful for planning and performing data analysis.
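The basic model named above, the continuous-time Markov chain, can be sketched as follows. The two behavioural states and the transition rates below are invented for illustration, not the authors' data: each state lasts an exponentially distributed time, after which the subject switches state, and the long-run fraction of time in each state follows from the mean holding times.

```python
# Generic CTMC sketch (invented states and rates, not the authors' data):
# two alternating behavioural states with exponential holding times. The
# long-run time fraction in each state is proportional to its mean
# holding time 1/rate.
import random

random.seed(1)

rates = {"rest": 1.0, "forage": 2.0}  # exit rate of each state (per unit time)
switch = {"rest": "forage", "forage": "rest"}

def occupancy(total_time=10_000.0):
    """Fraction of time spent in each state over a long simulated record."""
    t, state = 0.0, "rest"
    time_in = {"rest": 0.0, "forage": 0.0}
    while t < total_time:
        hold = random.expovariate(rates[state])  # exponential holding time
        time_in[state] += min(hold, total_time - t)  # clip the final segment
        t += hold
        state = switch[state]
    return {s: time_in[s] / total_time for s in time_in}

frac = occupancy()
print(frac)  # approx. rest: 2/3, forage: 1/3 (mean holds 1.0 vs 0.5)
```

In the book's setting the state space and rate matrix would come from observed behavioural records rather than being fixed in advance; the simulation above only shows the mechanics of the model.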
The subject of the book is advanced statistical analyses for quantitative research synthesis (meta-analysis), and selected practical issues relating to research synthesis that are not covered in detail in the many existing introductory books on research synthesis (or meta-analysis). Complex statistical issues are arising more frequently as the primary research that is summarized in quantitative syntheses itself becomes more complex, and as researchers who are conducting meta-analyses become more ambitious in the questions they wish to address. Also, as researchers have gained more experience in conducting research syntheses, several key issues have persisted and now appear fundamental to the enterprise of summarizing research. Specifically, the book describes multivariate analyses for several indices commonly used in meta-analysis (e.g., correlations, effect sizes, proportions and/or odds ratios), outlines how to do power analysis for meta-analysis (again for each of the different kinds of study outcome indices), and examines issues around research quality and research design and their roles in synthesis. For each of the statistical topics, the different possible statistical models (i.e., fixed, random, and mixed models) that could be adopted by a researcher are examined. In dealing with the issues of study quality and research design, the book covers a number of specific topics that are of broad concern to research synthesists. In many fields a current issue is how to make sense of results when studies using several different designs appear in a research literature (e.g., Morris & Deshon, 1997, 2002). In education and other social sciences a critical aspect of this issue is how one might incorporate qualitative (e.g., case study) research within a synthesis. In medicine, related issues concern whether and how to summarize observational studies, and whether they should be combined with randomized controlled trials (or even if they should be combined at all).
Each topic includes a worked example (e.g., for the statistical analyses) and/or a detailed description of a published research synthesis that deals with the practical (non-statistical) issues covered.
The use of computational methods in statistics to face complex problems and high-dimensional data, as well as the widespread availability of computer technology, is nothing new. The range of applications, however, is unprecedented. As often occurs, new and complex data types require new strategies, demanding the development of novel statistical methods and suggesting stimulating mathematical problems. This book is addressed to researchers working at the forefront of the statistical analysis of complex systems and using computationally intensive statistical methods.
This textbook covers the fundamentals of statistical inference and statistical theory, including Bayesian and frequentist approaches and methodology, without excessive emphasis on the underlying mathematics. The book is about some of the basic principles of statistics that are necessary to understand and evaluate methods for analyzing complex data sets. The likelihood function is used for pure likelihood inference throughout the book. There is also coverage of severity and finite population sampling. The material was developed from an introductory statistical theory course taught by the author at the Johns Hopkins University's Department of Biostatistics. Students and instructors in public health programs will benefit from the likelihood modeling approach that is used throughout the text; it will also appeal to epidemiologists and psychometricians. After a brief introduction, there are chapters on estimation, hypothesis testing, and maximum likelihood modeling. The book concludes with sections on Bayesian computation and inference. An appendix covers the interpretation of probability and the supporting probability and mathematical concepts.
An indispensable guide to understanding and designing modern experiments. The tools and techniques of Design of Experiments (DOE) allow researchers to successfully collect, analyze, and interpret data across a wide array of disciplines. Statistical Analysis of Designed Experiments provides a modern and balanced treatment of DOE methodology with thorough coverage of the underlying theory and standard designs of experiments, guiding the reader through applications to research in various fields such as engineering, medicine, business, and the social sciences. The book supplies a foundation for the subject, beginning with basic concepts of DOE and a review of elementary normal theory statistical methods. Subsequent chapters present a uniform, model-based approach to DOE. Each design is presented in a comprehensive format and is accompanied by a motivating example, discussion of the applicability of the design, and a model for its analysis using statistical methods such as graphical plots, analysis of variance (ANOVA), confidence intervals, and hypothesis tests. Numerous theoretical and applied exercises are provided in each chapter, and answers to selected exercises are included at the end of the book. An appendix features three case studies that illustrate the challenges often encountered in real-world experiments, such as randomization, unbalanced data, and outliers. Minitab(R) software is used to perform analyses throughout the book, and an accompanying FTP site houses additional exercises and data sets. With its breadth of real-world examples and accessible treatment of both theory and applications, Statistical Analysis of Designed Experiments is a valuable book for experimental design courses at the upper-undergraduate and graduate levels. It is also an indispensable reference for practicing statisticians, engineers, and scientists who would like to further their knowledge of DOE.
"The outstanding strengths of the book are its topic coverage, references, exposition, examples and problem sets... This book is an excellent addition to any mathematical statistician's library." -Bulletin of the American Mathematical Society In this new edition the author has added substantial material on Bayesian analysis, including lengthy new sections on such important topics as empirical and hierarchical Bayes analysis, Bayesian calculation, Bayesian communication, and group decision making. With these changes, the book can be used as a self-contained introduction to Bayesian analysis. In addition, much of the decision-theoretic portion of the text was updated, including new sections covering such modern topics as minimax multivariate (Stein) estimation.
In honor of the work of Professor Shunji Osaki, Stochastic Reliability and Maintenance Modeling provides a comprehensive study of the legacy of, and ongoing research in, stochastic reliability and maintenance modeling. Covering associated application areas such as dependable computing, performance evaluation, software engineering and communication engineering, distinguished researchers review and build on Professor Osaki's contributions over the last four decades. Fundamental yet significant research results are presented and discussed clearly, alongside new ideas and topics in stochastic reliability and maintenance modeling, to inspire future research. Across 15 chapters readers gain the knowledge and understanding to apply reliability and maintenance theory to computer and communication systems. Stochastic Reliability and Maintenance Modeling is ideal for graduate students and researchers in reliability engineering, and for workers, managers and engineers engaged in computing, maintenance and management work.
This is the standard textbook for courses on probability and statistics, now substantially updated. While helping students to develop their problem-solving skills, the author motivates students with practical applications from various areas of ECE that demonstrate the relevance of probability theory to engineering practice. Included are chapter overviews, summaries, checklists of important terms, annotated references, and a wide selection of fully worked-out real-world examples. In this edition, the Computer Methods sections have been updated and substantially enhanced and new problems have been added.
I have found many thousands more readers than I ever looked for. I have no right to say to these, You shall not find fault with my art, or fall asleep over my pages; but I ask you to believe that this person writing strives to tell the truth. If there is not that, there is nothing. William Makepeace Thackeray, The History of Pendennis. This is a monograph/textbook on the probabilistic aspects of gambling, intended for those already familiar with probability at the post-calculus, pre-measure-theory level. Gambling motivated much of the early development of probability theory (David 1962). Indeed, some of the earliest works on probability include Girolamo Cardano's [1501-1576] Liber de Ludo Aleae (The Book on Games of Chance, written c. 1565, published 1663), Christiaan Huygens's [1629-1695] "De ratiociniis in ludo aleae" ("On reckoning in games of chance," 1657), Jacob Bernoulli's [1654-1705] Ars Conjectandi (The Art of Conjecturing, written c. 1690, published 1713), Pierre Rémond de Montmort's [1678-1719] Essay d'analyse sur les jeux de hasard (Analytical Essay on Games of Chance, 1708, 1713), and Abraham De Moivre's [1667-1754] The Doctrine of Chances (1718, 1738, 1756). Gambling also had a major influence on 20th-century probability theory, as it provided the motivation for the concept of a martingale.