This book provides a concise introduction to stochastic calculus with some of its applications in mathematical finance, engineering and the sciences. Applications in finance include pricing of financial derivatives, such as options on stocks, exotic options and interest rate options. The filtering problem and its solution is presented as an application in engineering. Population models and randomly perturbed equations of physics are given as examples of applications in biology and physics. Only a basic knowledge of calculus and probability is required for reading the book. The text takes the reader from a fairly low technical level to a sophisticated one gradually. Heuristic arguments are often given before precise results are stated, and many ideas are illustrated by worked-out examples. Exercises are provided at the end of chapters to help to test readers' understanding. This book is suitable for advanced undergraduate students, graduate students as well as research workers and practitioners.
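To make the flavour of the subject concrete, here is a minimal Python sketch (not taken from the book) that simulates paths of geometric Brownian motion, dS = mu*S dt + sigma*S dW, with the Euler-Maruyama scheme; the model choice and all parameter values are illustrative assumptions.

```python
import numpy as np

def euler_maruyama_gbm(s0, mu, sigma, T, n_steps, n_paths, seed=0):
    """Simulate geometric Brownian motion dS = mu*S dt + sigma*S dW
    with the Euler-Maruyama scheme (illustrative parameters only)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    paths = np.full(n_paths, s0, dtype=float)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)   # Brownian increments
        paths += mu * paths * dt + sigma * paths * dw     # Euler-Maruyama update
    return paths

# Terminal values of 10,000 simulated paths over one year.
terminal = euler_maruyama_gbm(s0=100.0, mu=0.05, sigma=0.2, T=1.0,
                              n_steps=252, n_paths=10_000)
print(terminal.mean())   # close to 100 * exp(0.05), about 105.1
```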
This book addresses the need for a high-level analysis of unit roots and cointegration. "Time Series, Unit Roots, and Cointegration" integrates the theory of stationary sequences and issues arising in the estimation of their parameters, distributed lags, spectral density function, and cointegration. The book also includes topics that are important for understanding recent developments in the estimation and testing of cointegrated nonstationary sequences, such as Brownian motion, stochastic integration, and central limit theorems. It explores an important topic in time-series econometrics and is written by an excellent expositor.
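As a rough illustration of the unit-root idea discussed there, the following numpy-only sketch (our own, with an assumed rather than estimated cointegrating coefficient) contrasts a random walk, whose sample variance keeps growing, with the stationary residual of a cointegrated pair.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2_000

# A unit-root (random-walk) series: shocks accumulate and never die out.
x = np.cumsum(rng.normal(size=n))

# A cointegrated partner: y tracks x up to stationary noise, so the residual
# y - 2*x is stationary even though x and y individually are not.
y = 2.0 * x + rng.normal(size=n)
residual = y - 2.0 * x

# The sample variance of the random walk grows with the sample length,
# while that of the stationary residual stabilises.
for m in (500, 1000, 2000):
    print(m, x[:m].var().round(1), residual[:m].var().round(2))
```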
Symbolic data analysis is a relatively new field that provides a range of methods for analyzing complex datasets. Standard statistical methods do not have the power or flexibility to make sense of very large datasets, and symbolic data analysis techniques have been developed in order to extract knowledge from such data. Symbolic data methods differ from those of data mining, for example, because rather than identifying points of interest in the data, symbolic data methods allow the user to build models of the data and make predictions about future events.
Our everyday life is influenced by many unexpected (difficult to predict) events, usually referred to as chance. Probably, we all are as we are due to the accumulation of a multitude of chance events. Gambling games, which have been known to human beings nearly from the beginning of our civilization, are based on chance events. These chance events have created the dream that everybody can easily become rich, and this pursuit made gambling so popular. This book is devoted to the dynamics of mechanical randomizers, and we try to solve the problem of why a mechanical device (roulette) or a rigid body (a coin or a die), operating in the way described by the laws of classical mechanics, can behave in such a way as to produce a pseudorandom outcome. During mathematics lessons in primary school we are taught that the outcome of a coin-tossing experiment is random and that the probability that the tossed coin lands heads (tails) up is equal to 1/2. At approximately the same time, during physics lessons, we are told that the motion of a rigid body (a coin is an example of such a body) is fully deterministic. Typically, students are not given the answer to the question of why this duality in the interpretation of a simple mechanical experiment is possible. Trying to answer this question, we describe the dynamics of the gambling games based on the coin toss, the throw of the die, and the roulette run.
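A toy illustration of that duality, using an assumed simplified flip model rather than the book's equations of motion: a coin launched with angular velocity omega for a flight of t seconds shows heads when the number of half-turns omega*t/pi is even, so the map from initial condition to outcome is fully deterministic, yet a small spread in omega makes heads come up about half the time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy deterministic coin: heads when the number of half-turns completed
# during a flight of t seconds, omega*t/pi, is even. The model and all
# numbers below are illustrative assumptions.
t = 0.5                                     # flight time in seconds
omegas = rng.normal(200.0, 10.0, 100_000)   # imprecisely reproduced spin rates (rad/s)
half_turns = np.floor(omegas * t / np.pi).astype(int)
heads = (half_turns % 2 == 0)

print(heads.mean())   # close to 0.5, although every single toss is deterministic
```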
This BASS book series publishes selected high-quality papers reflecting recent advances in the design and biostatistical analysis of biopharmaceutical experiments - particularly biopharmaceutical clinical trials. The papers were selected from invited presentations at the Biopharmaceutical Applied Statistics Symposium (BASS), which was founded by the first Editor in 1994 and has since become the premier international conference in biopharmaceutical statistics. The primary aims of the BASS are: 1) to raise funding to support graduate students in biostatistics programs, and 2) to provide an opportunity for professionals engaged in pharmaceutical drug research and development to share insights into solving the problems they encounter. The BASS book series is initially divided into three volumes addressing: 1) Design of Clinical Trials; 2) Biostatistical Analysis of Clinical Trials; and 3) Pharmaceutical Applications. This book is the first of the 3-volume book series. The topics covered include: A Statistical Approach to Clinical Trial Simulations, Comparison of Statistical Analysis Methods Using Modeling and Simulation for Optimal Protocol Design, Adaptive Trial Design in Clinical Research, Best Practices and Recommendations for Trial Simulations in the Context of Designing Adaptive Clinical Trials, Designing and Analyzing Recurrent Event Data Trials, Bayesian Methodologies for Response-Adaptive Allocation, Addressing High Placebo Response in Neuroscience Clinical Trials, Phase I Cancer Clinical Trial Design: Single and Combination Agents, Sample Size and Power for the Mixed Linear Model, Crossover Designs in Clinical Trials, Data Monitoring: Structure for Clinical Trials and Sequential Monitoring Procedures, Design and Data Analysis for Multiregional Clinical Trials - Theory and Practice, Adaptive Group-Sequential Multi-regional Outcome Studies in Vaccines, Development and Validation of Patient-reported Outcomes, Interim Analysis of Survival Trials: Group Sequential Analyses, and Conditional Power - A Non-proportional Hazards Perspective.
This volume gathers selected peer-reviewed papers presented at the international conference "MAF 2016 - Mathematical and Statistical Methods for Actuarial Sciences and Finance", held in Paris (France) at the Universite Paris-Dauphine from March 30 to April 1, 2016. The contributions highlight new ideas on mathematical and statistical methods in actuarial sciences and finance. The cooperation between mathematicians and statisticians working in insurance and finance is a very fruitful field, one that yields unique theoretical models and practical applications, as well as new insights in the discussion of problems of national and international interest. This volume is addressed to academicians, researchers, Ph.D. students and professionals.
Frederick Mosteller has inspired numerous statisticians and other scientists by his creative approach to statistics and its applications. This volume brings together 40 of his most original and influential papers, capturing the variety and depth of his writings. The editors hope to share these with a new generation of researchers, so that they can build upon his insights and efforts. This volume of selected papers is a companion to the earlier volume A Statistical Model: Frederick Mosteller's Contributions to Statistics, Science, and Public Policy, edited by Stephen E. Fienberg, David C. Hoaglin, William H. Kruskal, and Judith M. Tanur (Springer-Verlag, 1990), and to Mosteller's forthcoming autobiography, which will also be published by Springer-Verlag. It includes a biography and a comprehensive bibliography of Mosteller's books, papers, and other writings. Stephen E. Fienberg is Maurice Falk University Professor of Statistics and Social Science, in the Departments of Statistics and Machine Learning at Carnegie Mellon University, Pittsburgh, PA. David C. Hoaglin is Principal Scientist at Abt Associates Inc., Cambridge, MA.
This book was written for those who need to know how to collect, analyze and present data. It is meant to be a first course for practitioners, a book for private study or brush-up on statistics, and supplementary reading for general statistics classes.
In recent years probabilistic graphical models, especially Bayesian networks and decision graphs, have experienced significant theoretical development within areas such as artificial intelligence and statistics. This carefully edited monograph is a compendium of the most recent advances in the area of probabilistic graphical models, such as decision graphs, learning from data and inference. It presents a survey of the state of the art on specific topics of recent interest in Bayesian networks, including approximate propagation, abductive inference, decision graphs, and applications of influence diagrams. In addition, Advances in Bayesian Networks presents a careful selection of applications of probabilistic graphical models to various fields such as speech recognition, meteorology and information retrieval.
Generalizability theory offers an extensive conceptual framework and a powerful set of statistical procedures for characterizing and quantifying the fallibility of measurements. It liberalizes classical test theory, in part through the application of analysis of variance procedures that focus on variance components. As such, generalizability theory is perhaps the most broadly defined measurement model currently in existence. It is applicable to virtually any scientific field that attends to measurements and their errors, and it enables a multifaceted perspective on measurement error and its components. This book provides the most comprehensive and up-to-date treatment of generalizability theory. In addition, it provides a synthesis of those parts of the statistical literature that are directly applicable to generalizability theory. The principal intended audience is measurement practitioners and graduate students in the behavioral and social sciences, although a few examples and references are provided from other fields. Readers will benefit from some familiarity with classical test theory and analysis of variance, but the treatment of most topics does not presume specific background. Robert L. Brennan is E.F. Lindquist Professor of Educational Measurement at the University of Iowa. He is an acknowledged expert in generalizability theory, has authored numerous publications on the theory, and has taught many courses and workshops on generalizability. The author has been Vice-President of the American Educational Research Association and President of the National Council on Measurement in Education (NCME). He has received NCME Awards for Outstanding Technical Contributions to Educational Measurement and Career Contributions to Educational Measurement.
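For readers unfamiliar with variance components, the following sketch estimates them for a fully crossed persons-by-items design with one observation per cell, using the standard random-effects expected mean squares; the design, data and estimator choice are illustrative assumptions, not material from the book.

```python
import numpy as np

def variance_components(x):
    """Estimate variance components for a crossed p x i design
    (one observation per cell) via expected mean squares."""
    n_p, n_i = x.shape
    grand = x.mean()
    p_means = x.mean(axis=1)
    i_means = x.mean(axis=0)

    ms_p = n_i * ((p_means - grand) ** 2).sum() / (n_p - 1)
    ms_i = n_p * ((i_means - grand) ** 2).sum() / (n_i - 1)
    resid = x - p_means[:, None] - i_means[None, :] + grand
    ms_res = (resid ** 2).sum() / ((n_p - 1) * (n_i - 1))

    return {
        "persons": (ms_p - ms_res) / n_i,   # sigma^2(p)
        "items": (ms_i - ms_res) / n_p,     # sigma^2(i)
        "residual": ms_res,                 # sigma^2(pi, e)
    }

# Simulated scores: 200 persons by 8 items with known components.
rng = np.random.default_rng(0)
p = rng.normal(0, 1.0, size=(200, 1))   # person effects, variance 1.0
i = rng.normal(0, 0.5, size=(1, 8))     # item effects, variance 0.25
e = rng.normal(0, 0.7, size=(200, 8))   # residual, variance 0.49
print(variance_components(p + i + e))
```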
Over the past decades, although stochastic system control has been studied intensively within the field of control engineering, all the modelling and control strategies developed so far have concentrated on the performance of one or two output properties of the system, such as minimum variance control and mean value control. The general assumption used in the formulation of modelling and control strategies is that the distribution of the random signals involved is Gaussian. In this book, a set of new approaches for the control of the output probability density function of stochastic dynamic systems (those subjected to any bounded random inputs) has been developed. In this context, the purpose of control system design becomes the selection of a control signal that makes the shape of the system output p.d.f. as close as possible to a given distribution. The book contains material on the subjects of: control of single-input single-output and multiple-input multiple-output stochastic systems; stable adaptive control of stochastic distributions; model reference adaptive control; control of nonlinear dynamic stochastic systems; condition monitoring of bounded stochastic distributions; control algorithm design; and singular stochastic systems.
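One simple way to quantify "as close as possible to a given distribution" is an integrated squared error between an estimate of the output p.d.f. and the target p.d.f.; the sketch below uses that measure with stand-in output samples, and is only an illustration, not the performance index developed in the book.

```python
import numpy as np

def pdf_mismatch(samples, target_pdf, lo, hi, bins=200):
    """Integrated squared error between a histogram estimate of the output
    p.d.f. and a target p.d.f. (one simple closeness measure, not the
    specific performance index used in the book)."""
    est, edges = np.histogram(samples, bins=bins, range=(lo, hi), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return np.trapz((est - target_pdf(centers)) ** 2, centers)

def target(y):
    # Target shape: a narrow Gaussian centred at 1.0 (illustrative).
    return np.exp(-0.5 * ((y - 1.0) / 0.2) ** 2) / (0.2 * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
loose = rng.normal(1.0, 0.6, 50_000)    # output under a poorly tuned controller
tight = rng.normal(1.0, 0.25, 50_000)   # output under a better tuned controller
print(pdf_mismatch(loose, target, 0.0, 2.0))   # larger mismatch
print(pdf_mismatch(tight, target, 0.0, 2.0))   # smaller mismatch
```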
Water engineers require knowledge of stochastic and frequency concepts, uncertainty analysis, risk assessment, and the processes used to predict unexpected events. This book presents the basics of stochastic, risk and uncertainty analysis, and random sampling techniques, in conjunction with straightforward examples which are solved step by step. In addition, appropriate Excel functions are included as an alternative way to solve the examples, and two real case studies are presented in the last chapters of the book.
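As an example of the kind of frequency and risk calculation involved (a standard textbook identity, not necessarily one of the book's worked cases), the probability that a T-year event occurs at least once during an n-year design life is R = 1 - (1 - 1/T)^n, which the sketch below also checks by random sampling.

```python
import numpy as np

def design_risk(T, n):
    """Probability that a T-year event is exceeded at least once in n years,
    assuming independent years: R = 1 - (1 - 1/T)**n."""
    return 1.0 - (1.0 - 1.0 / T) ** n

def design_risk_mc(T, n, trials=200_000, seed=0):
    """Monte Carlo check: fraction of design lives with at least one exceedance."""
    rng = np.random.default_rng(seed)
    exceed = rng.random((trials, n)) < 1.0 / T   # yearly exceedance indicators
    return exceed.any(axis=1).mean()

# Risk that a 100-year flood occurs during a 50-year project life.
print(design_risk(100, 50))      # about 0.395
print(design_risk_mc(100, 50))   # Monte Carlo estimate, close to the same value
```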
Graphical models, a subset of log-linear models, reveal the interrelationships between multiple variables and features of the underlying conditional independence. Following a theorem-proof-remarks format, this introduction to the use of graphical models in the description and modeling of multivariate systems covers conditional independence, several types of independence graphs, Gaussian models, issues in model selection, regression and decomposition. Many numerical examples and exercises with solutions are included.
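A small numerical illustration of the Gaussian case (our own example, not one of the book's): in a Gaussian graphical model, a zero entry in the inverse covariance (precision) matrix means the corresponding pair of variables is conditionally independent given the rest.

```python
import numpy as np

# Covariance of (X1, X2, X3) in which X1 and X3 are linked only through X2.
sigma = np.array([[1.00, 0.50, 0.25],
                  [0.50, 1.00, 0.50],
                  [0.25, 0.50, 1.00]])

precision = np.linalg.inv(sigma)
print(np.round(precision, 3))
# The (1, 3) entry of the precision matrix is (numerically) zero: in a Gaussian
# graphical model this means X1 is conditionally independent of X3 given X2,
# even though their marginal correlation is 0.25.
```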
Statistical methods have become an increasingly important and integral part of research in the health sciences. Many sophisticated methodologies have been developed for specific applications and problems. This self-contained volume, an outgrowth of an International Conference on Statistics in Health Sciences, covers a wide range of topics pertaining to new statistical methods in the health sciences. The chapters, written by leading experts in their respective fields, are thematically divided into the following areas: prognostic studies and general epidemiology, pharmacovigilance, quality of life, survival analysis, clustering, safety and efficacy assessment, clinical design, models for the environment, genomic analysis, and animal health. This comprehensive volume will serve the health science community as well as practitioners, researchers, and graduate students in applied probability, statistics, and biostatistics.
In this book, an integrated introduction to statistical inference is provided from a frequentist likelihood-based viewpoint. Classical results are presented together with recent developments largely built upon ideas due to R.A. Fisher. After a unified review of background material (statistical methods, likelihood, data reductions, first-order asymptotics) and inference in the presence of nuisance parameters (including pseudo-likelihoods), a self-contained introduction is given to exponential families, exponential dispersion models, generalized linear models, and group families. Finally, basic results of higher-order asymptotics are introduced (index notation, asymptotic expansions for statistics and distributions, and major applications to likelihood inference). The emphasis is more on general concepts and methods than on regularity conditions. Many examples are given for specific statistical models. Each chapter is supplemented with exercises, problems and bibliographic notes. This volume can serve as a textbook in intermediate-level undergraduate courses.
The book covers the basic theory of linear regression models and presents a comprehensive survey of different estimation techniques as alternatives and complements to least squares estimation. The relationship between different estimators is clearly described and categories of estimators are worked out in detail. Proofs are given for the most relevant results, and the presented methods are illustrated with the help of numerical examples and graphics. Special emphasis is laid on the practicability, and possible applications are discussed. The book is rounded off by an introduction to the basics of decision theory and an appendix on matrix algebra.
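As an example of one such alternative to least squares (ridge regression, chosen here purely for illustration), the estimators are beta_OLS = (X'X)^{-1}X'y and beta_ridge = (X'X + lambda*I)^{-1}X'y; the sketch below compares them on a nearly collinear design with assumed data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 5

# Nearly collinear design matrix: the last column is almost a copy of the first.
X = rng.normal(size=(n, p))
X[:, -1] = X[:, 0] + 0.01 * rng.normal(size=n)
beta_true = np.array([1.0, 0.0, -1.0, 0.5, 0.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

beta_ols = np.linalg.solve(X.T @ X, X.T @ y)                       # least squares
lam = 1.0                                                          # illustrative penalty
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)   # ridge

print("OLS:  ", np.round(beta_ols, 2))     # unstable under collinearity
print("ridge:", np.round(beta_ridge, 2))   # shrunken, more stable estimates
```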
Presenting the latest findings in topics from across the mathematical spectrum, this volume includes results in pure mathematics along with a range of new advances and novel applications to other fields such as probability, statistics, biology, and computer science. All contributions feature authors who attended the Association for Women in Mathematics Research Symposium in 2015: this conference, the third in a series of biennial conferences organized by the Association, attracted over 330 participants and showcased the research of women mathematicians from academia, industry, and government.
Although statistical design is one of the oldest branches of statistics, its importance is ever increasing, especially in the face of the data flood that often faces statisticians. It is important to recognize the appropriate design, and to understand how to effectively implement it, being aware that the default settings from a computer package can easily provide an incorrect analysis. The goal of this book is to describe the principles that drive good design, paying attention to both the theoretical background and the problems arising from real experimental situations. Designs are motivated through actual experiments, ranging from the timeless agricultural randomized complete block, to microarray experiments, which naturally lead to split plot designs and balanced incomplete blocks.
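For concreteness, a randomized complete block layout assigns every treatment exactly once within each block, with an independent random order per block; the short sketch below generates such a layout with illustrative treatment and block labels.

```python
import numpy as np

def randomized_complete_block(treatments, n_blocks, seed=0):
    """Assign every treatment exactly once within each block,
    with an independent random order per block."""
    rng = np.random.default_rng(seed)
    return {f"block_{b + 1}": list(rng.permutation(treatments))
            for b in range(n_blocks)}

# Four fertiliser treatments laid out in three field blocks (illustrative labels).
layout = randomized_complete_block(["A", "B", "C", "D"], n_blocks=3)
for block, order in layout.items():
    print(block, order)
```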
In this book, an integrated introduction to statistical inference is provided from a frequentist likelihood-based viewpoint. Classical results are presented together with recent developments, largely built upon ideas due to R.A. Fisher. The term "neo-Fisherian" highlights this. After a unified review of background material (statistical models, likelihood, data and model reduction, first-order asymptotics) and inference in the presence of nuisance parameters (including pseudo-likelihoods), a self-contained introduction is given to exponential families, exponential dispersion models, generalized linear models, and group families. Finally, basic results of higher-order asymptotics are introduced (index notation, asymptotic expansions for statistics and distributions, and major applications to likelihood inference). The emphasis is more on general concepts and methods than on regularity conditions. Many examples are given for specific statistical models. Each chapter is supplemented with problems and bibliographic notes. This volume can serve as a textbook in intermediate-level undergraduate and postgraduate courses in statistical inference.
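As a small example of eliminating a nuisance parameter (our own illustration, not taken from the book), the profile log-likelihood for the mean of a normal sample, with the variance maximized out, is l_p(mu) = -(n/2) log((1/n) sum (x_i - mu)^2) up to an additive constant; the sketch below uses it to form a likelihood-ratio confidence set.

```python
import numpy as np

def profile_loglik_mean(x, mu_grid):
    """Profile log-likelihood for the mean of a normal sample, with the
    variance maximised out: l_p(mu) = -(n/2) * log(mean((x - mu)**2)),
    up to an additive constant."""
    n = len(x)
    return np.array([-0.5 * n * np.log(np.mean((x - mu) ** 2)) for mu in mu_grid])

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=40)

mu_grid = np.linspace(1.0, 3.0, 401)
lp = profile_loglik_mean(x, mu_grid)
mu_hat = mu_grid[np.argmax(lp)]
# An approximate 95% confidence set keeps values whose profile log-likelihood
# lies within chi2_{1,0.95}/2 = 1.92 of the maximum.
inside = mu_grid[lp >= lp.max() - 1.92]
print(mu_hat, inside.min(), inside.max())
```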
Written by one of the foremost statisticians with experience in diverse fields of application of statistics, the book deals with the philosophical and methodological aspects of information technology and the collection and analysis of data to provide insight into a problem, whether it is scientific research, policy making by government, or decision making in our daily lives. The author dispels the notion that chance is an expression of our ignorance that makes accurate prediction impossible, and illustrates how our thinking has changed with the quantification of uncertainty, showing that chance is no longer an obstruction but a way of expressing our knowledge. Indeed, chance can create and help in the investigation of truth. It is eloquently demonstrated with numerous examples of applications that statistics is the science, technology and art of extracting information from data and is based on a study of the laws of chance. It is highlighted how statistical ideas played a vital role in scientific and other investigations even before statistics was recognized as a separate discipline, and how statistics is now evolving as a versatile, powerful and indispensable tool in diverse fields of human endeavor such as literature, legal matters, industry, archaeology and medicine. The use of statistics by the layman in improving the quality of life through wise decision making is also emphasized.
This book presents practical approaches for the analysis of data from gene expression microarrays. Each chapter describes the conceptual and methodological underpinning for a statistical tool and its implementation in software. Methods cover all aspects of statistical analysis of microarrays, from annotation and filtering to clustering and classification. Chapters are written by the developers of the software. All software packages described are free to academic users. The book includes coverage of various packages that are part of the Bioconductor project and several related R tools. The materials presented cover a range of software tools designed for varied audiences. Some chapters describe simple menu-driven software in a user-friendly fashion, and are designed to be accessible to microarray data analysts without formal quantitative training. Most chapters are directed at microarray data analysts with master's-level training in computer science, biostatistics or bioinformatics. A minority of more advanced chapters are intended for doctoral students and researchers. The team of editors is from the Johns Hopkins Schools of Medicine and Public Health and has been involved with developing methods and software for microarray data analysis since the inception of this technology. Giovanni Parmigiani is Associate Professor of Oncology, Pathology and Biostatistics. He is the author of the book "Modeling in Medical Decision Making," a fellow of the ASA, and a recipient of the Savage Award for Bayesian statistics. Elizabeth S. Garrett is Assistant Professor of Oncology and Biostatistics, and recipient of the Abbey Award for statistical education. Rafael A. Irizarry is Assistant Professor of Biostatistics, and recipient of the Noether Award for nonparametric statistics. Scott L. Zeger is Professor and Chair of Biostatistics. He is co-author of the book "Longitudinal Data Analysis," a fellow of the ASA and recipient of the Spiegelman Award for public health statistics.
Much has happened in the field of inference and decision making during the past decade or so. This fully updated and revised third edition of Comparative Statistical Inference presents a wide ranging, balanced account of the fundamental issues across the full spectrum of inference and decision making. As in earlier editions, the material is set in a historical context to more powerfully illustrate the ideas and concepts.
Any financial asset that is openly traded has a market price. Except under extreme market conditions, the market price may be more or less than the asset's fair value. Fair value is likely to be some complicated function of the current intrinsic value of tangible or intangible assets underlying the claim and our assessment of the characteristics of the underlying assets with respect to the expected rate of growth, future dividends, volatility, and other relevant market factors. Some of these factors that affect the price can be measured at the time of a transaction with reasonably high accuracy. Most factors, however, relate to expectations about the future and to subjective issues, such as current management, corporate policies and market environment, that could affect the future financial performance of the underlying assets. Models are thus needed to describe the stochastic factors and environment, and their implementation inevitably requires computational finance tools.
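A minimal example of such a computational finance tool, under the deliberately simple assumption that the underlying follows risk-neutral geometric Brownian motion (all parameter values illustrative): Monte Carlo pricing of a European call.

```python
import numpy as np

def mc_european_call(s0, strike, r, sigma, T, n_paths=200_000, seed=0):
    """Monte Carlo price of a European call under risk-neutral GBM:
    S_T = s0 * exp((r - sigma^2/2) T + sigma sqrt(T) Z), Z ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    s_T = s0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * z)
    payoff = np.maximum(s_T - strike, 0.0)
    return np.exp(-r * T) * payoff.mean()

# Illustrative inputs: spot 100, strike 105, 3% rate, 20% volatility, 1 year.
print(mc_european_call(s0=100.0, strike=105.0, r=0.03, sigma=0.2, T=1.0))
```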
A unified introduction to a variety of computational algorithms for likelihood and Bayesian inference. This third edition expands the discussion of many of the techniques presented, and includes additional examples as well as exercise sets at the end of each chapter.
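One classic algorithm in this family is random-walk Metropolis sampling; the sketch below targets a standard normal "posterior" purely for illustration, with an assumed proposal scale, and is not a reproduction of the book's examples.

```python
import numpy as np

def random_walk_metropolis(log_target, x0, n_iter=20_000, step=1.0, seed=0):
    """Random-walk Metropolis sampler for a one-dimensional target density."""
    rng = np.random.default_rng(seed)
    x = x0
    samples = np.empty(n_iter)
    for i in range(n_iter):
        proposal = x + step * rng.standard_normal()
        # Accept with probability min(1, target(proposal) / target(x)).
        if np.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples[i] = x
    return samples

# Target: a standard normal "posterior" (illustrative only).
draws = random_walk_metropolis(lambda t: -0.5 * t ** 2, x0=3.0)
print(draws[5_000:].mean(), draws[5_000:].std())   # close to 0 and 1 after burn-in
```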
Modern apparatuses allow us to collect samples of functional data, mainly curves but also images. On the other hand, nonparametric statistics produces useful tools for standard data exploration. This book links these two fields of modern statistics by explaining how functional data can be studied through parameter-free statistical ideas, and it offers an original presentation of new nonparametric statistical methods for functional data analysis.
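A minimal sketch of the kind of parameter-free method meant here (the distance, kernel and bandwidth are illustrative choices, not the book's recommendations): kernel regression that predicts a scalar response from a curve by weighting training curves according to an L2-type distance to the new curve.

```python
import numpy as np

def functional_kernel_predict(curves, y, new_curve, bandwidth=0.1):
    """Nadaraya-Watson style prediction of a scalar response from a curve:
    weight each training curve by a Gaussian kernel of its L2-type distance
    to the new curve (distance, kernel and bandwidth are illustrative)."""
    dists = np.sqrt(((curves - new_curve) ** 2).mean(axis=1))   # curve-to-curve distance
    weights = np.exp(-0.5 * (dists / bandwidth) ** 2)
    return np.sum(weights * y) / np.sum(weights)

# Toy data: 100 curves on a common grid; the response depends on each curve's
# amplitude, which the nonparametric predictor must recover without a model.
rng = np.random.default_rng(0)
grid = np.linspace(0, 1, 50)
amps = rng.uniform(0.5, 2.0, size=100)
curves = amps[:, None] * np.sin(2 * np.pi * grid)[None, :]
y = amps ** 2 + rng.normal(scale=0.05, size=100)

new_amp = 1.3
pred = functional_kernel_predict(curves, y, new_amp * np.sin(2 * np.pi * grid))
print(pred, new_amp ** 2)   # prediction close to the true value 1.69
```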
You may like...
Abelian Groups and Modules - Proceedings… by Alberto Facchini, Claudia Menini (Hardcover), R2,946 (Discovery Miles 29 460)
Topology and Geometric Group Theory… by Michael W. Davis, James Fowler, … (Hardcover)
Pocket RuPaul Wisdom - Witty Quotes and… by Hardie Grant Books (Hardcover)
Democracy Works - Re-Wiring Politics To… by Greg Mills, Olusegun Obasanjo, … (Paperback)