The classical optimal control theory deals with the determination of an optimal control that optimizes a criterion subject to the dynamic constraint expressing the evolution of the system state under the influence of control variables. If this is extended to the case of multiple controllers (also called players) with different and sometimes conflicting optimization criteria (payoff functions), it becomes possible to explore differential games. Zero-sum differential games, also called differential games of pursuit, constitute the most developed part of differential games and are rigorously investigated. In this book, the full theory of differential games of pursuit with complete and partial information is developed. Numerous concrete pursuit-evasion games are solved ("life-line" games, simple pursuit games, etc.), and new time-consistent optimality principles in n-person differential game theory are introduced and investigated.
The question of what environmental statistics is about is particularly important when it comes to the formulation of relevant research and training, whether in academia, agencies, or industry. This volume aims to give a new perspective on the subject through examples that are of concern and interest today. Environmental statistics is in a take-off stage, both because of societal challenges and because of statistical opportunity, and is demanding more and more non-traditional and innovative statistical approaches. The chapters in this volume, specially prepared by several outstanding professionals working in statistics and the environment, discuss the current state of the art in diverse areas of environmental statistics. The volume provides new perspectives and problems for future research, training, policy and regulation. It will be valuable to researchers, teachers, consultants and graduate students in statistics, environmental statistics, statistical ecology, and the quantitative environmental sciences in academia, industry, governmental agencies, laboratories and libraries.
This book is devoted to parameter estimation in diffusion models involving fractional Brownian motion and related processes. For many years now, standard Brownian motion has been (and still remains) a popular model of randomness used to investigate processes in the natural sciences, financial markets, and the economy. The substantial limitation in the use of stochastic diffusion models with Brownian motion is due to the fact that the motion has independent increments, and, therefore, the random noise it generates is "white," i.e., uncorrelated. However, many processes in the natural sciences, computer networks and financial markets have long-term or short-term dependence, i.e., the correlations of the random noise in these processes are non-zero and decrease slowly or rapidly with time. In particular, models of financial markets exhibit various kinds of memory, and this memory is usually modeled by fractional Brownian diffusion. Therefore, the book constructs diffusion models with memory and provides simple and suitable parameter estimation methods for these models, making it a valuable resource for all researchers in this field. The book is addressed to specialists and researchers in the theory and statistics of stochastic processes, practitioners who apply statistical methods of parameter estimation, and graduate and post-graduate students who study mathematical modeling and statistics.
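The contrast drawn here between uncorrelated "white" noise and noise with memory can be made concrete with a small simulation. The following is only an illustrative sketch, not material from the book: it simulates fractional Gaussian noise (the increment process of fractional Brownian motion) by the exact Cholesky method and compares its lag-one sample autocorrelation with that of ordinary white noise; all parameter values are arbitrary.

```python
# Illustrative sketch only (not from the book): fractional Gaussian noise has
# correlated increments, unlike the "white" increments of standard Brownian motion.
import numpy as np

def fgn_covariance(n, hurst):
    # Autocovariance of fractional Gaussian noise at lags 0..n-1.
    k = np.arange(n)
    return 0.5 * (np.abs(k + 1) ** (2 * hurst)
                  - 2 * np.abs(k) ** (2 * hurst)
                  + np.abs(k - 1) ** (2 * hurst))

def simulate_fgn(n, hurst, rng):
    # Exact simulation via the Cholesky factor of the covariance matrix (O(n^3)).
    gamma = fgn_covariance(n, hurst)
    idx = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
    cov = gamma[idx]
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

def lag1_corr(y):
    return np.corrcoef(y[:-1], y[1:])[0, 1]

rng = np.random.default_rng(0)
fgn = simulate_fgn(1000, hurst=0.8, rng=rng)   # H > 0.5: long-range dependence
white = rng.standard_normal(1000)              # increments of standard Brownian motion

print("lag-1 correlation, fGn (H=0.8):", round(lag1_corr(fgn), 3))   # clearly positive
print("lag-1 correlation, white noise:", round(lag1_corr(white), 3)) # near zero
```

For H = 0.8 the theoretical lag-one correlation of the increments is 0.5*(2^(2H) - 2), about 0.52, whereas for white noise it is zero.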
A critical yet constructive description of the rich analytical techniques and substantive applications that typify how statistical thinking has been applied at the RAND Corporation over the past two decades. Case studies of public policy problems are useful for teaching because they are familiar: almost everyone knows something about health insurance, global warming, and capital punishment, to name but a few of the applications covered in this casebook. Each case study has a common format that describes the policy questions, the statistical questions, and the successful and unsuccessful analytic strategies. Readers should be familiar with basic statistical concepts, including sampling and regression. While designed for statistics courses in areas ranging from economics to health policy to the law at both the advanced undergraduate and graduate levels, empirical researchers and policy-makers will also find this casebook informative.
Lagrangian expansions can be used to obtain numerous very useful probability models, which have been applied to real-life situations including, but not limited to, branching processes, queuing processes, stochastic processes, environmental toxicology, diffusion of information, ecology, strikes in industries, sales of new products, and amounts of production for optimum profits. This book is a comprehensive, systematic treatment of the two classes of Lagrangian probability distributions along with some of their sub-families and their properties; important applications are also given. Graduate students and researchers interested in Lagrangian probability distributions, who have a sound knowledge of standard statistical techniques, will find this book valuable. It may be used as a reference text or in courses and seminars on distribution theory and Lagrangian distributions. Applied scientists and researchers in environmental statistics, reliability, sales management, epidemiology, operations research, and the optimization of profits in manufacturing and marketing will benefit immensely from the various applications in the book.
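As a concrete illustration (not taken from the book), one of the simplest members of the Lagrangian family is the Borel distribution, which arises as the total progeny of a Poisson branching process. The toy snippet below evaluates its probability mass function on the log scale and checks its mean against the known value 1/(1 - mu).

```python
# Illustrative sketch: the Borel distribution, a classical Lagrangian distribution,
# P(N = n) = exp(-mu*n) * (mu*n)**(n-1) / n!  for n = 1, 2, ... and 0 < mu < 1.
import math

def borel_pmf(n, mu):
    # Evaluate on the log scale to avoid overflow for large n.
    log_p = -mu * n + (n - 1) * math.log(mu * n) - math.lgamma(n + 1)
    return math.exp(log_p)

mu = 0.6
probs = [borel_pmf(n, mu) for n in range(1, 400)]
print("total probability (should be close to 1):", round(sum(probs), 4))
print("mean (theory: 1/(1-mu) = 2.5):",
      round(sum(n * p for n, p in enumerate(probs, start=1)), 3))
```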
Quantum Probability and Related Topics is a series of volumes based on material discussed at the various QP conferences. It aims to provide an update on the rapidly growing field of classical probability, quantum physics and functional analysis.
The aim of this book is to discuss the fundamental ideas which lie behind the statistical theory of learning and generalization. It considers learning as a general problem of function estimation based on empirical data. Omitting proofs and technical details, the author concentrates on discussing the main results of learning theory and their connections to fundamental problems in statistics. These include:
* the setting of learning problems based on the model of minimizing the risk functional from empirical data
* a comprehensive analysis of the empirical risk minimization principle, including necessary and sufficient conditions for its consistency
* non-asymptotic bounds for the risk achieved using the empirical risk minimization principle
* principles for controlling the generalization ability of learning machines using small sample sizes, based on these bounds
* the Support Vector methods that control the generalization ability when estimating functions from small sample sizes.
The second edition of the book contains three new chapters devoted to further development of the learning theory and SVM techniques. These include:
* the theory of direct methods of learning based on solving multidimensional integral equations for density, conditional probability, and conditional density estimation
* a new inductive principle of learning.
Written in a readable and concise style, the book is intended for statisticians, mathematicians, physicists, and computer scientists. Vladimir N. Vapnik is Technology Leader at AT&T Labs-Research and Professor at London University. He is one of the founders of statistical learning theory, and the author of seven books published in English, Russian, German, and Chinese.
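To make the empirical risk minimization idea tangible, here is a small sketch (not code from the book, and deliberately simplified): a linear classifier fitted by subgradient descent on the regularized hinge loss, the surrogate loss used by linear Support Vector Machines. The data and all parameter choices are invented.

```python
# Illustrative sketch of empirical risk minimization with the hinge loss
# (the loss behind linear SVMs); toy data, not an example from the book.
import numpy as np

rng = np.random.default_rng(1)
n = 200
X = rng.standard_normal((n, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)   # linearly separable toy labels

w, b = np.zeros(2), 0.0
lam, lr = 0.01, 0.1                              # regularization strength, step size
for _ in range(1000):
    margins = y * (X @ w + b)
    active = margins < 1                         # points that violate the margin
    grad_w = lam * w - (y[active][:, None] * X[active]).sum(axis=0) / n
    grad_b = -y[active].sum() / n
    w -= lr * grad_w
    b -= lr * grad_b

empirical_risk = np.maximum(0.0, 1 - y * (X @ w + b)).mean()
accuracy = (np.sign(X @ w + b) == y).mean()
print("average hinge loss:", round(empirical_risk, 3))
print("training accuracy:", accuracy)
```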
This book deals with the impact of uncertainty in input data on the outputs of mathematical models. Uncertain inputs such as scalars, tensors, functions, or domain boundaries are considered. In practical terms, material parameters or constitutive laws, for instance, are uncertain, and quantities such as local temperature, local mechanical stress, or local displacement are monitored. The goal of the worst scenario method is to extremize the monitored quantity over the set of uncertain input data.
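A minimal sketch of the worst scenario idea (an invented toy example, not one of the book's problems): the conductivity a of a one-dimensional rod governed by -(a u')' = 1 with zero boundary values is only known to lie in an interval, the monitored quantity is the midpoint temperature, and the worst scenario is the admissible value of a that maximizes it.

```python
# Illustrative sketch of the worst scenario method on a toy 1-D model:
# maximize the monitored quantity over the set of admissible (uncertain) inputs.
import numpy as np

def midpoint_temperature(a, n=101):
    # Finite-difference solution of -a * u'' = 1 on (0, 1) with u(0) = u(1) = 0.
    h = 1.0 / (n - 1)
    main = 2.0 * a / h**2 * np.ones(n - 2)
    off = -a / h**2 * np.ones(n - 3)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    u_inner = np.linalg.solve(A, np.ones(n - 2))
    return u_inner[(n - 2) // 2]                 # value at x = 0.5

admissible = np.linspace(0.5, 2.0, 61)           # uncertain input set for a
worst = max(admissible, key=midpoint_temperature)
print("worst-case conductivity:", worst)
print("worst-case midpoint temperature:", round(midpoint_temperature(worst), 4))
# For this toy model u(0.5) = 1/(8a), so the worst case sits at the smallest admissible a.
```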
The theory treated here is important in finance, economics, investment strategies, the health sciences, the environment, industrial engineering, and other areas.
This monograph presents methods for full comparative distributional analysis based on the relative distribution. This provides a general integrated framework for analysis, a graphical component that simplifies exploratory data analysis and display, a statistically valid basis for the development of hypothesis-driven summary measures, and the potential for decomposition - enabling the examination of complex hypotheses regarding the origins of distributional changes within and between groups. Written for data analysts and those interested in measurement, the text can also serve as a textbook for a course on distributional methods.
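As a rough illustration of the central object (a toy example of my own, not the monograph's notation or data): the relative distribution passes each observation of a comparison group through the empirical CDF of a reference group; if the two distributions coincide, the resulting relative data are uniform on [0, 1], and departures from uniformity show where the distributions differ.

```python
# Illustrative sketch: relative data r_i = F0(y_i), the comparison values
# evaluated at the reference group's empirical CDF.
import numpy as np

def relative_data(comparison, reference):
    reference = np.sort(reference)
    # Proportion of the reference sample at or below each comparison value.
    return np.searchsorted(reference, comparison, side="right") / len(reference)

rng = np.random.default_rng(2)
reference = rng.normal(loc=0.0, scale=1.0, size=5000)
comparison = rng.normal(loc=0.5, scale=1.0, size=5000)   # shifted upward

r = relative_data(comparison, reference)
print("mean of relative data (0.5 if the groups coincide):", round(r.mean(), 3))
print("share above the reference median:", round((r > 0.5).mean(), 3))
```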
This text takes readers in a clear and progressive format from simple to recent and advanced topics in pure and applied probability, such as contraction and annealed properties of non-linear semi-groups, functional entropy inequalities, empirical process convergence, increasing propagation of chaos, central limit and Berry-Esseen type theorems, as well as large deviation principles for strong topologies on path-distribution spaces. Topics also include a body of powerful branching and interacting particle methods.
Trees are a fundamental object in graph theory and combinatorics as well as a basic object for data structures and algorithms in computer science. During the last years, research related to (random) trees has been constantly increasing, and several asymptotic and probabilistic techniques have been developed in order to describe characteristics of interest of large trees in different settings. The purpose of this book is to provide a thorough introduction into various aspects of trees in random settings and a systematic treatment of the involved mathematical techniques. It should serve as a reference book as well as a basis for future research. One major conceptual aspect is to connect combinatorial and probabilistic methods that range from counting techniques (generating functions, bijections) over asymptotic methods (singularity analysis, saddle point techniques) to various sophisticated techniques in asymptotic probability (convergence of stochastic processes, martingales). However, reading the book requires just basic knowledge in combinatorics, complex analysis, functional analysis and probability theory at master degree level. It is also part of the concept of the book to provide full proofs of the major results even if they are technically involved and lengthy.
Stochastic analysis is a field of mathematical research having numerous interactions with other domains of mathematics such as partial differential equations, Riemannian path spaces, dynamical systems, and optimization. It also has many links with applications in engineering, finance, quantum physics, and other fields. This book covers recent and diverse aspects of stochastic and infinite-dimensional analysis. The included papers are written from a variety of standpoints (white noise analysis, Malliavin calculus, quantum stochastic calculus) by the contributors, and provide a broad coverage of the subject. This volume will be useful to graduate students and research mathematicians wishing to get acquainted with recent developments in the field of stochastic analysis.
Survival data, or more general time-to-event data, occur in many areas, including medicine, biology, engineering, economics, and demography, but standard methods have previously required that all time variables be univariate and independent. This book extends the field by allowing for multivariate times. Applications where such data appear include survival of twins, survival of married couples and families, time to failure of the right and left kidney for diabetic patients, life history data with time to outbreak of disease, complications and death, recurrent episodes of diseases, and cross-over studies with time responses. As the field is rather new, the concepts and the possible types of data are described in detail, and basic aspects of how dependence can appear in such data are discussed. Four different approaches to the analysis of such data are presented. The multi-state models, where a life history is described as the subject moving from state to state, are the most classical approach. The Markov models make up an important special case, but it is also described how easily more general models are set up and analyzed. Frailty models, which are random effects models for survival data, make up a second approach, extending from the simplest shared frailty models, which are considered in detail, to models with more complicated dependence structures over individuals or over time. Marginal modelling has become a popular approach to evaluate the effect of explanatory factors in the presence of dependence, but without having specified a statistical model for the dependence. Finally, the completely non-parametric approach to bivariate censored survival data is described. This book is aimed at investigators who need to analyze multivariate survival data, but due to its focus on the concepts and the modelling aspects, it is also useful for persons interested in such data but without a statistical education. It can be used as a textbook for a graduate course in multivariate survival data. It is written from an applied point of view and covers all essential aspects of applying multivariate survival models. More theoretical evaluations, like asymptotic theory, are also described, but only to the extent useful in applications and for understanding the models. For reading the book, it is useful, but not necessary, to have an understanding of univariate survival data. Philip Hougaard is a statistician at the pharmaceutical company Novo Nordisk. He has a Ph.D. in nonlinear regression models and is Doctor of Science based on a thesis on frailty models. He is associate editor of Biometrics and Lifetime Data Analysis. He has published over 80 papers in the statistical and medical literature.
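To give a feel for the frailty approach mentioned above, here is a small simulation sketch (illustrative only, not code or an example from the book): in a shared gamma frailty model each pair shares one random frailty with mean one and variance theta that multiplies a common baseline hazard, and this shared factor induces positive dependence between the two lifetimes; for the gamma frailty, Kendall's tau equals theta/(theta + 2).

```python
# Illustrative sketch: shared gamma frailty for paired survival times.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(3)
n_pairs, theta, base_hazard = 5000, 1.0, 0.1

# One frailty per pair, with E[Z] = 1 and Var[Z] = theta.
frailty = rng.gamma(shape=1.0 / theta, scale=theta, size=n_pairs)
# Conditionally on Z, each lifetime is exponential with hazard Z * base_hazard.
t1 = rng.exponential(scale=1.0 / (frailty * base_hazard))
t2 = rng.exponential(scale=1.0 / (frailty * base_hazard))

tau, _ = kendalltau(t1, t2)
print("Kendall's tau (theory theta/(theta+2) = 1/3):", round(tau, 3))
# As theta -> 0 the frailty degenerates at 1 and the two lifetimes become independent.
```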
Copulas are functions that join multivariate distribution functions to their one-dimensional margins. The study of copulas and their role in statistics is a new but vigorously growing field. In this book the student or practitioner of statistics and probability will find discussions of the fundamental properties of copulas and some of their primary applications. The applications include the study of dependence and measures of association, and the construction of families of bivariate distributions. With nearly a hundred examples and over 150 exercises, this book is suitable as a text or for self-study. The only prerequisite is an upper level undergraduate course in probability and mathematical statistics, although some familiarity with nonparametric statistics would be useful. Knowledge of measure-theoretic probability is not required. Roger B. Nelsen is Professor of Mathematics at Lewis & Clark College in Portland, Oregon. He is also the author of "Proofs Without Words: Exercises in Visual Thinking," published by the Mathematical Association of America.
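The "joining" role of copulas can be shown in a few lines of simulation (a generic illustration, not an example from the book): correlated normal draws are pushed through their own CDF to obtain a sample from a Gaussian copula, and arbitrary one-dimensional margins are then attached through their inverse CDFs, which is Sklar's theorem used constructively. The margins and the correlation value below are arbitrary choices.

```python
# Illustrative sketch: building a bivariate distribution from two fixed margins
# and a Gaussian copula.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
rho, n = 0.7, 5000

# Correlated normals pushed through the normal CDF are uniform on [0, 1]
# but keep the dependence structure: a sample from the Gaussian copula.
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
u = stats.norm.cdf(z)

# Attach arbitrary margins through their inverse CDFs.
x = stats.expon(scale=2.0).ppf(u[:, 0])    # exponential margin
y = stats.gamma(a=3.0).ppf(u[:, 1])        # gamma margin

tau, _ = stats.kendalltau(x, y)
print("Kendall's tau, sample:", round(tau, 3))
print("Kendall's tau, theory 2/pi * arcsin(rho):", round(2 / np.pi * np.arcsin(rho), 3))
```

Because Kendall's tau is invariant under monotone transformations of the margins, its value depends only on the copula, which is one reason such measures of association appear naturally in this subject.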
This contributed volume comprises research articles and reviews on topics connected to the mathematical modeling of cellular systems. These contributions cover signaling pathways, stochastic effects, cell motility and mechanics, pattern formation processes, as well as multi-scale approaches. All authors attended the workshop on "Modeling Cellular Systems" which took place in Heidelberg in October 2014. The target audience primarily comprises researchers and experts in the field, but the book may also be beneficial for graduate students.
The purpose of this book is to honor the fundamental contributions to many different areas of statistics made by Barry Arnold. Distinguished and active researchers highlight some of the recent developments in statistical distribution theory, order statistics and their properties, as well as inferential methods associated with them. Applications to survival analysis, reliability, quality control, and environmental problems are emphasized.
This book collects contributions written by well-known statisticians and econometricians to acknowledge Leopold Simar's far-reaching scientific impact on Statistics and Econometrics throughout his career. The papers contained herein were presented at a conference in
Based on materials discussed in the various quantum probability conferences, this text aims to provide an update on the rapidly growing field of classical probability, quantum physics and functional analysis. This book is intended to be used by mathematicians and includes chapters on the lattice of admissible partitions, weak coupling and low density limits in terms of squeezed vectors and photon limits, and the macroscopic quasi-particle spectrum for the BCS model.
Statistical Analysis of Observations of Increasing Dimension is devoted to the investigation of the limit distributions of the empirical generalized variance, covariance matrices, their eigenvalues, and solutions of systems of linear algebraic equations with random coefficients, which are important functions of the observations in multidimensional statistical analysis. A general statistical analysis is developed in which the observed random vectors may not have a density and their components may have an arbitrary dependence structure. The methods of this theory have very important advantages in comparison with existing methods of statistical processing. The results have applications in nuclear and statistical physics, multivariate statistical analysis, the theory of the stability of solutions of stochastic differential equations, the control theory of linear stochastic systems, linear stochastic programming, and the theory of experiment planning.
Non-parametric methods are widely used for studying populations that take on a ranked order (such as movie reviews receiving one to four stars). The use of non-parametric methods may be necessary when data have a ranking but no clear numerical interpretation, such as when assessing preferences. In terms of levels of measurement, non-parametric methods are suited to "ordinal" data. As non-parametric methods make fewer assumptions, their applicability is much wider than that of the corresponding parametric methods. In particular, they may be applied in situations where less is known about the application in question. Also, due to their reliance on fewer assumptions, non-parametric methods are more robust. Non-parametric methods have many popular applications and are widely used in research in the behavioral sciences and biomedicine. This is a textbook on non-parametric statistics for applied research. The authors propose to use a realistic yet mostly fictional situation and a series of dialogues to illustrate in detail the statistical processes required to complete data analysis. This book draws on a reader's existing elementary knowledge of statistical analyses to broaden his or her research capabilities. The material within the book is covered in such a way that someone with a very limited knowledge of statistics would be able to read and understand the concepts detailed in the text. The "real world" scenario presented involves a multidisciplinary team of behavioral, medical, crime analysis, and policy analysis professionals working together to answer specific empirical questions regarding real-world applied problems. The reader is introduced to the team and the data set, and through the course of the text follows the team as they progress through the decision-making process of narrowing the data and the research questions to answer the applied problem. In this way, abstract statistical concepts are translated into concrete and specific language. This text uses one data set from which all examples are taken. This is radically different from other statistics books, which provide a varied array of examples and data sets. Using only one data set facilitates reader-directed teaching and learning by providing multiple research questions which are integrated, rather than using disparate examples and completely unrelated research questions and data.
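As a small taste of the rank-based reasoning involved (a made-up example, not the book's data set or dialogues), the sketch below compares two groups of ordinal ratings with the Mann-Whitney U statistic computed directly from midranks and cross-checks it against scipy.

```python
# Illustrative sketch: Mann-Whitney U for two groups of ordinal ratings,
# computed from ranks and cross-checked against scipy.
import numpy as np
from scipy import stats

group_a = np.array([1, 2, 2, 3, 3, 3, 4, 4])    # hypothetical star ratings
group_b = np.array([1, 1, 1, 2, 2, 2, 3, 3])

ranks = stats.rankdata(np.concatenate([group_a, group_b]))  # midranks handle ties
rank_sum_a = ranks[:len(group_a)].sum()
u_a = rank_sum_a - len(group_a) * (len(group_a) + 1) / 2    # U for group A

print("U computed from ranks:", u_a)
print("scipy cross-check:",
      stats.mannwhitneyu(group_a, group_b, alternative="two-sided").statistic)
```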
Numerical methods in finance have emerged as a vital field at the crossroads of probability theory, finance and numerical analysis. Based on presentations given at the workshop Numerical Methods in Finance, held at INRIA Bordeaux (France) on June 1-2, 2010, this book provides an overview of the major new advances in the numerical treatment of instruments with American exercise features. Naturally, it covers the most recent research on the mathematical theory and the practical applications of optimal stopping problems as they relate to financial applications. By extension, it also provides an original treatment of Monte Carlo methods for the recursive computation of conditional expectations and solutions of BSDEs, and of generalized multiple optimal stopping problems and their applications to the valuation of energy derivatives and assets. The articles were carefully written in a pedagogical style and in a reasonably self-contained manner. The book is geared toward quantitative analysts, probabilists, and applied mathematicians interested in financial applications.
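As a rough illustration of the kind of Monte Carlo treatment of optimal stopping discussed here (a generic textbook-style sketch in the spirit of the Longstaff-Schwartz method, not an algorithm taken from these proceedings), the snippet below prices an American put by regressing discounted future cashflows on the current asset price to estimate the continuation value at each exercise date. All model parameters are invented.

```python
# Illustrative sketch: least-squares Monte Carlo (Longstaff-Schwartz style)
# for an American put under geometric Brownian motion.
import numpy as np

rng = np.random.default_rng(5)
s0, strike, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_paths, n_steps = 50000, 50
dt = T / n_steps
disc = np.exp(-r * dt)

def put_payoff(x):
    return np.maximum(strike - x, 0.0)

# Simulate risk-neutral geometric Brownian motion paths.
z = rng.standard_normal((n_paths, n_steps))
log_increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
s = s0 * np.exp(np.hstack([np.zeros((n_paths, 1)), np.cumsum(log_increments, axis=1)]))

cash = put_payoff(s[:, -1])                       # cashflow if held to maturity

# Backward induction: estimate the continuation value by regression and
# exercise wherever the immediate payoff exceeds it.
for t in range(n_steps - 1, 0, -1):
    cash *= disc                                  # discount one step back to time t
    itm = put_payoff(s[:, t]) > 0                 # regress on in-the-money paths only
    if itm.sum() > 10:
        coeffs = np.polyfit(s[itm, t], cash[itm], deg=2)
        continuation = np.polyval(coeffs, s[itm, t])
        exercise = put_payoff(s[itm, t]) > continuation
        cash[itm] = np.where(exercise, put_payoff(s[itm, t]), cash[itm])

price = disc * cash.mean()                        # final discount from t = 1 to t = 0
print("American put price estimate:", round(price, 2))
# For comparison, the corresponding European put is worth about 5.57 under these parameters.
```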
This book deals with the theory and the applications of a new time domain, termed the natural time domain, that was put forward by the authors almost a decade ago (P.A. Varotsos, N.V. Sarlis and E.S. Skordas, Practica of Athens Academy 76, 294-321, 2001; Physical Review E 66, 011902, 2002). In particular, it has been found that novel dynamical features hidden behind time series in complex systems can emerge upon analyzing them in this new time domain, which conforms to the desire to reduce uncertainty and extract signal information as much as possible. The analysis in natural time enables the study of the dynamical evolution of a complex system and identifies when the system enters a critical stage. Hence, natural time plays a key role in predicting impending catastrophic events in general. Relevant examples of data analysis in this new time domain have been published during the last decade in a large variety of fields, e.g., the Earth Sciences, Biology and Physics. The book explains in detail a series of such examples, including the identification of sudden cardiac death risk in Cardiology, the recognition of electric signals that precede earthquakes, the determination of the time of an impending major mainshock in Seismology, and the analysis of the avalanches of the penetration of magnetic flux into thin films of type II superconductors in Condensed Matter Physics. In general, this book is concerned with the time-series analysis of signals emitted from complex systems by means of the new time domain and provides advanced students and research workers in diverse fields with a sound grounding in the fundamentals of current research work on detecting (long-range) correlations in complex time series. Furthermore, the modern techniques of Statistical Physics in time series analysis, for example Hurst analysis, the detrended fluctuation analysis, the wavelet transform, etc., are presented along with their advantages when the natural time domain is employed.
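For orientation, the basic quantities of natural time analysis are easy to state. The sketch below follows the standard definitions used in this literature but is my own illustrative code, not the book's: the k-th of N events is read at natural time chi_k = k/N with weight p_k proportional to its energy Q_k, and the weighted variance kappa_1 = <chi^2> - <chi>^2 is the quantity whose behaviour is used to signal the approach to criticality.

```python
# Illustrative sketch: the variance kappa_1 of natural time for a series of events.
import numpy as np

def kappa1(energies):
    q = np.asarray(energies, dtype=float)
    n = len(q)
    chi = np.arange(1, n + 1) / n          # natural time of the k-th event
    p = q / q.sum()                        # normalized energies act as weights
    return np.sum(p * chi**2) - np.sum(p * chi)**2

rng = np.random.default_rng(6)
print("kappa_1 of a random (exponential) energy series:",
      round(kappa1(rng.exponential(size=500)), 4))
# In the natural time literature, kappa_1 falling to about 0.070 is used as a
# signature that the system approaches a critical stage.
```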
The book addresses the problem of calculating d-dimensional integrals (conditional expectations) in filtering problems. It develops new methods of deterministic numerical integration which can be used to speed up and stabilize filter algorithms. With the help of these methods, better estimates and predictions of latent variables are made possible in the fields of economics, engineering and physics. The resulting procedures are tested within four detailed simulation studies.
Praise for the Second Edition: "Statistics for Research has other fine qualities besides superior organization. The examples and the statistical methods are laid out with unusual clarity by the simple device of using special formats for each. The book was written with great care and is extremely user-friendly."--The UMAP Journal
Although the goals and procedures of statistical research have changed little since the Second Edition of Statistics for Research was published, the almost universal availability of personal computers and statistical computing application packages has made it possible for today's statisticians to do more in less time than ever before. The Third Edition of this bestselling text reflects how the changes in the computing environment have transformed the way statistical analyses are performed today. Based on extensive input from university statistics departments throughout the country, the authors have made several important and timely revisions, including:
* Additional material on probability appears early in the text
* New sections on odds ratios, ratio and difference estimations, repeated measures analysis, and logistic regression
* New examples and exercises, many from the field of the health sciences
* Printouts of computer analyses on all complex procedures
* An accompanying Web site illustrating how to use SAS(R) and JMP(R) for all procedures
The text features the most commonly used statistical techniques for the analysis of research data. As in the earlier editions, emphasis is placed on how to select the proper statistical procedure and how to interpret results. Whenever possible, to avoid using the computer as a "black box" that performs a mysterious process on the data, actual computational procedures are also given. A must for scientists who analyze data, professionals and researchers who need a self-teaching text, and graduate students in statistical methods, Statistics for Research, Third Edition brings the methodology up to date in a very practical and accessible way.