This book considers some models described by means of partial differential equations and boundary conditions with chaotic (stochastic) disturbance. In the framework of stochastic partial differential equations, an approach is suggested to generalize solutions of stochastic boundary problems. The main topic concerns probabilistic aspects with applications to well-known random field models which are representative for the corresponding stochastic Sobolev spaces. (The term "stochastic" in general indicates involvement of appropriate random elements.) It assumes certain knowledge in general analysis and probability (Hilbert space methods, Schwartz distributions, Fourier transform). A very general description of the main problems considered can be given as follows. Suppose we are considering a random field ξ in a region T ⊆ R^d which is associated with a chaotic (stochastic) source η by means of the differential equation (*) in T. A typical chaotic source can be represented by an appropriate random field η with independent values, i.e., a generalized random function η: φ ↦ (φ, η), φ ∈ C_0^∞(T), with independent random variables (φ, η) for any test functions φ with disjoint supports. The property of having independent values implies a certain "roughness" of the random field η, which can only be treated functionally as a very irregular Schwartz distribution. With the lack of a proper development of nonlinear analysis for generalized functions, let us limit ourselves to the
1. For related material see, for example, J. L. Lions, E.
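The setup in this blurb can be written out in standard notation; a sketch only, since the excerpt does not reproduce equation (*), so the operator L below is an assumed placeholder for whatever differential operator the book uses:

```latex
% Sketch of the setup described above. L is an assumed (unspecified)
% differential operator; the excerpt does not reproduce equation (*).
\[
  L\xi = \eta \quad \text{in } T \subseteq \mathbb{R}^d \tag{$*$}
\]
% The chaotic source $\eta$ is a generalized random function with
% independent values: for test functions with disjoint supports,
\[
  \operatorname{supp}\varphi_1 \cap \operatorname{supp}\varphi_2 = \varnothing
  \;\Longrightarrow\;
  (\varphi_1,\eta) \text{ and } (\varphi_2,\eta) \text{ are independent},
  \qquad \varphi_1, \varphi_2 \in C_0^\infty(T).
\]
```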
Since the groundbreaking research of Harry Markowitz into the application of operations research to the optimization of investment portfolios, finance has been one of the most important areas of application of operations research. The use of hidden Markov models (HMMs) has become one of the hottest areas of research for such applications to finance. This handbook offers systematic applications of different methodologies that have been used for decision-making solutions to the financial problems of global markets. As the follow-up to the authors' Hidden Markov Models in Finance (2007), this volume offers the latest research developments and applications of HMMs to finance and other related fields. Amongst the fields of quantitative finance and actuarial science that will be covered are: interest rate theory, fixed-income instruments, currency markets, annuity and insurance policies with option-embedded features, investment strategies, commodity markets, energy, high-frequency trading, credit risk, numerical algorithms, financial econometrics and operational risk. Hidden Markov Models in Finance: Further Developments and Applications, Volume II presents recent applications and case studies in finance and showcases the formulation of emerging potential applications of new research over the book's 11 chapters. This will benefit not only researchers in financial modeling, but also others in fields such as engineering, the physical sciences and social sciences. Ultimately the handbook should prove to be a valuable resource to dynamic researchers interested in taking full advantage of the power and versatility of HMMs in accurately and efficiently capturing many of the processes in the financial market.
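As a hedged illustration of the machinery such applications rest on (not code from the handbook; the two-state "regime" model and every number below are invented for the sketch), the forward algorithm computes the likelihood of an observation sequence under an HMM:

```python
import numpy as np

# Hypothetical two-state market-regime HMM; all parameters are invented.
A = np.array([[0.9, 0.1],   # state-transition probabilities (calm, turbulent)
              [0.2, 0.8]])
B = np.array([[0.8, 0.2],   # emission probabilities; columns: {small move, large move}
              [0.3, 0.7]])
pi = np.array([0.5, 0.5])   # initial state distribution

def forward(obs):
    """Forward algorithm: likelihood P(obs) of an observation sequence."""
    alpha = pi * B[:, obs[0]]           # initialize with the first observation
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate states, absorb next observation
    return alpha.sum()

print(forward([0, 0, 1, 1, 1]))
```

Summing `forward` over all observation sequences of a fixed length returns 1, which is a useful sanity check on any implementation.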
This book includes selected high-quality research papers presented at the International Conference on Data Driven Computing and IoT (DDCIoT 2021), organized jointly by Geetanjali Institute of Technical Studies (GITS), Udaipur, and Rajasthan Technical University, Kota, India, during March 20-21, 2021. It presents influential ideas and systems in the fields of data-driven computing, information technology, and intelligent systems.
Lagrangian expansions can be used to obtain numerous very useful probability models, which have been applied to real-life situations including, but not limited to, branching processes, queuing processes, stochastic processes, environmental toxicology, diffusion of information, ecology, strikes in industries, sales of new products, and amount of production for optimum profits. This book is a comprehensive, systematic treatment of the two classes of Lagrangian probability distributions along with some of their sub-families and their properties; important applications are also given. Graduate students and researchers interested in Lagrangian probability distributions, who have sound knowledge of standard statistical techniques, will find this book valuable. It may be used as a reference text or in courses and seminars on distribution theory and Lagrangian distributions. Applied scientists and researchers in environmental statistics, reliability, sales management, epidemiology, operations research, and the optimization of profits in manufacturing and marketing will benefit immensely from the various applications in the book.
This book prepares students to execute the quantitative and computational needs of the finance industry. The quantitative methods are explained in detail with examples from real financial problems like option pricing, risk management, portfolio selection, etc. Codes are provided in R programming language to execute the methods. Tables and figures, often with real data, illustrate the codes. References to related work are intended to aid the reader to pursue areas of specific interest in further detail. The comprehensive background with economic, statistical, mathematical, and computational theory strengthens the understanding. The coverage is broad, and linkages between different sections are explained. The primary audience is graduate students, while it should also be accessible to advanced undergraduates. Practitioners working in the finance industry will also benefit.
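The book's code is in R; purely as a language-neutral illustration of the kind of computation it describes (option pricing), here is a standard Black-Scholes call-price sketch in Python, with invented parameters:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Invented example: at-the-money call, 1 year, 5% rate, 20% volatility.
print(round(bs_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2), 2))
```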
Probability theory on compact Lie groups deals with the interaction between chance and symmetry, a beautiful area of mathematics of great interest in its own right but which is now also finding increasing applications in statistics and engineering (particularly with respect to signal processing). The author gives a comprehensive introduction to some of the principal areas of study, with an emphasis on applicability. The most important topics presented are: the study of measures via the non-commutative Fourier transform, existence and regularity of densities, properties of random walks and convolution semigroups of measures, and the statistical problem of deconvolution. The emphasis on compact (rather than general) Lie groups helps readers to get acquainted with what is widely seen as a difficult field but is also justified by the wealth of interesting results at this level and the importance of these groups for applications. The book is primarily aimed at researchers working in probability, stochastic analysis and harmonic analysis on groups. It will also be of interest to mathematicians working in Lie theory and physicists, statisticians and engineers who are working on related applications. A background in first year graduate level measure theoretic probability and functional analysis is essential; a background in Lie groups and representation theory is certainly helpful but the first two chapters also offer orientation in these subjects.
Stochastic geometry deals with models for random geometric structures. Its early beginnings are found in playful geometric probability questions, and it has vigorously developed during recent decades, when an increasing number of real-world applications in various sciences required solid mathematical foundations. Integral geometry studies geometric mean values with respect to invariant measures and is, therefore, the appropriate tool for the investigation of random geometric structures that exhibit invariance under translations or motions. Stochastic and Integral Geometry provides the mathematically oriented reader with a rigorous and detailed introduction to the basic stationary models used in stochastic geometry (random sets, point processes, random mosaics) and to the integral geometry that is needed for their investigation. The interplay between both disciplines is demonstrated by various fundamental results. A chapter on selected problems about geometric probabilities and an outlook to non-stationary models are included, and much additional information is given in the section notes.
This book presents the statistical aspects of designing, analyzing and interpreting the results of genome-wide association studies (GWAS) for genetic causes of disease using unrelated subjects. Particular detail is given to the practical aspects of employing the bioinformatics and data handling methods necessary to prepare data for statistical analysis. The goal in writing this book is to give statisticians, epidemiologists, and students in these fields the tools to design a powerful genome-wide study based on current technology, and to show readers how to conduct the analysis of such a study. Design and Analysis of Genome-Wide Association Studies provides a compendium of well-established statistical methods based upon single SNP associations. It also provides an introduction to more advanced statistical methods and issues. Knowing that technology, for instance large scale SNP arrays, is quickly changing, this text has significant lessons for future use with sequencing data. Emphasis on statistical concepts that apply to the problem of finding disease associations irrespective of the technology ensures its future applications. The author includes current bioinformatics tools while outlining additional issues and needs arising from the extensive databases of future large scale sequencing projects.
During the 1980s, the use of log-linear statistical models in behavioral and life-science inquiry increased markedly. Concurrently, log-linear theory, developed largely during the previous decade, has been streamlined and refined. An aim of this second edition is to acquaint old and new readers with these refinements. The most significant change that has occurred is the increased availability of user-oriented computer programs for the performance of log-linear analyses. During this period, all major statistical packages (i.e., BMDP, SAS, and SPSS) introduced either new or improved computer programs designed specifically for the specification and fitting of log-linear models. Consequently, the enhanced ability of practicing researchers to perform log-linear analyses has been accompanied by an enhanced need for didactic explanations of this system of analysis--for explanations of log-linear theory and method that can be readily understood by practitioners and graduate students who do not possess recondite backgrounds in mathematical statistics, yet who desire to obtain a level of understanding beyond that which is typically offered by cookbook approaches to statistical topics. Another aim of this second edition is to fulfill this need. As before, this edition has been prepared for readers who have had at least one intermediate-level course in applied statistics in which the basic principles of factorial analysis of variance and multiple regression were discussed. Also as before, to assist readers with modest preparation in the analysis of quantitative/categorical data, this edition will review topics in such relevant areas as basic probability theory, traditional chi-square goodness-of-fit procedures, and the method of maximum-likelihood estimation. Readers with strong backgrounds in statistics can skim over these preparatory discussions, contained largely in Chapters 2 and 3, without prejudice.
This contributed volume comprises research articles and reviews on topics connected to the mathematical modeling of cellular systems. These contributions cover signaling pathways, stochastic effects, cell motility and mechanics, pattern formation processes, as well as multi-scale approaches. All authors attended the workshop on "Modeling Cellular Systems" which took place in Heidelberg in October 2014. The target audience primarily comprises researchers and experts in the field, but the book may also be beneficial for graduate students.
The book addresses the problem of calculation of d-dimensional integrals (conditional expectations) in filter problems. It develops new methods of deterministic numerical integration, which can be used to speed up and stabilize filter algorithms. With the help of these methods, better estimates and predictions of latent variables are made possible in the fields of economics, engineering and physics. The resulting procedures are tested within four detailed simulation studies.
The focus of this monograph is on generalizing the notion of variation in a set of numbers to variation in a set of probability distributions. The authors collect some known ways of comparing stochastic matrices in the context of information theory, statistics, economics, and population sciences. They then generalize these comparisons, introduce new comparisons, and establish the relations of implication or equivalence among sixteen of these comparisons. Some of the possible implications among these comparisons remain open questions. The results in this book establish a new field of investigation for both mathematicians and scientific users interested in the variations among multiple probability distributions. The work is divided into two parts. The first deals with finite stochastic matrices, which may be interpreted as collections of discrete probability distributions. The first part is presented in a fairly elementary mathematical setting. The introduction provides sketches of applications of concepts and methods to discrete memoryless channels in information theory, to the design and comparison of experiments in statistics, to the measurement of inequality in economics, and to various analytical problems in population genetics, ecology, and demography. Part two is more general and entails more difficult analysis involving Markov kernels. Here, many results of the first part are placed in a more general setting, as required in more sophisticated applications. A great strength of this text is the resulting connections among ideas from diverse fields: mathematics, statistics, economics, and population biology. In providing this array of new tools and concepts, the work will appeal to the practitioner. At the same time, it will serve as an excellent resource for self-study or for a graduate seminar course, as well as a stimulus to further research.
The question of what environmental statistics is about is particularly important when it comes to the formulation of relevant research and training, whether in academia, agencies, or industries. This volume aims to give a new perception on the subject with some examples that are of concern and interest today. Environmental statistics is in a take-off stage both for reasons of societal challenge and statistical opportunity, and is demanding more and more from non-traditional and innovative statistical approaches. The chapters in this volume, which are specially prepared by several outstanding professionals involved in statistics and the environment, discuss the current state of the art in diverse areas of environmental statistics. The volume provides new perspectives and problems for future research, training, policy and regulation. It will be valuable to researchers, teachers, consultants and graduate students in statistics, environmental statistics, statistical ecology, and quantitative environmental sciences in academia, industries, governmental agencies, laboratories and libraries.
This book provides a comprehensive summary of a wide variety of statistical methods for the analysis of repeated measurements. It is designed to be both a useful reference for practitioners and a textbook for a graduate-level course focused on methods for the analysis of repeated measurements. This book will be of interest to * Statisticians in academics, industry, and research organizations * Scientists who design and analyze studies in which repeated measurements are obtained from each experimental unit * Graduate students in statistics and biostatistics. The prerequisites are knowledge of mathematical statistics at the level of Hogg and Craig (1995) and a course in linear regression and ANOVA at the level of Neter et al. (1985). The important features of this book include a comprehensive coverage of classical and recent methods for continuous and categorical outcome variables; numerous homework problems at the end of each chapter; and the extensive use of real data sets in examples and homework problems. The 80 data sets used in the examples and homework problems can be downloaded from www.springer-ny.com (see the list of author websites). Since many of the data sets can be used to demonstrate multiple methods of analysis, instructors can easily develop additional homework problems and exam questions based on the data sets provided. In addition, overhead transparencies produced using TeX and solutions to homework problems are available to course instructors. The overheads also include programming statements and computer output for the examples, prepared primarily using the SAS System. Charles S. Davis is Senior Director of Biostatistics at Elan Pharmaceuticals, San Diego, California. He previously was professor in the Department of Biostatistics at the University of Iowa. He is author or co-author of more than 75 peer-reviewed papers in statistical and medical journals and one book (Categorical Data Analysis Using the SAS System, with Maura Stokes and Gary Koch).
His research and teaching interests include categorical data analysis, methods for the analysis of repeated measurements, and clinical trials. Dr. Davis has consulted with numerous companies and has taught short courses on categorical data analysis, methods for the analysis of repeated measurements, and clinical trials methodology for industrial, government, and academic organizations. He received an "Excellence in Continuing Education" award from the American Statistical Association in 2001 and has served as associate editor of the journals Controlled Clinical Trials and The American Statistician and as chair of the Biometrics Section of the ASA.
In recent years, as part of the increasing "informationization" of industry and the economy, enterprises have been accumulating vast amounts of detailed data such as high-frequency transaction data in financial markets and point-of-sale information on individual items in the retail sector. Similarly, vast amounts of data are now available on business networks based on inter-firm transactions and shareholdings. In the past, these types of information were studied only by economists and management scholars. More recently, however, researchers from other fields, such as physics, mathematics, and information sciences, have become interested in this kind of data and, based on novel empirical approaches to searching for regularities and "laws" akin to those in the natural sciences, have produced intriguing results. This book is the proceedings of the international conference THICCAPFA7 that was titled "New Approaches to the Analysis of Large-Scale Business and Economic Data," held in Tokyo, March 1-5, 2009. The letters THIC denote the Tokyo Tech (Tokyo Institute of Technology)-Hitotsubashi Interdisciplinary Conference. The conference series, titled APFA (Applications of Physics in Financial Analysis), focuses on the analysis of large-scale economic data. It has traditionally brought physicists and economists together to exchange viewpoints and experience (APFA1 in Dublin 1999, APFA2 in Liège 2000, APFA3 in London 2001, APFA4 in Warsaw 2003, APFA5 in Torino 2006, and APFA6 in Lisbon 2007). The aim of the conference is to establish fundamental analytical techniques and data collection methods, taking into account the results from a variety of academic disciplines.
This text is an elementary introduction to stochastic processes in discrete and continuous time, with an introduction to statistical inference. The material is standard and classical for a first course in stochastic processes at the senior/graduate level (lessons 1-12). To provide students with a view of statistics of stochastic processes, three lessons (13-15) were added. These lessons can be either optional or serve as an introduction to statistical inference with dependent observations. Several points of this text need to be elaborated. (1) The pedagogy is somewhat obvious. Since this text is designed for a one semester course, each lesson can be covered in one week or so. Having in mind a mixed audience of students from different departments (Mathematics, Statistics, Economics, Engineering, etc.) we have presented the material in each lesson in the most simple way, with emphasis on motivation of concepts, aspects of applications and computational procedures. Basically, we try to explain to beginners questions such as "What is the topic in this lesson?", "Why this topic?", and "How can this topic be studied mathematically?". The exercises at the end of each lesson will deepen the students' understanding of the material, and test their ability to carry out basic computations. Exercises with an asterisk are optional (difficult) and might not be suitable for homework, but should provide food for thought.
This volume presents the proceedings of the 18th International Probabilistic Workshop (IPW), which was held in Guimaraes, Portugal in May 2021. Probabilistic methods are currently of crucial importance for research and developments in the field of engineering, which face challenges presented by new materials and technologies and rapidly changing societal needs and values. Contemporary needs related to, for example, performance-based design, service-life design, life-cycle analysis, product optimization, assessment of existing structures and structural robustness give rise to new developments as well as accurate and practically applicable probabilistic and statistical engineering methods to support these developments. These proceedings are a valuable resource for anyone interested in contemporary developments in the field of probabilistic engineering applications.
Strategies for Quasi-Monte Carlo builds a framework to design and analyze strategies for randomized quasi-Monte Carlo (RQMC). One key to efficient simulation using RQMC is to structure problems to reveal a small set of important variables, their number being the effective dimension, while the other variables collectively are relatively insignificant. Another is smoothing. The book provides many illustrations of both keys, in particular for problems involving Poisson processes or Gaussian processes. RQMC beats grids by a huge margin. With low effective dimension, RQMC is an order of magnitude more efficient than standard Monte Carlo. With, in addition, certain smoothness (perhaps induced), RQMC is an order of magnitude more efficient than deterministic QMC. Unlike the latter, RQMC permits error estimation via the central limit theorem. For random-dimensional problems, such as occur with discrete-event simulation, RQMC gets judiciously combined with standard Monte Carlo to keep memory requirements bounded. This monograph has been designed to appeal to a diverse audience, including those with applications in queueing, operations research, computational finance, mathematical programming, partial differential equations (both deterministic and stochastic), and particle transport, as well as to probabilists and statisticians wanting to know how to apply effectively a powerful tool, and to those interested in numerical integration or optimization in their own right. It recognizes that the heart of practical application is algorithms, so pseudocodes appear throughout the book. While not primarily a textbook, it is suitable as a supplementary text for certain graduate courses. As a reference, it belongs on the shelf of everyone with a serious interest in improving simulation efficiency by more than incremental increases.
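The randomization-plus-CLT idea mentioned above can be sketched in a few lines; this is not an algorithm from the book, and the generating vector below is ad hoc, chosen only for illustration. Each random shift gives an unbiased estimate, and the spread across shifts yields a standard error:

```python
import numpy as np

def rqmc_estimate(f, dim, n=4096, reps=16, seed=0):
    """Randomly shifted rank-1 lattice rule: average f over `reps`
    independently shifted copies of one lattice; the spread across
    replicates yields a CLT-based standard error."""
    rng = np.random.default_rng(seed)
    z = np.array([1, 364981, 245389, 97823][:dim])  # ad hoc generating vector
    base = np.outer(np.arange(n), z) % n / n        # unshifted lattice in [0,1)^dim
    vals = []
    for _ in range(reps):
        pts = (base + rng.random(dim)) % 1.0        # one uniform random shift, mod 1
        vals.append(f(pts).mean())
    vals = np.asarray(vals)
    return vals.mean(), vals.std(ddof=1) / np.sqrt(reps)

# Integrand with known integral 1 over the unit cube, for checking.
mean, err = rqmc_estimate(lambda x: np.prod(2.0 * x, axis=1), dim=4)
print(mean, err)
```

Because each shifted point set is uniform on the cube, the estimate is unbiased for any generating vector; a well-chosen vector (as the book discusses) is what delivers the efficiency gains.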
This book deals with the impact of uncertainty in input data on the outputs of mathematical models. Uncertain inputs such as scalars, tensors, functions, or domain boundaries are considered. In practical terms, material parameters or constitutive laws, for instance, are uncertain, while quantities such as local temperature, local mechanical stress, or local displacement are monitored. The goal of the worst scenario method is to extremize the quantity over the set of uncertain input data.
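The idea can be caricatured in a few lines; a toy example only (the slab model and all numbers are invented for illustration): with the heat flux fixed and the conductivity known only to lie in an interval, the worst scenario maximizes the monitored temperature drop over the admissible set:

```python
# Toy worst-scenario computation (model and numbers invented):
# a 1-D slab with fixed heat flux q; the monitored quantity is the
# temperature drop dT = q * L / k, and the conductivity k is only
# known to lie in the interval [1.0, 2.0].
def temp_drop(k, q=100.0, L=0.1):
    return q * L / k

admissible = [1.0 + 0.05 * i for i in range(21)]  # discretized admissible set
worst = max(admissible, key=temp_drop)            # extremize over uncertain inputs
print(worst, temp_drop(worst))
```

Here the worst case is the smallest admissible conductivity, as one expects physically; in the book the admissible sets and monitored quantities are far richer than this interval example.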
Statistics is strongly tied to applications in different scientific disciplines, and the most challenging statistical problems arise from problems in the sciences. In fact, the most innovative statistical research flows from the needs of applications in diverse settings. This volume is a testimony to the crucial role that statistics plays in scientific disciplines such as genetics and environmental sciences, among others. The articles in this volume range from human and agricultural genetic DNA research to carcinogens and chemical concentrations in the environment and to space debris and atmospheric chemistry. Also included are some articles on statistical methods which are sufficiently general and flexible to be applied to many practical situations. The papers were refereed by a panel of experts and the editors of the volume. The contributions are based on the talks presented at the Workshop on Statistics and the Sciences, held at the Centro Stefano Franscini in Ascona, Switzerland, during the week of May 23 to 28, 1999. The meeting was jointly organized by the Swiss Federal Institutes of Technology in Lausanne and Zurich, with the financial support of the Minerva Research Foundation. As the presentations at the workshop helped the participants recognize the potential role that statistics can play in the sciences, we hope that this volume will help the reader to focus on the central role of statistics in the specific areas presented here and to extrapolate the results to further applications.
This research monograph utilizes exact and Monte Carlo permutation statistical methods to generate probability values and measures of effect size for a variety of measures of association. Association is broadly defined to include measures of correlation for two interval-level variables, measures of association for two nominal-level variables or two ordinal-level variables, and measures of agreement for two nominal-level or two ordinal-level variables. Additionally, measures of association for mixtures of the three levels of measurement are considered: nominal-ordinal, nominal-interval, and ordinal-interval measures. Numerous comparisons of permutation and classical statistical methods are presented. Unlike classical statistical methods, permutation statistical methods do not rely on theoretical distributions, avoid the usual assumptions of normality and homogeneity of variance, and depend only on the data at hand. This book takes a unique approach to explaining statistics by integrating a large variety of statistical methods, and establishing the rigor of a topic that to many may seem to be a nascent field. This topic is relatively new in that it took modern computing power to make permutation methods available to those working in mainstream research. Written for a statistically informed audience, it is particularly useful for teachers of statistics, practicing statisticians, applied statisticians, and quantitative graduate students in fields such as psychology, medical research, epidemiology, public health, and biology. It can also serve as a textbook in graduate courses in subjects like statistics, psychology, and biology.
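The Monte Carlo permutation idea the monograph is built on can be sketched as follows (a hedged illustration with invented toy data, not an example from the book): shuffle one variable, recompute the statistic, and count arrangements at least as extreme as the observed one.

```python
import numpy as np

def perm_pvalue(x, y, n_perm=999, seed=0):
    """Two-sided Monte Carlo permutation p-value for Pearson's r:
    permute y, recompute |r|, count arrangements at least as extreme
    (the add-one rule counts the observed arrangement itself)."""
    rng = np.random.default_rng(seed)
    r_obs = abs(np.corrcoef(x, y)[0, 1])
    hits = sum(abs(np.corrcoef(x, rng.permutation(y))[0, 1]) >= r_obs
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)

x = np.arange(20, dtype=float)
y = x + np.random.default_rng(1).normal(0.0, 2.0, 20)  # strongly correlated toy data
print(perm_pvalue(x, y))
```

Note that, as the blurb says, nothing here appeals to a theoretical distribution: the reference set is generated entirely from the data at hand.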
Copulas are functions that join multivariate distribution functions to their one-dimensional margins. The study of copulas and their role in statistics is a new but vigorously growing field. In this book the student or practitioner of statistics and probability will find discussions of the fundamental properties of copulas and some of their primary applications. The applications include the study of dependence and measures of association, and the construction of families of bivariate distributions. With nearly a hundred examples and over 150 exercises, this book is suitable as a text or for self-study. The only prerequisite is an upper level undergraduate course in probability and mathematical statistics, although some familiarity with nonparametric statistics would be useful. Knowledge of measure-theoretic probability is not required. Roger B. Nelsen is Professor of Mathematics at Lewis & Clark College in Portland, Oregon. He is also the author of "Proofs Without Words: Exercises in Visual Thinking," published by the Mathematical Association of America.
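The defining idea (uniform one-dimensional margins joined by a dependence structure) can be sketched with a Gaussian copula sampler; this is an illustration of the general concept, hedged as such, not material from the text:

```python
import numpy as np
from math import sqrt, erf

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def gaussian_copula(rho, n, seed=0):
    """Sample (U, V) from a bivariate Gaussian copula with correlation rho:
    correlated normals pushed through the normal CDF have uniform margins
    whose joint law is the copula."""
    rng = np.random.default_rng(seed)
    z1 = rng.standard_normal(n)
    z2 = rho * z1 + sqrt(1.0 - rho**2) * rng.standard_normal(n)
    to_u = np.vectorize(norm_cdf)
    return to_u(z1), to_u(z2)

u, v = gaussian_copula(0.8, 10_000)
# Any margins can now be attached, e.g. exponential: x = -np.log(1.0 - u)
print(u.mean(), np.corrcoef(u, v)[0, 1])
```

Attaching different inverse-CDF transforms to `u` and `v` is exactly the construction-of-bivariate-families application the blurb mentions.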
This text takes readers in a clear and progressive format from simple to recent and advanced topics in pure and applied probability, such as contraction and annealed properties of non-linear semi-groups, functional entropy inequalities, empirical process convergence, increasing propagations of chaos, central limit and Berry-Esseen type theorems, as well as large deviation principles for strong topologies on path-distribution spaces. Topics also include a body of powerful branching and interacting particle methods.
This book contains a rich set of tools for nonparametric analyses, and the purpose of this text is to provide guidance to students and professional researchers on how R is used for nonparametric data analysis in the biological sciences: to introduce when nonparametric approaches to data analysis are appropriate; to introduce the leading nonparametric tests commonly used in biostatistics, and how R is used to generate appropriate statistics for each test; and to introduce the common figures typically associated with nonparametric data analysis, and how R is used to generate appropriate figures in support of each data set. The book focuses on how R is used to distinguish between data that could be classified as nonparametric as opposed to data that could be classified as parametric, with both approaches to data classification covered extensively. Following an introductory lesson on nonparametric statistics for the biological sciences, the book is organized into eight self-contained lessons on various analyses and tests using R to broadly compare differences between data sets and statistical approaches.
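The book works in R; purely as a language-neutral sketch of one such rank-based test, here is a Wilcoxon rank-sum (Mann-Whitney) test with the normal approximation, no tie correction, and invented toy data:

```python
import numpy as np
from math import sqrt, erf

def mann_whitney(x, y):
    """Mann-Whitney (Wilcoxon rank-sum) test: U statistic and two-sided
    p-value from the normal approximation. No tie correction; a sketch
    of the rank-based idea, assuming distinct values."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    combined = np.concatenate([x, y])
    ranks = combined.argsort().argsort() + 1.0      # ranks 1..n (no ties assumed)
    u = ranks[:nx].sum() - nx * (nx + 1) / 2.0      # U for the first sample
    z = (u - nx * ny / 2.0) / sqrt(nx * ny * (nx + ny + 1) / 12.0)
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))
    return u, p

# Toy data: clearly shifted samples should give a small p-value.
u, p = mann_whitney([1.1, 2.3, 1.9, 2.8, 2.2], [4.0, 5.1, 4.7, 5.9, 4.4])
print(u, p)
```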
You may like...
Statistics for Management and Economics, by Gerald Keller, Nicoleta Gaciu (Paperback)
Advances in Quantum Monte Carlo, by Shigenori Tanaka, Stuart M. Rothstein, … (Hardcover)
Mathematical Statistics with…, by William Mendenhall, Dennis Wackerly, … (Paperback)
Integrated Population Biology and…, by Arni S.R. Srinivasa Rao, C.R. Rao (Hardcover)
Stochastic Processes and Their…, by Christo Ananth, N. Anbazhagan, … (Hardcover)
The Practice of Statistics for Business…, by David S Moore, George P. McCabe, … (Mixed media product)
Time Series Analysis - With Applications…, by Jonathan D. Cryer, Kung-Sik Chan (Hardcover)
Numbers, Hypotheses & Conclusions - A…, by Colin Tredoux, Kevin Durrheim (Paperback)