This book collects peer-reviewed contributions on modern statistical methods and topics, stemming from the third workshop on Analytical Methods in Statistics, AMISTAT 2019, held in Liberec, Czech Republic, on September 16-19, 2019. Real-life problems demand statistical solutions, which in turn require new and profound mathematical methods. As such, the book is not only a collection of solved problems but also a source of new methods and their practical extensions. The authoritative contributions focus on analytical methods in statistics, asymptotics, estimation and Fisher information, robustness, stochastic models and inequalities, and other related fields; further, they address, for example, average autoregression quantiles, neural networks, weighted empirical minimum distance estimators, implied volatility surface estimation, the Grenander estimator, non-Gaussian component analysis, meta-learning, and high-dimensional errors-in-variables models.
Harmonic analysis and probability have long enjoyed a mutually beneficial relationship that has been rich and fruitful. This monograph, aimed at researchers and students in these fields, explores several aspects of this relationship. The primary focus of the text is the nontangential maximal function and the area function of a harmonic function and their probabilistic analogues in martingale theory. The text first gives the requisite background material from harmonic analysis and discusses known results concerning the nontangential maximal function and area function, as well as the central and essential role these have played in the development of the field. The book next discusses further refinements of traditional results: among these are sharp good-lambda inequalities and laws of the iterated logarithm involving nontangential maximal functions and area functions. Many applications of these results are given. Throughout, the constant interplay between probability and harmonic analysis is emphasized and explained. The text contains some new and many recent results combined in a coherent presentation.
This book is concerned with important problems of robust (stable) statistical pattern recognition when hypothetical model assumptions about experimental data are violated (disturbed). Pattern recognition theory is the field of applied mathematics in which principles and methods are constructed for classification and identification of objects, phenomena, processes, situations, and signals, i.e., of objects that can be specified by a finite set of features, or properties characterizing the objects (Mathematical Encyclopedia (1984)). Two stages in the development of the mathematical theory of pattern recognition may be observed. At the first stage, until the middle of the 1970s, pattern recognition theory was replenished mainly from adjacent mathematical disciplines: mathematical statistics, functional analysis, discrete mathematics, and information theory. This development stage is characterized by successful solution of pattern recognition problems of different physical nature, but of the simplest form in the sense of the mathematical models used. One of the main approaches to solving pattern recognition problems is the statistical approach, which uses stochastic models of feature variables. Under the statistical approach, the first stage of pattern recognition theory development is characterized by the assumption that the probability data model is known exactly or is estimated from a representative sample of large size with negligible estimation errors (Das Gupta, 1973, 1977; Rey, 1978; Vasiljev, 1983).
This volume has been created in honor of the seventieth birthday of Ted Harris, which was celebrated on January 11th, 1989. The papers represent the wide range of subfields of probability theory in which Ted has made profound and fundamental contributions. This breadth in Ted's research complicates the task of putting together in his honor a book with a unified theme. One common thread noted was the spatial, or geometric, aspect of the phenomena Ted investigated. This volume has been organized around that theme, with papers covering four major subject areas of Ted's research: branching processes, percolation, interacting particle systems, and stochastic flows. These four topics do not exhaust his research interests; his major work on Markov chains is commemorated in the standard terminology "Harris chain" and "Harris recurrent". The editors would like to take this opportunity to thank the speakers at the symposium and the contributors to this volume. Their enthusiastic support is a tribute to Ted Harris. We would like to express our appreciation to Annette Mosley for her efforts in typing the manuscripts and to Arthur Ogawa for typesetting the volume. Finally, we gratefully acknowledge the National Science Foundation and the University of Southern California for their financial support.
Ordered random variables have attracted the attention of several authors. Their basic building block is order statistics, which have several applications in extreme value theory and ordered estimation. The general model for ordered random variables, known as Generalized Order Statistics, was introduced relatively recently by Kamps (1995).
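To make the notion concrete, here is a minimal illustration (hypothetical code, not from the book): the k-th order statistic of a sample is simply its k-th smallest value, so sorting the sample recovers all the order statistics at once, with the minimum and maximum as the extreme cases studied in extreme value theory.

```python
import random

def order_statistics(sample):
    """Sort the sample; the k-th entry (1-indexed) is the k-th order statistic."""
    return sorted(sample)

random.seed(0)
sample = [random.gauss(0.0, 1.0) for _ in range(10)]
stats = order_statistics(sample)
sample_min, sample_max = stats[0], stats[-1]  # the extreme order statistics
```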
This book is the result of the International Symposium on Semi-Markov Processes and their Applications held on June 4-7, 1984 at the Université Libre de Bruxelles with the help of the FNRS (Fonds National de la Recherche Scientifique, Belgium), the Ministère de l'Education Nationale (Belgium) and the Bernoulli Society for Mathematical Statistics and Probability. This international meeting was planned to present a state of the art of semi-Markov theory and its applications, to bring together researchers in this field and to create a platform for open and thorough discussion. The main themes of the Symposium are the first ten sections of this book. The last section presented here gives an exhaustive bibliography on semi-Markov processes for the last ten years. Papers selected for this book are all invited papers, together with some contributed papers retained after strong refereeing. The sections are: I. Markov additive processes and regenerative systems; II. Semi-Markov decision processes; III. Algorithmic and computer-oriented approach; IV. Semi-Markov models in economy and insurance; V. Semi-Markov processes and reliability theory; VI. Simulation and statistics for semi-Markov processes; VII. Semi-Markov processes and queueing theory; VIII. Branching; IX. Applications in medicine; X. Applications in other fields; XI. A second bibliography on semi-Markov processes. It is interesting to note that sections IV to X represent a good sample of the main applications of semi-Markov processes.
Sample data alone never suffice to draw conclusions about populations. Inference always requires assumptions about the population and sampling process. Statistical theory has revealed much about how the strength of assumptions affects the precision of point estimates, but has had much less to say about how it affects the identification of population parameters. Indeed, it has been commonplace to think of identification as a binary event - a parameter is either identified or not - and to view point identification as a pre-condition for inference. Yet there is enormous scope for fruitful inference using data and assumptions that partially identify population parameters. This book explains why and shows how. The book presents in a rigorous and thorough manner the main elements of Charles Manski's research on partial identification of probability distributions. One focus is prediction with missing outcome or covariate data. Another is decomposition of finite mixtures, with application to the analysis of contaminated sampling and ecological inference. A third major focus is the analysis of treatment response. Whatever the particular subject under study, the presentation follows a common path. The author first specifies the sampling process generating the available data and asks what may be learned about population parameters using the empirical evidence alone. He then asks how the (typically) set-valued identification regions for these parameters shrink if various assumptions are imposed. The approach to inference that runs throughout the book is deliberately conservative and thoroughly nonparametric. Conservative nonparametric analysis enables researchers to learn from the available data without imposing untenable assumptions. It enables establishment of a domain of consensus among researchers who may hold disparate beliefs about what assumptions are appropriate. Charles F. Manski is Board of Trustees Professor at Northwestern University.
He is author of Identification Problems in the Social Sciences and Analog Estimation Methods in Econometrics. He is a Fellow of the American Academy of Arts and Sciences, the American Association for the Advancement of Science, and the Econometric Society.
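The worst-case reasoning behind partial identification can be made concrete with a small sketch. Assuming a binary outcome with a known fraction of missing values (the function name and the numbers below are illustrative, not Manski's notation), the identification region for P(y = 1) using the empirical evidence alone is an interval rather than a point:

```python
def worst_case_bounds(p_observed, missing_rate):
    """Worst-case (assumption-free) bounds on P(y = 1) for a binary outcome.

    p_observed   -- P(y = 1 | y observed), estimable from the observed cases
    missing_rate -- fraction of outcomes that are missing

    The missing outcomes could all equal 0 (lower bound) or all equal 1
    (upper bound), so the identification region is an interval whose
    width equals the missing rate.
    """
    lower = p_observed * (1.0 - missing_rate)
    upper = lower + missing_rate
    return lower, upper

# Illustrative numbers: 60% successes among observed cases, 20% missing
lower, upper = worst_case_bounds(0.6, 0.2)  # approximately (0.48, 0.68)
```

Imposing assumptions (for example, that data are missing at random) shrinks this interval, which is exactly the progression the book follows.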
This book covers the latest results in the field of risk analysis. Presented topics include probabilistic models in cancer research, models and methods in longevity, epidemiology of cancer risk, engineering reliability and economic risk problems. The contributions of this volume originate from the 5th International Conference on Risk Analysis (ICRA 5). The conference brought together researchers and practitioners working in the field of risk analysis in order to present new theoretical and computational methods with applications in biology, environmental sciences, public health, economics and finance.
V-INVEX FUNCTIONS AND VECTOR OPTIMIZATION summarizes and synthesizes an aspect of research work that has been done in the area of generalized convexity over the past several decades. Specifically, the book focuses on V-invex functions in vector optimization, which grew out of the work of Jeyakumar and Mond in the 1990s. V-invex functions have attracted much interest because they allow researchers and practitioners to address and provide better solutions to problems that are nonlinear, multi-objective, fractional, and continuous in nature. Hence, V-invex functions have permitted work on a whole new class of vector optimization applications. There has been considerable work on vector optimization by some highly distinguished researchers including Kuhn, Tucker, Geoffrion, Mangasarian, von Neumann, Schaible, Ziemba, etc. The authors have integrated this related research into their book and demonstrate the wide context from which the area has grown and continues to grow. The result is a well-synthesized, accessible, and usable treatment for students, researchers, and practitioners in the areas of OR, optimization, applied mathematics, engineering, and their work relating to a wide range of problems which include financial institutions, logistics, transportation, traffic management, etc.
The concept of conditional specification of distributions is not new but, except in normal families, it has not been well developed in the literature. Computational difficulties undoubtedly hindered or discouraged developments in this direction. However, such roadblocks are of diminished importance today. Questions of compatibility of conditional and marginal specifications of distributions are of fundamental importance in modeling scenarios. Models with conditionals in exponential families are particularly tractable and provide useful models in a broad variety of settings.
'Et moi, ..., si j'avais su comment en revenir, je n'y serais point allé.' ('And I, ..., had I known how to come back, I would never have gone.') (Jules Verne)

'The series is divergent; therefore we may be able to do something with it.' (Oliver Heaviside)

'One service mathematics has rendered the human race. It has put common sense back where it belongs, on the topmost shelf next to the dusty canister labelled "discarded nonsense".' (Eric T. Bell)

Mathematics is a tool for thought. A highly necessary tool in a world where both feedback and nonlinearities abound. Similarly, all kinds of parts of mathematics serve as tools for other parts and for other sciences. Applying a simple rewriting rule to the quote by Eric T. Bell above, one finds such statements as: 'One service topology has rendered mathematical physics ...'; 'One service logic has rendered computer science ...'; 'One service category theory has rendered mathematics ...'. All arguably true. And all statements obtainable this way form part of the raison d'être of this series.
The first edition was released in 1996 and has sold close to 2200 copies. This edition provides an up-to-date, comprehensive treatment of MDS, a statistical technique used to analyze the structure of similarity or dissimilarity data in multidimensional space. The authors have added three chapters and exercise sets. The text is being moved from SSS to SSPP. The book is suitable for courses in statistics for the social or managerial sciences as well as for advanced courses on MDS. All the mathematics required for more advanced topics is developed systematically in the text.
Discrete event simulation and agent-based modeling are increasingly recognized as critical for diagnosing and solving process issues in complex systems. Introduction to Discrete Event Simulation and Agent-based Modeling covers the techniques needed for success in all phases of simulation projects. These include: * Definition - The reader will learn how to plan a project and communicate using a charter. * Input analysis - The reader will discover how to determine defensible sample sizes for all needed data collections, and how to fit distributions to those data. * Simulation - The reader will understand how simulation controllers work, the Monte Carlo (MC) theory behind them, modern verification and validation, and ways to speed up simulation using variance reduction techniques and other methods. * Output analysis - The reader will be able to establish simultaneous intervals on key responses and apply selection and ranking, design of experiments (DOE), and black box optimization to develop defensible improvement recommendations. * Decision support - Methods to inspire creative alternatives are presented, including lean production. Also, over one hundred solved problems and two full case studies are provided, including one on voting machines that received international attention. Introduction to Discrete Event Simulation and Agent-based Modeling demonstrates how simulation can facilitate improvements on the job and in local communities. It allows readers to competently apply technology considered key in many industries and branches of government. It is suitable for undergraduate and graduate students, as well as researchers and other professionals.
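For a flavor of what a minimal discrete event simulation looks like, here is a hedged sketch of an M/M/1 single-server queue (Poisson arrivals, exponential service) simulated arrival by arrival; the function name and rates are illustrative, and a real simulation controller as described in the book is far more general:

```python
import random

def mm1_waits(arrival_rate, service_rate, n_customers, seed=1):
    """Simulate customer waiting times in an M/M/1 queue.

    Each customer arrives an exponential time after the previous one and
    starts service either on arrival or when the server frees up,
    whichever is later.
    """
    rng = random.Random(seed)
    t = 0.0                # current arrival time
    server_free_at = 0.0   # time at which the server next becomes idle
    waits = []
    for _ in range(n_customers):
        t += rng.expovariate(arrival_rate)   # next arrival instant
        start = max(t, server_free_at)       # queue if the server is busy
        waits.append(start - t)              # time spent waiting in queue
        server_free_at = start + rng.expovariate(service_rate)
    return waits

waits = mm1_waits(arrival_rate=1.0, service_rate=2.0, n_customers=1000)
mean_wait = sum(waits) / len(waits)          # point estimate of the mean wait
```

Output analysis in the sense above would then place confidence intervals on `mean_wait` across independent replications rather than trusting a single run.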
This BASS book series publishes selected high-quality papers reflecting recent advances in the design and biostatistical analysis of biopharmaceutical experiments, particularly biopharmaceutical clinical trials. The papers were selected from invited presentations at the Biopharmaceutical Applied Statistics Symposium (BASS), which was founded by the first Editor in 1994 and has since become the premier international conference in biopharmaceutical statistics. The primary aims of the BASS are: 1) to raise funding to support graduate students in biostatistics programs, and 2) to provide an opportunity for professionals engaged in pharmaceutical drug research and development to share insights into solving the problems they encounter. The BASS book series is initially divided into three volumes addressing: 1) Design of Clinical Trials; 2) Biostatistical Analysis of Clinical Trials; and 3) Pharmaceutical Applications. This book is the third of the 3-volume book series. The topics covered include: Targeted Learning of Optimal Individualized Treatment Rules under Cost Constraints; Uses of Mixture Normal Distribution in Genomics and Otherwise; Personalized Medicine - Design Considerations; Adaptive Biomarker Subpopulation and Tumor Type Selection in Phase III Oncology Trials; High Dimensional Data in Genomics; Synergy or Additivity - The Importance of Defining the Primary Endpoint; Full Bayesian Adaptive Dose Finding Using Toxicity Probability Interval (TPI); Alpha-recycling for the Analyses of Primary and Secondary Endpoints of Clinical Trials; Expanded Interpretations of Results of Carcinogenicity Studies of Pharmaceuticals; Randomized Clinical Trials for Orphan Drug Development; Mediation Modeling in Randomized Trials with Non-normal Outcome Variables; Statistical Considerations in Using Images in Clinical Trials; Interesting Applications over 30 Years of Consulting; Uncovering Fraud, Misconduct and Other Data Quality Issues in Clinical Trials; Development and Evaluation of High Dimensional Prognostic Models; and Design and Analysis of Biosimilar Studies.
Kiyosi Itô, the founder of stochastic calculus, is one of the few central figures of twentieth-century mathematics who reshaped the mathematical world. Today stochastic calculus is a central research field with applications in several other disciplines, for example physics, engineering, biology, economics and finance. The Abel Symposium 2005 was organized as a tribute to the work of Kiyosi Itô on the occasion of his 90th birthday. Distinguished researchers from all over the world were invited to present the newest developments within the exciting and fast growing field of stochastic analysis. The present volume combines both papers from the invited speakers and contributions by the presenting lecturers. A special feature is the memoirs that Kiyosi Itô wrote for this occasion. These are valuable pages for both young and established researchers in the field.
This book offers a straightforward introduction to the mathematical theory of probability. It presents the central results and techniques of the subject in a complete and self-contained account. The emphasis is on giving results in simple forms with clear proofs, and on eschewing more powerful forms of theorems that require technically involved proofs. Throughout, a wide variety of exercises illustrate and develop the ideas in the text.
This volume contains a selection of invited and contributed papers presented at the International Conference on Linear Statistical Inference LINSTAT '93, held in Poznan, Poland, from May 31 to June 4, 1993. Topics treated include estimation, prediction and testing in linear models, robustness of relevant statistical methods, estimation of variance components appearing in linear models, generalizations to nonlinear models, design and analysis of experiments, including optimality and comparison of linear experiments. This text should be of interest to mathematical statisticians, applied statisticians, biometricians, biostatisticians, and econometricians.
This book is in two volumes, and is intended as a text for introductory courses in probability and statistics at the second or third year university level. It emphasizes applications and logical principles rather than mathematical theory. A good background in freshman calculus is sufficient for most of the material presented. Several starred sections have been included as supplementary material. Nearly 900 problems and exercises of varying difficulty are given, and Appendix A contains answers to about one-third of them. The first volume (Chapters 1-8) deals with probability models and with mathematical methods for describing and manipulating them. It is similar in content and organization to the 1979 edition. Some sections have been rewritten and expanded - for example, the discussions of independent random variables and conditional probability. Many new exercises have been added. In the second volume (Chapters 9-16), probability models are used as the basis for the analysis and interpretation of data. This material has been revised extensively. Chapters 9 and 10 describe the use of the likelihood function in estimation problems, as in the 1979 edition. Chapter 11 then discusses frequency properties of estimation procedures, and introduces coverage probability and confidence intervals. Chapter 12 describes tests of significance, with applications primarily to frequency data. The likelihood ratio statistic is used to unify the material on testing, and connect it with earlier material on estimation.
The past several years have seen the creation and extension of a very conclusive theory of statistics and probability. Many of the research workers who have been concerned with both probability and statistics felt the need for meetings that provide an opportunity for personal contacts among scholars whose fields of specialization cover broad spectra in both statistics and probability: to discuss major open problems and new solutions, to provide encouragement for further research through the lectures of carefully selected scholars, and moreover to introduce to younger colleagues the latest research techniques and thus to stimulate their interest in research. To meet these goals, the series of Pannonian Symposia on Mathematical Statistics was organized, beginning in the year 1979: the first, second and fourth in Bad Tatzmannsdorf, Burgenland, Austria, the third and fifth in Visegrád, Hungary. The Sixth Pannonian Symposium was held in Bad Tatzmannsdorf again, between 14 and 20 September 1986, under the auspices of Dr. Heinz FISCHER, Federal Minister of Science and Research, Theodor KERY, President of the State Government of Burgenland, Dr. Franz SAUERZOPF, Vice-President of the State Government of Burgenland and Dr. Josef SCHMIDL, President of the Austrian Statistical Central Office. The members of the Honorary Committee were Pál ERDŐS, Władysław ORLICZ, Pál RÉVÉSZ, Leopold SCHMETTERER and István VINCZE; those of the Organizing Committee were Wilfried GROSSMANN (University of Vienna), Franz KONECNY (University of Agriculture of Vienna) and, as the chairman, Wolfgang WERTZ (Technical University of Vienna).
This volume has its origin in the third Workshop on Maximum-Entropy and Bayesian Methods in Applied Statistics, held at the University of Wyoming, August 1 to 4, 1983. It was anticipated that the proceedings of this workshop could not be prepared in a timely fashion, so most of the papers were not collected until a year or so ago. Because most of the papers are in the nature of advancing theory or solving specific problems, as opposed to status reports, it is believed that the contents of this volume will be of lasting interest to the Bayesian community. The workshop was organized to bring together researchers from different fields to examine critically maximum-entropy and Bayesian methods in science, engineering, medicine, economics, and other disciplines. Some of the papers were chosen specifically to kindle interest in new areas that may offer new tools or insight to the reader or to stimulate work on pressing problems that appear to be ideally suited to the maximum-entropy or Bayesian method.
This selection of reviews and papers is intended to stimulate renewed reflection on the fundamental and practical aspects of probability in physics. While putting emphasis on conceptual aspects in the foundations of statistical and quantum mechanics, the book deals with the philosophy of probability in its interrelation with mathematics and physics in general. Addressing graduate students and researchers in physics and mathematics together with philosophers of science, the contributions avoid cumbersome technicalities in order to make the book worthwhile reading for nonspecialists and specialists alike.
This book discusses research, methods, and recent developments in the interdisciplinary field that spans research in visualization, eye tracking, human-computer interaction, and psychology. It presents extended versions of papers from the First Workshop on Eye Tracking and Visualization (ETVIS), which was organized as a workshop of the IEEE VIS Conference 2015. Topics include visualization and visual analytics of eye-tracking data, metrics and cognitive models, eye-tracking experiments in the context of visualization interfaces, and eye tracking in 3D and immersive environments. The extended ETVIS papers are complemented by a chapter offering an overview of visualization approaches for analyzing eye-tracking data and a chapter that discusses electrooculography (EOG) as an alternative means of acquiring information about eye movements. Covering scientific visualization, information visualization, and visual analytics, this book is a valuable resource for eye-tracking researchers within the visualization community.
This book is a collection of topical survey articles by leading researchers in the fields of applied analysis and probability theory, working on the mathematical description of growth phenomena. Particular emphasis is on the interplay of the two fields, with articles by analysts being accessible for researchers in probability, and vice versa. Mathematical methods discussed in the book comprise large deviation theory, lace expansion, harmonic multi-scale techniques and homogenisation of partial differential equations. Models based on the physics of individual particles are discussed alongside models based on the continuum description of large collections of particles, and the mathematical theories are used to describe physical phenomena such as droplet formation, Bose-Einstein condensation, Anderson localization, Ostwald ripening, or the formation of the early universe. The combination of articles from the two fields of analysis and probability is highly unusual and makes this book an important resource for researchers working in all areas close to the interface of these fields.
You may like...
- Statistics For Business And Economics, David Anderson, James Cochran, … (Paperback) R1,305
- Time Series Analysis - With Applications…, Jonathan D. Cryer, Kung-Sik Chan (Hardcover) R2,636
- Numbers, Hypotheses & Conclusions - A…, Colin Tredoux, Kevin Durrheim (Paperback)
- The Oxford Handbook of Functional Data…, Frederic Ferraty, Yves Romain (Hardcover) R4,606
- Probability - An Introduction, Geoffrey Grimmett, Dominic Welsh (Hardcover) R4,206