This book provides an up-to-date review of the general principles of, and techniques for, confirmatory adaptive designs. Confirmatory adaptive designs are a generalization of group sequential designs. With these designs, interim analyses are performed in order to stop the trial prematurely while controlling the Type I error rate. In adaptive designs, it is also permissible to perform a data-driven change of relevant aspects of the study design at interim stages. This includes, for example, a sample-size reassessment, a treatment-arm selection or a selection of a pre-specified sub-population. This adaptive methodology was essentially introduced in the 1990s. Since then, it has become popular and the object of intense discussion, and it still represents a rapidly growing field of statistical research. The book describes adaptive design methodology at an elementary level, while also considering design and planning issues as well as methods for analyzing an adaptively planned trial, including estimation methods and methods for the determination of an overall p-value. Part I of the book provides the group sequential methods that are necessary for understanding and applying the adaptive design methodology supplied in Parts II and III. The book contains many examples that illustrate the use of the methods in practical applications. It is primarily written for applied statisticians from academia and industry who are interested in confirmatory adaptive designs. It is assumed that readers are familiar with the basic principles of descriptive statistics, parameter estimation and statistical testing. The book is also suitable for an advanced statistics course for applied statisticians or clinicians with a sound statistical background.
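To give a flavor of the machinery, here is a minimal sketch of one standard adaptive technique, the weighted inverse normal combination of stage-wise p-values (illustrative only, not necessarily the book's exact formulation); the weights are assumptions that must be pre-specified before the trial starts.

```python
# Minimal sketch: two-stage weighted inverse normal combination test.
# Stage-wise p-values p1, p2 are combined with pre-specified weights
# satisfying w1^2 + w2^2 = 1, so the combined statistic is N(0,1) under H0.
from scipy.stats import norm

def combined_p(p1, p2, w1=0.5**0.5, w2=0.5**0.5):
    """Overall one-sided p-value from two independent stage p-values."""
    z = w1 * norm.ppf(1 - p1) + w2 * norm.ppf(1 - p2)
    return 1 - norm.cdf(z)

# Even if the stage-2 sample size was re-assessed at the interim look,
# the combination statistic preserves the overall Type I error rate.
print(combined_p(0.08, 0.01))  # ~0.004, reject at one-sided alpha = 0.025
```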
"Stochastic Tools in Mathematics and Science" covers basic stochastic tools used in physics, chemistry, engineering and the life sciences. The topics covered include conditional expectations, stochastic processes, Brownian motion and its relation to partial differential equations, Langevin equations, the Liouville and Fokker-Planck equations, as well as Markov chain Monte Carlo algorithms, renormalization, basic statistical mechanics, and generalized Langevin equations and the Mori-Zwanzig formalism. The applications include sampling algorithms, data assimilation, prediction from partial data, spectral analysis, and turbulence. The book is based on lecture notes from a class that has attracted graduate and advanced undergraduate students from mathematics and from many other science departments at the University of California, Berkeley. Each chapter is followed by exercises. The book will be useful for scientists and engineers working in a wide range of fields and applications. For this new edition the material has been thoroughly reorganized and updated, and new sections on scaling, sampling, filtering and data assimilation, based on recent research, have been added. There are additional figures and exercises. Review of earlier edition: "This is an excellent concise textbook which can be used for self-study by graduate and advanced undergraduate students and as a recommended textbook for an introductory course on probabilistic tools in science." Mathematical Reviews, 2006
This book describes methods for designing and analyzing experiments that are conducted using a computer code (a computer experiment) and, when possible, a physical experiment. Computer experiments continue to increase in popularity as surrogates for and adjuncts to physical experiments. Since the publication of the first edition, there have been many methodological advances and software developments to implement these new methodologies. The computer experiments literature has emphasized the construction of algorithms for various data analysis tasks (design construction, prediction, sensitivity analysis, and calibration, among others) and the development of web-based repositories of designs for immediate application. While written at a level accessible to readers with Master's-level training in statistics, the book offers sufficient detail to be useful for practitioners and researchers. New to this revised and expanded edition:
* An expanded presentation of basic material on computer experiments and Gaussian processes, with additional simulations and examples
* A new comparison of plug-in prediction methodologies for real-valued simulator output
* An enlarged discussion of space-filling designs, including Latin hypercube designs (LHDs), near-orthogonal designs, and nonrectangular regions (a small sketch follows this list)
* A chapter-length description of process-based designs for optimization, improving overall fit, quantile estimation, and Pareto optimization
* A new chapter describing graphical and numerical sensitivity analysis tools
* Substantial new material on calibration-based prediction and inference for calibration parameters
* Lists of software that can be used to fit the models discussed in the book, to aid practitioners
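As a concrete illustration of the space-filling designs mentioned above, the sketch below builds a basic Latin hypercube design; this simple construction is a generic assumption-laden example, not the book's software.

```python
# Generic sketch of a Latin hypercube design (LHD): n points in [0, 1]^d
# with exactly one point in each of the n equal-width strata on every axis.
import numpy as np

def latin_hypercube(n, d, seed=0):
    rng = np.random.default_rng(seed)
    strata = np.tile(np.arange(n), (d, 1))       # d rows of 0..n-1
    strata = rng.permuted(strata, axis=1).T      # shuffle strata per axis
    return (strata + rng.random((n, d))) / n     # jitter inside each stratum

print(latin_hypercube(5, 2))
```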
Contributions in this volume focus on computationally efficient algorithms and rigorous mathematical theories for analyzing large-scale networks. Researchers and students in mathematics, economics, statistics, computer science and engineering will find this collection a valuable resource filled with the latest research in network analysis. Computational aspects and applications of large-scale networks in market models, neural networks, social networks, power transmission grids, the maximum clique problem, telecommunication networks, and complexity graphs are included, along with new tools for efficient analysis of large-scale networks. These proceedings are the result of the 7th International Conference in Network Analysis, held at the Higher School of Economics, Nizhny Novgorod, in June 2017. The conference brought together scientists, engineers, and researchers from academia, industry, and government.
The application of statistical methods to physics is essential. This unique book on statistical physics offers an advanced approach with numerous applications to the modern problems students are confronted with. Therefore, the text contains more concepts and methods in statistics than the student would need for statistical mechanics alone. Methods from mathematical statistics and stochastics for the analysis of data are discussed as well. The book is divided into two parts, focusing first on the modeling of statistical systems and then on the analysis of these systems. Problems with hints for solution help the students to deepen their knowledge. The third edition has been updated and enlarged with new sections deepening the knowledge about data analysis. Moreover, a customized set of problems with solutions is accessible on the Web at extras.springer.com.
This book introduces advanced undergraduates, graduate students and practitioners to statistical methods for ranking data. An important branch of nonparametric statistics is oriented towards the analysis of ranking data. Rank correlation is defined through the notion of distance functions, and the notion of compatibility is introduced to deal with incomplete data. Ranking data are also modeled using a variety of modern tools such as CART, MCMC, the EM algorithm and factor analysis. The book deals with the statistical methods used for analyzing such data and provides a novel and unifying approach to hypothesis testing. The techniques described are illustrated with examples, and the statistical software is provided on the authors' website.
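To make the distance-function view of rank correlation concrete, here is a small sketch (illustrative notation, not the book's) relating Kendall's tau to the Kendall distance, i.e. the number of pairs the two rankings order differently.

```python
# Kendall's tau as an affine transform of the Kendall distance between
# two rankings: tau = 1 - 2 * d / d_max, with d_max = n(n-1)/2.
from itertools import combinations

def kendall_distance(r1, r2):
    """Count pairs of items ordered differently by the two rankings."""
    return sum((r1[i] - r1[j]) * (r2[i] - r2[j]) < 0
               for i, j in combinations(range(len(r1)), 2))

def kendall_tau(r1, r2):
    n = len(r1)
    max_d = n * (n - 1) / 2          # distance between reversed rankings
    return 1 - 2 * kendall_distance(r1, r2) / max_d

print(kendall_tau([1, 2, 3, 4], [1, 3, 2, 4]))   # 0.666...
```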
The Bible's grand narrative about Israel's Exodus from Egypt is central to Biblical religion; to Jewish, Christian, and Muslim identity; and to the formation of the academic disciplines studying the ancient Near East. It has also been a pervasive theme in artistic and popular imagination. Israel's Exodus in Transdisciplinary Perspective is a pioneering work surveying this tradition in unprecedented breadth, combining archaeological discovery, quantitative methodology and close literary reading. Archaeologists, Egyptologists, Biblical scholars, computer scientists, geoscientists and other experts contribute their diverse approaches in a novel, transdisciplinary consideration of ancient topography, Egyptian and Near Eastern parallels to the Exodus story, the historicity of the Exodus, the interface of the Exodus question with archaeological fieldwork on emergent Israel, the formation of biblical literature, and the cultural memory of the Exodus in ancient Israel and beyond. This edited volume contains research presented at the groundbreaking symposium "Out of Egypt: Israel's Exodus Between Text and Memory, History and Imagination," held in 2013 at the Qualcomm Institute of the University of California, San Diego. The combination of 44 contributions by an international group of scholars from diverse disciplines makes this the first such transdisciplinary study of ancient text and history. In the original conference and with this new volume, revolutionary media, such as a 3D immersive virtual reality environment, impart innovative, Exodus-based research to a wider audience. Out of archaeology, ancient texts, science and technology emerges an up-to-date picture of the Exodus for the 21st century and a new standard for collaborative research.
Recent years have seen significant advances in the use of risk analysis in many government agencies and private corporations. These advances are reflected both in the state of practice of risk analysis and in the status of governmental requirements and industry standards. Because current risk and reliability models are often used to support regulatory decisions, it is critical that the inference methods used in these models be robust and technically sound. The goal of Bayesian Inference for Probabilistic Risk Assessment is to provide a Bayesian foundation for framing probabilistic problems and performing inference on these problems. It is aimed at scientists and engineers who perform or review risk analyses, and it provides an analytical structure for combining data and information from various sources to generate estimates of the parameters of uncertainty distributions used in risk and reliability models. Inference in the book employs a modern computational approach known as Markov chain Monte Carlo (MCMC). MCMC methods were first described in the early 1950s in research into Monte Carlo sampling at Los Alamos. More recently, with the advance of computing power and improved analysis algorithms, MCMC has increasingly been used for a wide range of Bayesian inference problems in a variety of disciplines. MCMC is effectively (although not literally) numerical (Monte Carlo) integration by way of Markov chains. Inference is performed by sampling from a target distribution (i.e., a specially constructed Markov chain based upon the inference problem) until convergence (to the posterior distribution) is achieved. The MCMC approach may be implemented using custom-written routines or existing general-purpose commercial or open-source software. This book uses OpenBUGS, the open-source successor to WinBUGS, to solve the inference problems that are described. A powerful feature of OpenBUGS is its automatic selection of an appropriate MCMC sampling scheme for a given problem. The approach taken in this book is to provide analysis "building blocks" that can be modified, combined, or used as-is to solve a variety of challenging problems. The MCMC approach is implemented via textual scripts similar to a macro-type programming language. Accompanying each script is a graphical Bayesian network illustrating the elements of the script and the overall inference problem being solved. The book also covers the important topic of MCMC convergence.
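The book itself works through OpenBUGS scripts, but the sampling-until-convergence idea can be sketched in a few lines of generic code. The random-walk Metropolis sampler below, for a failure probability with a uniform prior and made-up data, is an illustrative stand-in, not the book's method.

```python
# Minimal Metropolis sampler for a typical risk-assessment inference:
# posterior of a failure probability p given x failures in n demands,
# with a uniform prior (data and prior are illustrative assumptions).
import math, random

def log_post(p, x=3, n=100):
    if not 0 < p < 1:
        return -math.inf
    return x * math.log(p) + (n - x) * math.log(1 - p)  # + constant

random.seed(1)
p, chain = 0.5, []
for _ in range(20000):
    prop = p + random.gauss(0, 0.05)          # random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(p):
        p = prop                              # accept the move
    chain.append(p)

burned = chain[5000:]                         # discard burn-in
print(sum(burned) / len(burned))              # posterior mean ~ (x+1)/(n+2)
```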
This book discusses the psychological traits associated with drug consumption through the statistical analysis of a new database with information on 1885 respondents and use of 18 drugs. After reviewing published works on the psychological profiles of drug users and describing the data mining and machine learning methods used, it demonstrates that the personality traits (five factor model, impulsivity, and sensation seeking) together with simple demographic data make it possible to predict the risk of consumption of individual drugs with a sensitivity and specificity above 70% for most drugs. It also analyzes the correlations of use of different substances and describes the groups of drugs with correlated use, identifying significant differences in personality profiles for users of different drugs. The book is intended for advanced undergraduates and first-year PhD students, as well as researchers and practitioners. Although no previous knowledge of machine learning, advanced data mining concepts or modern psychology of personality is assumed, familiarity with basic statistics and some experience in the use of probabilities would be helpful. For a more detailed introduction to statistical methods, the book provides recommendations for undergraduate textbooks.
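For orientation, the sensitivity and specificity figures quoted above are computed from a classifier's confusion counts as in the sketch below; the labels and predictions here are hypothetical placeholders, not the book's data.

```python
# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP),
# for a binary "user / non-user" classifier (toy inputs).
def sensitivity_specificity(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sensitivity_specificity([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1])
print(sens, spec)   # 0.667, 0.667
```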
This book offers a comprehensive and accessible exposition of Euclidean Distance Matrices (EDMs) and the rigidity theory of bar-and-joint frameworks. It is based on the one-to-one correspondence between EDMs and projected Gram matrices; accordingly, the machinery of semidefinite programming is a common thread that runs throughout the book. As a result, two parallel approaches to rigidity theory are presented. The first is a traditional and more intuitive approach based on a vector representation of a point configuration. The second is based on a Gram matrix representation of a point configuration. Euclidean Distance Matrices and Their Applications in Rigidity Theory begins by establishing the background needed for the rest of the book. The focus of Chapter 1 is on pertinent results from matrix theory, graph theory and convexity theory, while Chapter 2 is devoted to positive semidefinite (PSD) matrices, owing to the key role these matrices play in the book's approach. Chapters 3 to 7 provide detailed studies of EDMs, and in particular their various characterizations, classes, eigenvalues and geometry. Chapter 8 serves as a transitional chapter between EDMs and rigidity theory. Chapters 9 and 10 cover local and universal rigidities of bar-and-joint frameworks. The book is self-contained and should be accessible to a wide audience, including students and researchers in statistics, operations research, computational biochemistry, engineering, computer science and mathematics.
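The EDM-Gram correspondence that anchors the book can be sketched with the classical double-centering identity B = -(1/2) J D J; the three-point configuration below is an illustrative assumption.

```python
# From a squared-distance EDM D to the projected Gram matrix B and back.
import numpy as np

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
D = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)  # squared EDM

n = len(pts)
J = np.eye(n) - np.ones((n, n)) / n          # centering projector
B = -0.5 * J @ D @ J                         # projected Gram matrix (PSD)

# B recovers the centered configuration up to rotation:
w, V = np.linalg.eigh(B)
rec = V[:, -2:] * np.sqrt(w[-2:])            # top-2 eigenpairs (rank 2)
print(np.allclose(((rec[:, None] - rec[None, :]) ** 2).sum(-1), D))  # True
```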
This volume is composed of peer-reviewed papers that have developed from the First Conference of the International Society for Nonparametric Statistics (ISNPS). This inaugural conference took place in Chalkidiki, Greece, June 15-19, 2012. It was organized with the co-sponsorship of the IMS, the ISI, and other organizations. M.G. Akritas, S.N. Lahiri, and D.N. Politis are the first executive committee members of ISNPS and the editors of this volume. ISNPS has a distinguished Advisory Committee that includes Professors R. Beran, P. Bickel, R. Carroll, D. Cook, P. Hall, R. Johnson, B. Lindsay, E. Parzen, P. Robinson, M. Rosenblatt, G. Roussas, T. SubbaRao, and G. Wahba. The Charting Committee of ISNPS consists of more than 50 prominent researchers from all over the world. The chapters in this volume bring forth recent advances and trends in several areas of nonparametric statistics. In this way, the volume facilitates the exchange of research ideas, promotes collaboration among researchers from all over the world, and contributes to the further development of the field. The conference program included over 250 talks, including special invited talks, plenary talks, and contributed talks on all areas of nonparametric statistics. Out of these talks, some of the most pertinent ones have been refereed and developed into chapters that share both research and developments in the field.
This book presents state-of-the-art solution methods and applications of stochastic optimal control. It is a collection of extended papers discussed at the traditional Liverpool workshop on controlled stochastic processes, with participants from both the east and the west. New problems are formulated, and the progress of ongoing research is reported. Topics covered in this book include theoretical results and numerical methods for Markov and semi-Markov decision processes, optimal stopping of Markov processes, stochastic games, problems with partial information, optimal filtering, robust control, Q-learning, and self-organizing algorithms. Real-life case studies and applications, e.g., queueing systems, forest management, control of water resources, marketing science, and healthcare, are presented. Scientific researchers and postgraduate students interested in stochastic optimal control, as well as practitioners, will find this book appealing and a valuable reference.
Since the early eighties, Ali Suleyman Ustunel has been one of the main contributors to the field of Malliavin calculus. In a workshop held in Paris in June 2010, several prominent researchers gave exciting talks in honor of his 60th birthday. The present volume includes scientific contributions from this workshop.
This book compiles and presents new developments in statistical causal inference. The accompanying data and computer programs are publicly available so readers may replicate the model development and data analysis presented in each chapter. In this way, methodology is taught so that readers may implement it directly. The book brings together experts engaged in causal inference research to present and discuss recent issues in causal inference methodological development. This is also a timely look at causal inference applied to scenarios that range from clinical trials to mediation and public health research more broadly. In an academic setting, this book will serve as a reference and guide to a course in causal inference at the graduate level (Master's or Doctorate). It is particularly relevant for students pursuing degrees in statistics, biostatistics, and computational biology. Researchers and data analysts in public health and biomedical research will also find this book to be an important reference.
A ground-breaking and practical treatment of probability and stochastic processes, "A Modern Theory of Random Variation" is a new and radical re-formulation of the mathematical underpinnings of subjects as diverse as investment, communication engineering, and quantum mechanics. Setting aside the classical theory of probability measure spaces, the book utilizes a mathematically rigorous version of the theory of random variation that bases itself exclusively on finitely additive probability distribution functions. In place of twentieth-century Lebesgue integration and measure theory, the author uses the simpler concept of Riemann sums and the non-absolute Riemann-type integration of Henstock. Readers are supplied with an accessible approach to standard elements of probability theory such as the central limit theorem and Brownian motion, as well as remarkable new results on Feynman diagrams and stochastic integrals. Throughout the book, detailed numerical demonstrations accompany the discussions of abstract mathematical theory, from the simplest elements of the subject to the most complex. In addition, an array of numerical examples and vivid illustrations showcase how the presented methods and applications can be undertaken at various levels of complexity. "A Modern Theory of Random Variation" is a suitable book for courses on mathematical analysis, probability theory, and mathematical finance at the upper-undergraduate and graduate levels. The book is also an indispensable resource for researchers and practitioners who are seeking new concepts, techniques and methodologies in data analysis, numerical calculation, and financial asset valuation. Patrick Muldowney, PhD, served as lecturer at the Magee Business School of the University of Ulster for over twenty years. Dr. Muldowney has published extensively in his areas of research, including integration theory, financial mathematics, and random variation.
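Schematically, the Riemann-sum approach replaces measure-theoretic integrals with finite sums over partitions of the range of a distribution function F; the display below is a generic illustration of that idea, not the book's precise Henstock construction.

```latex
% Expectation as a Riemann-type sum over a partition t_0 < t_1 < ... < t_m,
% with evaluation points x_j in [t_{j-1}, t_j]; Henstock integration refines
% the partitions by a gauge rather than invoking measure theory.
\[
  \mathbb{E}[f(X)] \;\approx\; \sum_{j=1}^{m} f(x_j)\,
  \bigl( F(t_j) - F(t_{j-1}) \bigr)
\]
```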
Presents a unique study of Integrative Problem-Solving (IPS). The consideration of 'Decadence' is essential in the scientific study of environmental and other problems and their rigorous solution, because the broad context within which the problems emerge can affect their solution. Stochastic reasoning underlines the conceptual and methodological framework of IPS, and its formulation has a mathematical life of its own that accounts for the multidisciplinarity of real world problems, the multisourced uncertainties characterizing their solution, and the different thinking modes of the people involved. Only by interpolating between the full range of disciplines (including stochastic mathematics, physical science, neuropsychology, philosophy, and sociology) and the associated thinking modes can scientists arrive at a satisfactory account of problem-solving, and be able to distinguish between a technically complete problem-solution, and a solution that has social impact.
This book focuses on the application and development of information geometric methods in the analysis, classification and retrieval of images and signals. It provides introductory chapters to help those new to information geometry and applies the theory to several applications. This area has developed rapidly over recent years, propelled by the major theoretical developments in information geometry, efficient data and image acquisition and the desire to process and interpret large databases of digital information. The book addresses both the transfer of methodology to practitioners involved in database analysis and in its efficient computational implementation.
This book presents various recently developed and traditional statistical techniques, which are increasingly being applied in social science research. The social sciences cover diverse phenomena arising in society, the economy and the environment, some of which are too complex to allow concrete statements; some cannot be defined by direct observations or measurements; some are culture- (or region-) specific, while others are generic and common. Statistics, being a scientific method - as distinct from a 'science' related to any one type of phenomena - is used to make inductive inferences regarding various phenomena. The book addresses both qualitative and quantitative research (a combination of which is essential in social science research) and offers valuable supplementary reading at an advanced level for researchers.
This proceedings book highlights the latest research and developments in psychometrics and statistics. Featuring contributions presented at the 82nd Annual Meeting of the Psychometric Society (IMPS), organized by the University of Zurich and held in Zurich, Switzerland from July 17 to 21, 2017, its 34 chapters address a diverse range of psychometric topics including item response theory, factor analysis, causal inference, Bayesian statistics, test equating, cognitive diagnostic models and multistage adaptive testing. The IMPS is one of the largest international meetings on quantitative measurement in psychology, education and the social sciences, attracting over 500 participants and 250 paper presentations from around the world every year. This book gathers the contributions of selected presenters, which were subsequently expanded and peer-reviewed.
This book provides a general framework for learning sparse graphical models with conditional independence tests. It includes complete treatments for Gaussian, Poisson, multinomial, and mixed data; unified treatments for covariate adjustment, data integration, and network comparison; unified treatments for missing data and heterogeneous data; efficient methods for the joint estimation of multiple graphical models; effective methods of high-dimensional variable selection; and effective methods of high-dimensional inference. The methods possess an embarrassingly parallel structure in performing conditional independence tests (see the sketch after this list), and the computation can be significantly accelerated by running in parallel on a multi-core computer or a parallel architecture. This book is intended to serve researchers and scientists interested in high-dimensional statistics, and graduate students in broad data science disciplines. Key features:
* A general framework for learning sparse graphical models with conditional independence tests
* Complete treatments for different types of data: Gaussian, Poisson, multinomial, and mixed
* Unified treatments for data integration, network comparison, and covariate adjustment
* Unified treatments for missing data and heterogeneous data
* Efficient methods for joint estimation of multiple graphical models
* Effective methods of high-dimensional variable selection
* Effective methods of high-dimensional inference
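As an illustration of that embarrassingly parallel structure, the sketch below dispatches Fisher z-tests of partial correlation (a standard Gaussian conditional independence test, not necessarily the book's exact procedure) to a process pool; the data and conditioning sets are illustrative.

```python
# Fisher z-tests of conditional independence (i _||_ j | S) for Gaussian
# data, run embarrassingly in parallel over many (i, j, S) triples.
import numpy as np
from math import sqrt, log, erf
from multiprocessing import Pool

def ci_pvalue(args):
    data, i, j, S = args
    sub = np.corrcoef(data[:, [i, j] + list(S)], rowvar=False)
    prec = np.linalg.inv(sub)
    r = -prec[0, 1] / sqrt(prec[0, 0] * prec[1, 1])   # partial correlation
    z = sqrt(data.shape[0] - len(S) - 3) * 0.5 * log((1 + r) / (1 - r))
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))           # standard normal CDF
    return i, j, 2 * (1 - phi)                        # two-sided p-value

if __name__ == "__main__":
    data = np.random.default_rng(0).standard_normal((500, 5))
    tasks = [(data, i, j, [k]) for i in range(5) for j in range(i + 1, 5)
             for k in range(5) if k not in (i, j)]
    with Pool() as pool:                              # tests run in parallel
        results = pool.map(ci_pvalue, tasks)
    print(results[0])                                 # (i, j, p-value)
```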
In real-life decision-making situations it is necessary to make decisions with incomplete information and oftentimes uncertain results. In "Decision-Making Under Uncertainty," Dr. Chacko applies his years of statistical research and experience to the analysis of twenty-four real-life decision-making situations, both those with few data points (e.g., the Cuban Missile Crisis) and those with many data points (e.g., aspirin for heart attack prevention). These situations encompass decision-making in a variety of business, social and political, physical and biological, and military environments. Though different, all of them have one characteristic in common: their outcomes are uncertain, unknown, and unknowable. Chacko demonstrates how the decision-maker can reduce uncertainty by choosing probable outcomes using the statistical methods he introduces. This detailed volume develops standard statistical concepts (t, chi-squared, the normal distribution, ANOVA) as well as less familiar concepts (logical probability, subjective probability, Bayesian inference, the Penalty for Non-Fulfillment, the Bluff-Threats Matrix, etc.). Chacko also offers a thorough discussion of the underlying theoretical principles. Each chapter ends with a set of questions, three quarters of which focus on concepts, formulation, conclusions, resource commitments, and caveats, and one quarter on computations. Ideal for the practitioner, the work is also designed to serve as the primary text for graduate or advanced undergraduate courses in statistics and decision science.
This book presents the proceedings of the international conference Particle Systems and Partial Differential Equations I, which took place at the Centre of Mathematics of the University of Minho, Braga, Portugal, from the 5th to the 7th of December, 2012. The purpose of the conference was to bring together world leaders to discuss their topics of expertise and to present some of their latest research developments in those fields. Among the participants were researchers in probability, partial differential equations and kinetic theory. The aim of the meeting was to present to a varied public the subject of interacting particle systems, its motivation from the viewpoint of physics and its relation to partial differential equations and kinetic theory, and to stimulate discussions and possibly new collaborations among researchers with different backgrounds. The book contains lecture notes written by Francois Golse on the derivation of hydrodynamic equations (compressible and incompressible Euler and Navier-Stokes) from the Boltzmann equation, and several short papers written by some of the participants in the conference. Among the topics covered by the short papers are hydrodynamic limits; fluctuations; phase transitions; motions of shocks and anti-shocks in exclusion processes; large-number asymptotics for systems with self-consistent coupling; quasi-variational inequalities; unique continuation properties for PDEs; and others. The book will benefit probabilists, analysts and mathematicians who are interested in statistical physics, stochastic processes, partial differential equations and kinetic theory, along with physicists.
This book focuses on three core knowledge requirements for effective and thorough data analysis for solving business problems. These are a foundational understanding of: 1. statistical, econometric, and machine learning techniques; 2. data handling capabilities; 3. at least one programming language. Practical in orientation, the volume offers illustrative case studies throughout and examples using Python in the context of Jupyter notebooks. Covered topics include demand measurement and forecasting, predictive modeling, pricing analytics, customer satisfaction assessment, market and advertising research, and new product development and research. This volume will be useful to business data analysts, data scientists, and market research professionals, as well as aspiring practitioners in business data analytics. It can also be used in colleges and universities offering courses and certifications in business data analytics, data science, and market research.
Various general techniques have been developed for control and systems problems, many of which involve indirect methods. Because these indirect methods are not always effective, alternative approaches using direct methods are of particular interest and relevance given the advances of computing in recent years. The focus of this book, unique in the literature, is on direct methods, which are concerned with finding actual solutions to problems in control and systems and are often algorithmic in nature. Throughout the work, deterministic and stochastic problems are examined from a unified perspective and with considerable rigor. Emphasis is placed on the theoretical basis of the methods and their potential utility in a broad range of control and systems problems. The book is an excellent reference for graduate students, researchers, applied mathematicians, and control engineers and may be used as a textbook for a graduate course or seminar on direct methods in control.
Aside from distribution theory, projections and the singular value decomposition (SVD) are the two most important concepts for understanding the basic mechanism of multivariate analysis. The former underlies least squares estimation in regression analysis, which is essentially a projection of one subspace onto another, and the latter underlies principal component analysis, which seeks to find a subspace that captures the largest variability in the original space. This book is about projections and the SVD. A thorough discussion of generalized inverse (g-inverse) matrices is also given, because they are closely related to the former. The book provides systematic and in-depth accounts of these concepts from a unified viewpoint of linear transformations in finite dimensional vector spaces. More specifically, it shows that projection matrices (projectors) and g-inverse matrices can be defined in various ways so that a vector space is decomposed into a direct sum of (disjoint) subspaces. Projection Matrices, Generalized Inverse Matrices, and Singular Value Decomposition will be useful for researchers, practitioners, and students in applied mathematics, statistics, engineering, behaviormetrics, and other fields.
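The two pillars this paragraph names can be seen in a few lines: the least squares fit is the projection of y onto the column space of X, and the projector can be built from the SVD. The sketch below, with random illustrative data, is a generic demonstration rather than the book's notation.

```python
# Least squares as projection: P = U U^T (from the SVD of X) projects y
# onto col(X), and the projection equals the OLS fitted values.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))
y = rng.standard_normal(20)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
P = U @ U.T                     # orthogonal projector onto col(X)
y_hat = P @ y                   # fitted values: the projection of y

beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # ordinary least squares
print(np.allclose(y_hat, X @ beta))            # True
```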
You may like...

* Artificial Intelligence (AI) - Recent… by S. Kanimozhi Suguna, M. Dhivya, … (Hardcover, R5,094, Discovery Miles 50 940)
* Deep Learning in Gaming and Animations… by Vikas Chaudhary, Mool Chand Sharma, … (Hardcover, R3,570, Discovery Miles 35 700)
* Developing Windows-Based and Web-Enabled… by Nong Ye, Teresa Wu (Paperback, R2,009, Discovery Miles 20 090)
* Applications of Mathematical Modeling… by Madhu Jain, Dinesh K. Sharma, … (Hardcover, R4,933, Discovery Miles 49 330)
* Explainable Artificial Intelligence for… by Mohamed Lahby, Utku Kose, … (Hardcover, R3,134, Discovery Miles 31 340)