This book gives a comprehensive review of results for associated sequences and demimartingales developed so far, with special emphasis on demimartingales and related processes. Probabilistic properties of associated sequences, demimartingales and related processes are discussed in the first six chapters. Applications of some of these results to some problems in nonparametric statistical inference for such processes are investigated in the last three chapters.
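For orientation, the standard definitions behind these terms (stated here for the reader's convenience; they are not quoted from the book): random variables $X_1, \dots, X_n$ are associated if $\operatorname{Cov}\big(f(X_1,\dots,X_n),\, g(X_1,\dots,X_n)\big) \ge 0$ for all coordinatewise nondecreasing $f, g$ for which the covariance exists, and an $L^1$ sequence $S_1, S_2, \dots$ is a demimartingale if

$$\mathbb{E}\big[(S_{j+1} - S_j)\, f(S_1, \dots, S_j)\big] \ge 0$$

for every $j \ge 1$ and every coordinatewise nondecreasing $f$ for which the expectation is defined.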
The finite element method is a numerical method widely used in engineering. This reference text is the first to discuss finite element methods for structures with large stochastic variations. Graduate students, lecturers, and researchers in mathematics, engineering, and scientific computation will find this a very useful reference.
VLSI CAD has greatly benefited from the use of reduced ordered Binary Decision Diagrams (BDDs) and the clausal representation as a problem of Boolean Satisfiability (SAT), e.g. in logic synthesis, verification or design-for-testability. In recent practical applications, BDDs are optimized with respect to new objective functions for design space exploration. The latest trends show a growing number of proposals to fuse the concepts of BDD and SAT. This book gives a modern presentation of the established as well as of recent concepts. Latest results in BDD optimization are given, covering different aspects of paths in BDDs and the use of efficient lower bounds during optimization. The presented algorithms include Branch and Bound and the generic A*-algorithm as efficient techniques to explore large search spaces. The A*-algorithm originates from Artificial Intelligence (AI), and the EDA community was unaware of this concept for a long time. Recently, the A*-algorithm has been introduced as a new paradigm to explore design spaces in VLSI CAD. Besides AI search techniques, the book also discusses the relation to another field of activity bordering on VLSI CAD and BDD optimization: the clausal representation as a SAT problem.
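For readers unfamiliar with A*, here is a minimal, generic sketch of the algorithm on a toy grid (an illustration only; the book applies A*-based search to BDD optimization, not to this example):

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Generic A*: expand nodes in order of f = g + h, where the
    heuristic h is admissible (never overestimates remaining cost)."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

# Toy instance: step right or up on a 3x3 grid from (0,0) to (2,2);
# Manhattan distance is an admissible heuristic here.
def neighbors(p):
    x, y = p
    for q in ((x + 1, y), (x, y + 1)):
        if q[0] <= 2 and q[1] <= 2:
            yield q, 1

h = lambda p: (2 - p[0]) + (2 - p[1])
print(a_star((0, 0), (2, 2), neighbors, h))   # (4, [(0, 0), ..., (2, 2)])
```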
This book provides a comprehensive review of environmental benefit transfer methods, issues and challenges, covering topics relevant to researchers and practitioners. Early chapters provide accessible introductory materials suitable for non-economists. These chapters also detail how benefit transfer is used within the policy process. Later chapters cover more advanced topics suited to valuation researchers, graduate students and those with similar knowledge of economic and statistical theory and methods. This book provides the most complete coverage of environmental benefit transfer methods available in a single location. The book targets a wide audience, including undergraduate and graduate students, practitioners in economics and other disciplines looking for a one-stop handbook covering benefit transfer topics and those who wish to apply or evaluate benefit transfer methods. It is designed for those both with and without training in economics
Multiparameter processes extend the existing one-parameter theory of random processes in an elegant way, and have found connections to diverse disciplines such as probability theory, real and functional analysis, group theory, analytic number theory, and group renormalization in mathematical physics, to name a few. This book lays the foundation of aspects of the rapidly developing subject of random fields, and is designed for a second graduate course in probability and beyond. Its intended audience is pure, as well as applied, mathematicians.
In earlier forewords to the books in this series on Discrete Event Dynamic Systems (DEDS), we have dwelt on the pervasive nature of DEDS in our human-made world. From manufacturing plants to computer/communication networks, from traffic systems to command-and-control, modern civilization cannot function without the smooth operation of such systems. Yet mathematical tools for the analysis and synthesis of DEDS are nascent when compared to the well developed machinery of the continuous variable dynamic systems characterized by differential equations. The performance evaluation tool of choice for DEDS is discrete event simulation, both on account of its generality and its explicit incorporation of randomness. As is well known to students of simulation, the heart of the random event simulation is the uniform random number generator. Not so well known to the practitioners are the philosophical and mathematical bases of generating "random" number sequences from deterministic algorithms. This editor can still recall his own painful introduction to the issues during the early 80's when he attempted to do the first perturbation analysis (PA) experiments on a personal computer which, unbeknownst to him, had a random number generator with a period of only 32,768 numbers. It is no exaggeration to say that the development of PA was derailed for some time due to this ignorance of the fundamentals of random number generation.
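The pitfall in that anecdote is easy to reproduce. Below is a toy linear congruential generator with modulus 2**15 = 32,768 (the constants are hypothetical, chosen only to illustrate how short a deterministic "random" sequence's period can be; real generators need far larger state):

```python
# Toy linear congruential generator: x <- (a*x + c) mod m.
def lcg(seed, a=1103515245, c=12345, m=2**15):
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

# Count draws until the first output value recurs: that is the period.
gen = lcg(seed=1)
first = next(gen)
period = 1
for value in gen:
    if value == first:
        break
    period += 1
print(period)   # 32768: the generator cycles after at most 2**15 draws
```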
Steady progress in recent years has been made in understanding the special mathematical features of certain exactly solvable models in statistical mechanics and quantum field theory, including the scaling limits of the 2-D Ising (lattice) model, and more generally, a class of 2-D quantum fields known as holonomic fields. New results have made it possible to obtain a detailed nonperturbative analysis of the multi-spin correlations. In particular, the book focuses on deformation analysis of the scaling functions of the Ising model, and will appeal to graduate students, mathematicians, and physicists interested in the mathematics of statistical mechanics and quantum field theory.
Robust statistics is an extension of classical statistics that specifically takes into account the concept that the underlying models used to describe data are only approximate. Its basic philosophy is to produce statistical procedures which are stable when the data do not exactly match the postulated models, as is the case for example with outliers. "Robust Methods in Biostatistics" proposes robust alternatives to common methods used in statistics in general and in biostatistics in particular, and illustrates their use on many biomedical datasets. The methods introduced include robust estimation, testing, model selection, model checking and diagnostics. They are developed for the following general classes of models: linear regression, generalized linear models, linear mixed models, marginal longitudinal data models, and the Cox survival analysis model. The methods are introduced both at a theoretical and applied level within the framework of each general class of models, with a particular emphasis put on practical data analysis. This book is of particular use for research students, applied statisticians and practitioners in the health field interested in more stable statistical techniques. An accompanying website provides R code for computing all of the methods described, as well as for analyzing all the datasets used in the book.
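As a flavour of what robust estimation means in practice, here is a minimal sketch of a Huber M-estimator of location computed by iteratively reweighted means (a generic textbook construction in Python, not code from the book's website):

```python
import numpy as np

def huber_location(x, k=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimate of location via iteratively reweighted means.

    Observations far from the current estimate (beyond k scale units)
    are downweighted, so a few outliers cannot drag the estimate the
    way they drag the ordinary sample mean."""
    x = np.asarray(x, dtype=float)
    mu = np.median(x)                             # robust starting point
    scale = np.median(np.abs(x - mu)) / 0.6745    # MAD estimate of sigma
    for _ in range(max_iter):
        r = np.abs(x - mu) / scale
        w = np.ones_like(r)
        w[r > k] = k / r[r > k]                   # Huber weights
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            return mu_new
        mu = mu_new
    return mu

data = np.array([4.8, 5.1, 5.0, 4.9, 5.2, 25.0])  # one gross outlier
print(np.mean(data))           # ~8.33, pulled toward the outlier
print(huber_location(data))    # stays near 5
```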
This book deals with some of the fundamental issues of risk assessment in grid computing environments. It describes the development of a hybrid probabilistic and possibilistic model for assessing the success of a computing task in such an environment.
Single Subject Designs in Biomedicine draws upon the rich history of single case research within the educational and behavioral research settings and extends the application to the field of biomedicine. Biomedical illustrations are used to demonstrate the processes of designing, implementing, and evaluating a single subject design. Strengths and limitations of various methodologies are presented, along with specific clinical areas of application in which these designs would be appropriate. Statistical and visual techniques for data analysis are also discussed. The breadth and depth of information provided is suitable for medical students in research oriented courses, primary care practitioners and medical specialists seeking to apply methods of evidence-based practice to improve patient care, and medical researchers who are expanding their methodological expertise to include single subject designs. Increasing awareness of the utility of the single subject design could enhance treatment approach and evaluation both in biomedical research and medical care settings.
The contributions in this book focus on a variety of topics related to discrepancy theory, comprising Fourier techniques to analyze discrepancy, low discrepancy point sets for quasi-Monte Carlo integration, probabilistic discrepancy bounds, dispersion of point sets, pair correlation of sequences, integer points in convex bodies, discrepancy with respect to geometric shapes other than rectangular boxes, and also open problems in discrepancy theory.
The concept of ridges has appeared numerous times in the image processing literature. Sometimes the term is used in an intuitive sense. Other times a concrete definition is provided. In almost all cases the concept is used for very specific applications. When analyzing images or data sets, it is very natural for a scientist to measure critical behavior by considering maxima or minima of the data. These critical points are relatively easy to compute. Numerical packages always provide support for root finding or optimization, whether it be through bisection, Newton's method, conjugate gradient method, or other standard methods. It has not been natural for scientists to consider critical behavior in a higher-order sense. The concept of ridge as a manifold of critical points is a natural extension of the concept of local maximum as an isolated critical point. However, almost no attention has been given to formalizing the concept. There is a need for a formal development. There is a need for understanding the computation issues that arise in the implementations. The purpose of this book is to address both needs by providing a formal mathematical foundation and a computational framework for ridges. The intended audience for this book includes anyone interested in exploring the usefulness of ridges in data analysis.
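As a simple instance of the critical-point computations mentioned above, here is a sketch of Newton's method applied to f'(x) = 0 to locate an isolated local maximum (the function is hypothetical; the ridges studied in the book generalize such isolated critical points to manifolds):

```python
def newton_critical_point(df, d2f, x0, tol=1e-10, max_iter=50):
    """Solve df(x) = 0 by Newton iteration: x <- x - df(x)/d2f(x)."""
    x = x0
    for _ in range(max_iter):
        step = df(x) / d2f(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# f(x) = -(x - 2)**2 + 3 has an isolated local maximum at x = 2.
df = lambda x: -2.0 * (x - 2.0)   # first derivative
d2f = lambda x: -2.0              # second derivative (negative => maximum)
print(newton_critical_point(df, d2f, x0=0.0))   # 2.0
```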
Computational inference is based on an approach to statistical methods that uses modern computational power to simulate distributional properties of estimators and test statistics. This book describes computationally intensive statistical methods in a unified presentation, emphasizing techniques, such as the PDF decomposition, that arise in a wide range of methods.
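A minimal sketch of that simulation idea, using a generic bootstrap of a median's sampling distribution (the exponential data and the choice of the median are hypothetical illustrations, not examples from the book):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=50)   # hypothetical sample

# Bootstrap: simulate the sampling distribution of the median by
# resampling the observed data with replacement many times.
boot_medians = np.array([
    np.median(rng.choice(data, size=data.size, replace=True))
    for _ in range(5000)
])
se = boot_medians.std(ddof=1)                  # simulated standard error
ci = np.percentile(boot_medians, [2.5, 97.5])  # percentile 95% interval
print(f"median = {np.median(data):.3f}, se = {se:.3f}, 95% CI = {ci}")
```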
Noted for its integration of real-world data and case studies, this text offers sound coverage of the theoretical aspects of mathematical statistics. The authors demonstrate how and when to use statistical methods, while reinforcing the calculus that students have mastered in previous courses. Throughout the Fifth Edition, the authors have added and updated examples and case studies, while also refining existing features that show a clear path from theory to practice.
The main theme of this monograph is "comparative statistical inference." While the topics covered have been carefully selected (they are, for example, restricted to problems of statistical estimation), my aim is to provide ideas and examples which will assist a statistician, or a statistical practitioner, in comparing the performance one can expect from using either Bayesian or classical (aka, frequentist) solutions in estimation problems. Before investing the hours it will take to read this monograph, one might well want to know what sets it apart from other treatises on comparative inference. The two books that are closest to the present work are the well-known tomes by Barnett (1999) and Cox (2006). These books do indeed consider the conceptual and methodological differences between Bayesian and frequentist methods. What is largely absent from them, however, are answers to the question: "which approach should one use in a given problem?" It is this latter issue that this monograph is intended to investigate. There are many books on Bayesian inference, including, for example, the widely used texts by Carlin and Louis (2008) and Gelman, Carlin, Stern and Rubin (2004). These books differ from the present work in that they begin with the premise that a Bayesian treatment is called for and then provide guidance on how a Bayesian analysis should be executed. Similarly, there are many books written from a classical perspective.
During the last decades, there has been an explosion in computation and information technology. This development comes with an expansion of complex observational studies and clinical trials in a variety of fields such as medicine, biology, epidemiology, sociology, and economics among many others, which involve collection of large amounts of data on subjects or organisms over time. The goal of such studies can be formulated as estimation of a finite dimensional parameter of the population distribution corresponding to the observed time-dependent process. Such estimation problems arise in survival analysis, causal inference and regression analysis. This book provides a fundamental statistical framework for the analysis of complex longitudinal data. It provides the first comprehensive description of optimal estimation techniques based on time-dependent data structures subject to informative censoring and treatment assignment in so-called semiparametric models. Semiparametric models are particularly attractive since they allow the presence of large unmodeled nuisance parameters. These techniques include estimation of regression parameters in the familiar (multivariate) generalized linear regression and multiplicative intensity models. They go beyond standard statistical approaches by incorporating all the observed data to allow for informative censoring, to obtain maximal efficiency, and by developing estimators of causal effects. It can be used to teach master's and Ph.D. students in biostatistics and statistics and is suitable for researchers in statistics with a strong interest in the analysis of complex longitudinal data.
Microarrays for simultaneous measurement of abundance of RNA species are used in fundamental biology as well as in medical research. Statistically, a microarray may be considered as an observation of very high dimensionality, equal to the number of expression levels measured on it. In "Statistical Methods for Microarray Data Analysis: Methods and Protocols," expert researchers in the field detail many methods and techniques used to study microarrays, guiding the reader from microarray technology to statistical problems of specific multivariate data analysis. Written in the highly successful "Methods in Molecular Biology" series format, the chapters include the kind of detailed description and implementation advice that is crucial for getting optimal results in the laboratory. Thorough and intuitive, "Statistical Methods for Microarray Data Analysis: Methods and Protocols" aids scientists in continuing to study microarrays and the most current statistical methods.
Covering CUSUMs from an application-oriented viewpoint, while also providing the essential theoretical underpinning, this is an accessible guide for anyone with a basic statistical training. The text is aimed at quality practitioners, teachers and students of quality methodologies, and people interested in analysis of time-ordered data. Further support is available from a Web site containing CUSUM software and data sets.
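For readers new to the technique, here is a minimal sketch of the standard tabular CUSUM for detecting a shift in a process mean (the parameters and data are illustrative; this is not the software from the book's Web site):

```python
import numpy as np

def cusum(x, target, k, h):
    """Tabular one-sided CUSUMs for detecting a shift in the mean.

    k is the reference value (slack) and h the decision interval,
    both in the same units as the data."""
    up = down = 0.0
    alarms = []
    for i, xi in enumerate(x):
        up = max(0.0, up + (xi - target) - k)      # upward-shift statistic
        down = max(0.0, down + (target - xi) - k)  # downward-shift statistic
        if up > h or down > h:
            alarms.append(i)
    return alarms

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(10, 1, 25),   # in control at the target
                    rng.normal(11, 1, 25)])  # mean shifts up by 1 sigma
print(cusum(x, target=10.0, k=0.5, h=5.0))   # alarm indices after the shift
```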
High dimensional probability, in the sense that encompasses the topics represented in this volume, began about thirty years ago with research in two related areas: limit theorems for sums of independent Banach space valued random vectors and general Gaussian processes. An important feature in these past research studies has been the fact that they highlighted the essential probabilistic nature of the problems considered. In part, this was because, by working on a general Banach space, one had to discard the extra, and often extraneous, structure imposed by random variables taking values in a Euclidean space, or by processes being indexed by sets in R or R^d. Doing this led to striking advances, particularly in Gaussian process theory. It also led to the creation or introduction of powerful new tools, such as randomization, decoupling, moment and exponential inequalities, chaining, isoperimetry and concentration of measure, which apply to areas well beyond those for which they were created. The general theory of empirical processes, with its vast applications in statistics, the study of local times of Markov processes, certain problems in harmonic analysis, and the general theory of stochastic processes are just several of the broad areas in which Gaussian process techniques and techniques from probability in Banach spaces have made a substantial impact. Parallel to this work on probability in Banach spaces, classical probability and empirical process theory were enriched by the development of powerful results in strong approximations.
This book comprises nine selected works on numerical and computational methods for solving multiobjective optimization, game theory, and machine learning problems. It provides extended versions of selected papers from various fields of science such as computer science, mathematics and engineering that were presented at EVOLVE 2013 held in July 2013 at Leiden University in the Netherlands. The internationally peer-reviewed papers include original work on important topics in both theory and applications, such as the role of diversity in optimization, statistical approaches to combinatorial optimization, computational game theory, and cell mapping techniques for numerical landscape exploration. Applications focus on aspects including robustness, handling multiple objectives, and complex search spaces in engineering design and computational biology.
This volume has been created in honor of the seventieth birthday of Ted Harris, which was celebrated on January 11th, 1989. The papers represent the wide range of subfields of probability theory in which Ted has made profound and fundamental contributions. This breadth in Ted's research complicates the task of putting together in his honor a book with a unified theme. One common thread noted was the spatial, or geometric, aspect of the phenomena Ted investigated. This volume has been organized around that theme, with papers covering four major subject areas of Ted's research: branching processes, percolation, interacting particle systems, and stochastic flows. These four topics do not exhaust his research interests; his major work on Markov chains is commemorated in the standard terminology "Harris chain" and "Harris recurrent". The editors would like to take this opportunity to thank the speakers at the symposium and the contributors to this volume. Their enthusiastic support is a tribute to Ted Harris. We would like to express our appreciation to Annette Mosley for her efforts in typing the manuscripts and to Arthur Ogawa for typesetting the volume. Finally, we gratefully acknowledge the National Science Foundation and the University of Southern California for their financial support.
This volume is devoted to the most recent discoveries in mathematics and statistics. It also serves as a platform for knowledge and information exchange between experts from industrial and academic sectors. The book covers a wide range of topics, including mathematical analyses, probability, statistics, algebra, geometry, mathematical physics, wave propagation, stochastic processes, ordinary and partial differential equations, boundary value problems, linear operators, cybernetics and number and functional theory. It is a valuable resource for pure and applied mathematicians, statisticians, engineers and scientists.
The theory of Markov Decision Processes - also known under several other names including sequential stochastic optimization, discrete-time stochastic control, and stochastic dynamic programming - studies sequential optimization of discrete time stochastic systems. Fundamentally, this is a methodology that examines and analyzes a discrete-time stochastic system whose transition mechanism can be controlled over time. Each control policy defines a stochastic process and values of objective functions associated with this process. The objective is to select a "good" control policy. In real life, decisions that humans and computers make on all levels usually have two types of impacts: (i) they cost or save time, money, or other resources, or they bring revenues, and (ii) they have an impact on the future, by influencing the dynamics. In many situations, decisions with the largest immediate profit may not be good in view of future events. Markov Decision Processes (MDPs) model this paradigm and provide results on the structure and existence of good policies and on methods for their calculation. MDPs are attractive to many researchers because they are important both from the practical and the intellectual points of view. MDPs provide tools for the solution of important real-life problems; in particular, many business and engineering applications use MDP models. Analysis of various problems arising in MDPs leads to a large variety of interesting mathematical and computational problems. Accordingly, the Handbook of Markov Decision Processes is split into three parts: Part I deals with models with finite state and action spaces, Part II deals with infinite state problems, and Part III examines specific applications. Individual chapters are written by leading experts on the subject.
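To make the policy/value vocabulary concrete, here is a minimal value-iteration sketch for a hypothetical two-state, two-action MDP (an illustration only, not a model from the handbook):

```python
import numpy as np

# P[a][s][s'] = transition probability, R[a][s] = expected reward.
P = np.array([[[0.9, 0.1],     # action 0
               [0.2, 0.8]],
              [[0.5, 0.5],     # action 1
               [0.0, 1.0]]])
R = np.array([[1.0, 0.0],      # reward for action 0 in states 0, 1
              [2.0, 0.5]])     # reward for action 1 in states 0, 1
gamma = 0.9                    # discount factor

V = np.zeros(2)
for _ in range(1000):
    # Bellman backup: V(s) = max_a [ R(a,s) + gamma * sum_s' P(a,s,s') V(s') ]
    Q = R + gamma * (P @ V)    # Q[a, s]
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new
policy = Q.argmax(axis=0)      # a "good" stationary policy
print(V, policy)
```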
This book describes the properties of stochastic probabilistic models and develops the applied mathematics of stochastic point processes. It is useful to students and research workers in probability and statistics and also to research workers wishing to apply stochastic point processes.
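As a concrete taste of the subject, here is a minimal simulation of the simplest stochastic point process, a homogeneous Poisson process on [0, T] built from exponential inter-arrival times (a generic illustration, not an example from the book):

```python
import numpy as np

rng = np.random.default_rng(42)
lam, T = 2.0, 10.0                 # rate and time horizon (hypothetical)
# Inter-arrival times of a rate-lam Poisson process are i.i.d. Exp(lam);
# draw more than enough gaps, then keep the event times that land in [0, T].
gaps = rng.exponential(scale=1.0 / lam, size=int(lam * T * 3))
times = np.cumsum(gaps)
times = times[times <= T]          # event times of the point process
print(len(times), "events; expected about", lam * T)
```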
You may like...
- Integrated Population Biology and… by Arni S.R. Srinivasa Rao, C.R. Rao (Hardcover): R6,219 / Discovery Miles 62 190
- Stochastic Processes and Their… by Christo Ananth, N. Anbazhagan, … (Hardcover): R6,687 / Discovery Miles 66 870
- Statistics For Business And Economics by David Anderson, James Cochran, … (Paperback) (1)
- Numbers, Hypotheses & Conclusions - A… by Colin Tredoux, Kevin Durrheim (Paperback)