The aim of this book is to provide a strong theoretical support for understanding and analyzing the behavior of evolutionary algorithms, as well as to create a bridge between probability, set-oriented numerics and evolutionary computation. The volume comprises a collection of contributions presented at the EVOLVE 2011 international workshop, held in Luxembourg, May 25-27, 2011, coming from invited speakers as well as from selected regular submissions. The aim of EVOLVE is to unify the perspectives offered by probability, set-oriented numerics and evolutionary computation. EVOLVE focuses on challenging aspects that arise at the passage from theory to new paradigms and practice, elaborating on the foundations of evolutionary algorithms and theory-inspired methods merged with cutting-edge techniques that ensure performance guarantees. EVOLVE is also intended to foster a growing interest in robust and efficient methods with a sound theoretical background. The chapters present challenging theoretical findings and concrete optimization problems, as well as new perspectives. By gathering contributions from researchers with different backgrounds, the book is expected to lay the basis for a unified view and vocabulary in which theoretical advancements may echo across different domains.
This book provides a systematic in-depth analysis of nonparametric regression with random design. It covers almost all known estimates, such as classical local averaging estimates including kernel, partitioning and nearest neighbor estimates, least squares estimates using splines, neural networks and radial basis function networks, penalized least squares estimates, local polynomial kernel estimates, and orthogonal series estimates. The emphasis is on distribution-free properties of the estimates. Most consistency results are valid for all distributions of the data. Whenever it is not possible to derive distribution-free results, as in the case of the rates of convergence, the emphasis is on results which require as few constraints on distributions as possible, on distribution-free inequalities, and on adaptation. The relevant mathematical theory is systematically developed and requires only a basic knowledge of probability theory. The book will be a valuable reference for anyone interested in nonparametric regression and is a rich source of many useful mathematical techniques widely scattered in the literature. In particular, the book introduces the reader to empirical process theory, martingales and approximation properties of neural networks.
"Statistical Modeling, Analysis and Management of Fuzzy Data," or SMFD for short, is an important contribution to a better understanding of a basic issue, an issue which has been controversial, and still is, though to a lesser degree. In substance, the issue is: are fuzziness and randomness distinct or coextensive facets of uncertainty? Are the theories of fuzziness and randomness competitive or complementary? In SMFD, these and related issues are addressed with rigor, authority and insight by prominent contributors drawn, in the main, from the probability theory, fuzzy set theory and data analysis communities. First, a historical perspective. The almost simultaneous births, close to half a century ago, of statistically-based information theory and cybernetics were two major events which marked the beginning of the steep ascent of probability theory and statistics in visibility, influence and importance. I was a student when information theory and cybernetics were born, and what is etched in my memory are the fascinating lectures by Shannon and Wiener in which they sketched their visions of the coming era of machine intelligence and automation of reasoning and decision processes. What I heard in those lectures inspired one of my first papers (1950), "An Extension of Wiener's Theory of Prediction," and led to my life-long interest in probability theory and its applications to information processing, decision analysis and control.
In applications, and especially in mathematical finance, random time-dependent events are often modeled as stochastic processes. Assumptions are made about the structure of such processes, and serious researchers will want to justify those assumptions through the use of data. As statisticians are wont to say, "In God we trust; all others must bring data."
Scientists and engineers often have to deal with systems that exhibit random or unpredictable elements and must effectively evaluate probabilities in each situation. Computer simulations, while the traditional tool used to solve such problems, are limited in the scale and complexity of the problems they can solve. Formalized Probability Theory and Applications Using Theorem Proving discusses some of the limitations inherent in computer systems when applied to problems of probabilistic analysis, and presents a novel solution to these limitations, combining higher-order logic with computer-based theorem proving. Combining practical application with theoretical discussion, this book is an important reference tool for mathematicians, scientists, engineers, and researchers in all STEM fields.
This IMA Volume in Mathematics and its Applications, RANDOM SETS: THEORY AND APPLICATIONS, is based on the proceedings of a very successful 1996 three-day Summer Program on "Application and Theory of Random Sets." We would like to thank the scientific organizers: John Goutsias (Johns Hopkins University), Ronald P.S. Mahler (Lockheed Martin), and Hung T. Nguyen (New Mexico State University) for their excellent work as organizers of the meeting and for editing the proceedings. We also take this opportunity to thank the Army Research Office (ARO), the Office of Naval Research (ONR), and the Eagan, Minnesota Engineering Center of Lockheed Martin Tactical Defense Systems, whose financial support made the summer program possible. Avner Friedman, Robert Gulliver. PREFACE: "Later generations will regard set theory as a disease from which one has recovered." - Henri Poincare. Random set theory was independently conceived by D.G. Kendall and G. Matheron in connection with stochastic geometry. It was however G.
This monograph surveys the theory of quantitative homogenization for second-order linear elliptic systems in divergence form with rapidly oscillating periodic coefficients in a bounded domain. It begins with a review of the classical qualitative homogenization theory, and addresses the problem of convergence rates of solutions. The main body of the monograph investigates various interior and boundary regularity estimates that are uniform in the small parameter ε > 0. Additional topics include convergence rates for Dirichlet eigenvalues and asymptotic expansions of fundamental solutions, Green functions, and Neumann functions. The monograph is intended for advanced graduate students and researchers in the general areas of analysis and partial differential equations. It provides the reader with a clear and concise exposition of an important and currently active area of quantitative homogenization.
Showcases the excellent data science environment in Python. Provides examples for readers to replicate, adapt, extend, and improve. Covers the crucial knowledge needed by geographic data scientists.
Generalising classical concepts of probability theory, the investigation of operator (semi-)stable laws as possible limit distributions of operator-normalized sums of i.i.d. random variables on finite-dimensional vector spaces started in 1969. Currently, this theory is still in progress and promises interesting applications. Parallel to this, similar stability concepts for probabilities on groups were developed during recent decades. It turns out that the existence of suitable limit distributions has a strong impact on the structure of both the normalizing automorphisms and the underlying group. Indeed, investigations in limit laws led to contractable groups and - at least within the class of connected groups - to homogeneous groups, in particular to groups that are topologically isomorphic to a vector space. Moreover, it has been shown that (semi-)stable measures on groups have a vector space counterpart and vice versa. The purpose of this book is to describe the structure of limit laws and the limit behaviour of normalized i.i.d. random variables on groups and on finite-dimensional vector spaces from a common point of view. This will also shed new light on the classical situation. Chapter I provides an introduction to stability problems on vector spaces. Chapter II is concerned with parallel investigations for homogeneous groups, and in Chapter III the situation beyond homogeneous Lie groups is treated. Throughout, emphasis is laid on the description of features common to the group and vector space situations. Chapter I can be understood by graduate students with some background knowledge in infinite divisibility. Readers of Chapters II and III are assumed to be familiar with basic techniques from probability theory on locally compact groups.
This book gives a self-contained introduction to the dynamic martingale approach to marked point processes (MPP). Based on the notion of a compensator, this approach gives a versatile tool for analyzing and describing the stochastic properties of an MPP. In particular, the authors discuss the relationship of an MPP to its compensator and particular classes of MPP are studied in great detail. The theory is applied to study properties of dependent marking and thinning, to prove results on absolute continuity of point process distributions, to establish sufficient conditions for stochastic ordering between point and jump processes, and to solve the filtering problem for certain classes of MPPs.
Chances Are is the first book to make statistics accessible to everyone, regardless of how much math you remember from school. Do percentages confuse you? Can you tell the difference among a mean, median, and mode? Steve Slavin can help. With Chances Are, you can actually teach yourself all the statistics you will ever need.
Primary Audience for the Book * Specialists in numerical computations who are interested in algorithms with automatic result verification. * Engineers, scientists, and practitioners who desire results with automatic verification and who would therefore benefit from the experience of successful applications. * Students in applied mathematics and computer science who want to learn these methods. Goal of the Book This book contains surveys of applications of interval computations, i.e., applications of numerical methods with automatic result verification, that were presented at an international workshop on the subject in El Paso, Texas, February 23-25, 1995. The purpose of this book is to disseminate detailed and surveyed information about existing and potential applications of this new and growing field. Brief Description of the Papers At the most fundamental level, interval arithmetic operations work with sets: the result of a single arithmetic operation is the set of all possible results as the operands range over the domain. For example, [0.9,1.1] + [2.9,3.1] = [3.8,4.2], where [3.8,4.2] = {x + y | x ∈ [0.9,1.1] and y ∈ [2.9,3.1]}. The power of interval arithmetic comes from the fact that (i) the elementary operations and standard functions can be computed for intervals with formulas and subroutines; and (ii) directed roundings can be used, so that the images of these operations (e.g.
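The interval-addition example above can be sketched in a few lines. This is a minimal illustration, not the API of any particular verification library; the tuple representation, the function name, and the use of one-ulp outward widening as a stand-in for hardware directed roundings are all assumptions.

```python
import math

def interval_add(a, b):
    """Add two intervals given as (lo, hi) tuples.

    The result is guaranteed to enclose x + y for every x in a and
    y in b: each endpoint sum is widened outward by one ulp via
    math.nextafter, a crude substitute for the directed rounding
    modes a real interval library would set on the FPU.
    """
    lo = math.nextafter(a[0] + b[0], -math.inf)  # round lower bound down
    hi = math.nextafter(a[1] + b[1], math.inf)   # round upper bound up
    return (lo, hi)

lo, hi = interval_add((0.9, 1.1), (2.9, 3.1))
print(f"[{lo}, {hi}]")  # encloses the exact result [3.8, 4.2]
```

Even though each floating-point sum may be rounded, the outward widening keeps the enclosure property that gives interval arithmetic its automatic result verification.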
The aim of this graduate textbook is to provide a comprehensive advanced course in the theory of statistics covering those topics in estimation, testing, and large sample theory which a graduate student might typically need to learn as preparation for work on a Ph.D. An important strength of this book is that it provides a mathematically rigorous and even-handed account of both Classical and Bayesian inference in order to give readers a broad perspective. For example, the "uniformly most powerful" approach to testing is contrasted with available decision-theoretic approaches.
Machine learning is a novel discipline concerned with the analysis of large, multivariable data sets. It involves computationally intensive methods such as factor analysis, cluster analysis, and discriminant analysis. It is currently mainly the domain of computer scientists, and is already commonly used in the social sciences, marketing research, operational research and the applied sciences. It is virtually unused in clinical research. This is probably due to the traditional belief of clinicians in clinical trials, where multiple variables are equally balanced by the randomization process and are not further taken into account. In contrast, modern computer data files often involve hundreds of variables, such as genes and other laboratory values, and computationally intensive methods are required. This book was written as a hand-holding presentation accessible to clinicians, and as a must-read publication for those new to the methods.
An Introduction to R and Python for Data Analysis helps teach students to code in both R and Python simultaneously. As both R and Python can be used in similar manners, it is useful and efficient to learn both at the same time, helping lecturers and students teach and learn more and save time, whilst reinforcing the shared concepts and differences of the systems. This tandem learning is highly useful for students, helping them to become literate in both languages and develop skills which will be handy after their studies. This book presumes no prior experience with computing, and is intended to be used by students from a variety of backgrounds. The side-by-side formatting of this book helps introductory graduate students quickly grasp the basics of R and Python, with the exercises helping them teach themselves the skills they will need upon completion of their course, as employers now ask for competency in both R and Python. Teachers and lecturers will also find this book useful in their teaching, providing a singular work to help ensure their students are well trained in both computer languages. All data for exercises can be found here: https://github.com/tbrown122387/r_and_python_book/tree/master/data. Key features: - Teaches R and Python in a "side-by-side" way. - Examples are tailored to aspiring data scientists and statisticians, not software engineers. - Designed for introductory graduate students. - Does not assume any mathematical background.
Recent developments show that probability methods have become a very powerful tool in such different areas as statistical physics, dynamical systems, Riemannian geometry, group theory, harmonic analysis, graph theory and computer science. This volume is an outcome of the special semester 2001 - Random Walks held at the Schroedinger Institute in Vienna, Austria. It contains original research articles with non-trivial new approaches based on applications of random walks and similar processes to Lie groups, geometric flows, physical models on infinite graphs, random number generators, Lyapunov exponents, geometric group theory, spectral theory of graphs and potential theory. Highlights are the first survey of the theory of the stochastic Loewner evolution and its applications to percolation theory (a new rapidly developing and very promising subject at the crossroads of probability, statistical physics and harmonic analysis), surveys on expander graphs, random matrices and quantum chaos, cellular automata and symbolic dynamical systems, and others. The contributors to the volume are the leading experts in the area. The book will provide a valuable source both for active researchers and graduate students in the respective fields.
Over the last fifteen years fractal geometry has established itself as a substantial mathematical theory in its own right. The interplay between fractal geometry, analysis and stochastics has highly influenced recent developments in mathematical modeling of complicated structures. This process has been forced by problems in these areas related to applications in statistical physics, biomathematics and finance. This book is a collection of survey articles covering many of the most recent developments, like Schramm-Loewner evolution, fractal scaling limits, exceptional sets for percolation, and heat kernels on fractals. The authors were the keynote speakers at the conference "Fractal Geometry and Stochastics IV" at Greifswald in September 2008.
This monograph provides, for the first time, a most comprehensive statistical account of composite sampling as an ingenious environmental sampling method to help accomplish observational economy in a variety of environmental and ecological studies. Sampling consists of selection, acquisition, and quantification of a part of the population. But often what is desirable is not affordable, and what is affordable is not adequate. How do we deal with this dilemma? Operationally, composite sampling recognizes the distinction between selection, acquisition, and quantification. In certain applications, it is a common experience that the costs of selection and acquisition are not very high, but the cost of quantification, or measurement, is substantially high. In such situations, one may select a sample sufficiently large to satisfy the requirements of representativeness and precision and then, by combining several sampling units into composites, reduce the cost of measurement to an affordable level. Thus composite sampling offers an approach to deal with the classical dilemma of desirable versus affordable sample sizes, when conventional statistical methods fail to resolve the problem. Composite sampling, at least under idealized conditions, incurs no loss of information for estimating the population means. But an important limitation of the method has been the loss of information on individual sample values, such as extremely large values. In many of the situations where individual sample values are of interest or concern, composite sampling methods can be suitably modified to retrieve the information on individual sample values that may be lost due to compositing. In this monograph, we present statistical solutions to these and other issues that arise in the context of applications of composite sampling.
This book is mainly based on the renowned Cramér-Chernoff theorem, which deals with the 'rough' logarithmic asymptotics of the distribution of sums of independent, identically distributed random variables. The authors primarily approach the extensions of this theory to dependent, and in particular non-Markovian, cases on function spaces. Recurrent algorithms of identification and adaptive control form the main examples behind the large deviation problems in this volume. The first part of the book exploits some ideas and concepts of the martingale approach, especially the concept of the stochastic exponential. The second part of the book covers Freidlin's approach, based on Frobenius-type theorems for positive operators, which prove to be effective for the cases in consideration.
Bayesian nonparametrics has grown tremendously in the last three decades, especially in the last few years. This book is the first systematic treatment of Bayesian nonparametric methods and the theory behind them. While the book is of special interest to Bayesians, it will also appeal to statisticians in general because Bayesian nonparametrics offers a whole continuous spectrum of robust alternatives to purely parametric and purely nonparametric methods of classical statistics. The book is primarily aimed at graduate students and can be used as the text for a graduate course in Bayesian nonparametrics. Though the emphasis of the book is on nonparametrics, there is a substantial chapter on asymptotics of classical Bayesian parametric models. Jayanta Ghosh has been Director and Jawaharlal Nehru Professor at the Indian Statistical Institute and President of the International Statistical Institute. He is currently professor of statistics at Purdue University. He has been editor of Sankhya and served on the editorial boards of several journals including the Annals of Statistics. Apart from Bayesian analysis, his interests include asymptotics, stochastic modeling, high dimensional model selection, reliability and survival analysis and bioinformatics. R.V. Ramamoorthi is professor at the Department of Statistics and Probability at Michigan State University. He has published papers in the areas of sufficiency invariance, comparison of experiments, nonparametric survival analysis and Bayesian analysis. In addition to Bayesian nonparametrics, he is currently interested in Bayesian networks and graphical models. He is on the editorial board of Sankhya.
A "health disparity" refers to a higher burden of illness, injury, disability, or mortality experienced by one group relative to another. These disparities may be due to many factors including age, income, race, etc. This book will focus on their estimation, ranging from classical approaches including the quantification of a disparity, to more formal modelling, to modern approaches involving more flexible computational approaches. Features: * Presents an overview of methods and applications of health disparity estimation * First book to synthesize research in this field in a unified statistical framework * Covers classical approaches, and builds to more modern computational techniques * Includes many worked examples and case studies using real data * Discusses available software for estimation The book is designed primarily for researchers and graduate students in biostatistics, data science, and computer science. It will also be useful to many quantitative modelers in genetics, biology, sociology, and epidemiology.
This introductory textbook is designed for a one-semester course on queueing theory that does not require a course on stochastic processes as a prerequisite. By integrating the necessary background on stochastic processes with the analysis of models, the work provides a sound foundational introduction to the modeling and analysis of queueing systems for a broad interdisciplinary audience of students in mathematics, statistics, and applied disciplines such as computer science, operations research, and engineering. This edition includes additional topics in methodology and applications. Key features: * An introductory chapter including a historical account of the growth of queueing theory over more than 100 years. * A modeling-based approach with emphasis on identification of models. * Rigorous treatment of the foundations of basic models commonly used in applications, with appropriate references for advanced topics. * A chapter on the matrix-analytic method as an alternative to the traditional methods of analysis of queueing systems. * A comprehensive treatment of statistical inference for queueing systems. * Modeling exercises and review exercises where appropriate. The second edition of An Introduction to Queueing Theory may be used as a textbook by first-year graduate students in fields such as computer science, operations research, industrial and systems engineering, as well as related fields such as manufacturing and communications engineering. Upper-level undergraduate students in mathematics, statistics, and engineering may also use the book in an introductory course on queueing theory. With its rigorous coverage of basic material and extensive bibliography of the queueing literature, the work may also be useful to applied scientists and practitioners as a self-study reference for applications and further research.
"...This book has brought a freshness and novelty as it deals mainly with modeling and analysis in applications as well as with statistical inference for queueing problems. With his 40 years of valuable experience in teaching and high level research in this subject area, Professor Bhat has been able to achieve what he aimed: to make [the work] somewhat different in content and approach from other books." - Assam Statistical Review of the first edition