Recent developments show that probability methods have become a very powerful tool in such different areas as statistical physics, dynamical systems, Riemannian geometry, group theory, harmonic analysis, graph theory and computer science. This volume is an outcome of the special semester "Random Walks", held in 2001 at the Erwin Schrödinger Institute in Vienna, Austria. It contains original research articles with non-trivial new approaches based on applications of random walks and similar processes to Lie groups, geometric flows, physical models on infinite graphs, random number generators, Lyapunov exponents, geometric group theory, spectral theory of graphs and potential theory. Highlights are the first survey of the theory of the stochastic Loewner evolution and its applications to percolation theory (a new, rapidly developing and very promising subject at the crossroads of probability, statistical physics and harmonic analysis), surveys on expander graphs, random matrices and quantum chaos, cellular automata and symbolic dynamical systems, and others. The contributors to the volume are the leading experts in the area. The book will provide a valuable source both for active researchers and graduate students in the respective fields.
Study smarter and stay on top of your probability course with the bestselling Schaum's Outline, now with the new Schaum's app and website! Schaum's Outline of Probability, Third Edition is the go-to study guide for help in probability courses. It's ideal for undergraduates, graduate students and professionals needing a tool for review. With an outline format that facilitates quick and easy review and mirrors the course in scope and sequence, this book helps you understand basic concepts and get the extra practice you need to excel in the course. Schaum's Outline of Probability, Third Edition supports the bestselling textbooks and is useful for a variety of classes, including Elementary Probability and Statistics, Data Analysis, Finite Mathematics, and many other courses. You'll find coverage of finite and countable sets, binomial coefficients, axioms of probability, conditional probability, expectation of a finite random variable, the Poisson distribution, and probability vectors and stochastic matrices. Also included: finite stochastic and tree diagrams, Chebyshev's inequality and the law of large numbers, calculations of binomial probabilities using the normal approximation, and regular Markov processes and stationary state distributions. Features:
* NEW to this edition: the new Schaum's app and website!
* NEW to this edition: 20 new problem-solving videos online
* 430 solved problems
* Outline format that provides a concise guide to the standard college course in probability
* Clear, concise explanations of probability concepts
* Supports these major texts: Elementary Statistics: A Step by Step Approach (Bluman), Mathematics with Applications (Hungerford), and Discrete Mathematics and Its Applications (Rosen)
* Appropriate for the following courses: Elementary Probability and Statistics, Data Analysis, Finite Mathematics, Introduction to Mathematical Statistics, Mathematics for Biological Sciences, Introductory Statistics, Discrete Mathematics, Probability for Applied Science, and Introduction to Probability Theory
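Two of the techniques listed above are concrete enough for a quick illustration. The following plain-Python sketch (not taken from the Outline; the numbers are invented) checks Chebyshev's inequality against the exact tail of a Binomial(100, 0.5) variable, and compares an exact binomial probability with its normal approximation using a continuity correction.

```python
import math

# X ~ Binomial(n, p): mean and standard deviation
n, p = 100, 0.5
mu = n * p                            # 50.0
sigma = math.sqrt(n * p * (1 - p))    # 5.0

# Chebyshev: P(|X - mu| >= k*sigma) <= 1/k^2, for any distribution
k = 2.0
exact_tail = sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
                 for i in range(n + 1) if abs(i - mu) >= k * sigma)
chebyshev_bound = 1 / k**2            # 0.25; the bound holds but is loose

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Normal approximation with continuity correction:
# P(X <= 55) is approximately Phi((55.5 - mu) / sigma)
exact = sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(56))
approx = phi((55.5 - mu) / sigma)
```

For these values the approximation agrees with the exact binomial probability to about three decimal places, and the Chebyshev bound of 0.25 comfortably covers the true two-sided tail.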
This monograph provides, for the first time, a comprehensive statistical account of composite sampling as an ingenious environmental sampling method to help accomplish observational economy in a variety of environmental and ecological studies. Sampling consists of the selection, acquisition, and quantification of a part of the population. But often what is desirable is not affordable, and what is affordable is not adequate. How do we deal with this dilemma? Operationally, composite sampling recognizes the distinction between selection, acquisition, and quantification. In certain applications, it is a common experience that the costs of selection and acquisition are not very high, but the cost of quantification, or measurement, is substantially higher. In such situations, one may select a sample sufficiently large to satisfy the requirements of representativeness and precision and then, by combining several sampling units into composites, reduce the cost of measurement to an affordable level. Thus composite sampling offers an approach to the classical dilemma of desirable versus affordable sample sizes, when conventional statistical methods fail to resolve the problem. Composite sampling, at least under idealized conditions, incurs no loss of information for estimating the population mean. But an important limitation of the method has been the loss of information on individual sample values, such as an extremely large value. In many of the situations where individual sample values are of interest or concern, composite sampling methods can be suitably modified to retrieve the information on individual sample values that may be lost due to compositing. In this monograph, we present statistical solutions to these and other issues that arise in the context of applications of composite sampling.
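Under the idealized, perfect-mixing conditions described above, compositing loses nothing for estimating the population mean. The sketch below (a minimal illustration, not taken from the monograph; the lognormal "concentrations" and sample sizes are invented) selects 200 units, pools them into composites of 5, and shows that the mean of the 40 composite measurements equals the mean of the 200 individual values.

```python
import random

random.seed(0)

# Hypothetical population of contaminant concentrations
population = [random.lognormvariate(0, 1) for _ in range(10_000)]

# Select n field samples, then pool them into composites of size k,
# so only n // k laboratory measurements are needed.
n, k = 200, 5
selected = random.sample(population, n)
composites = [selected[i:i + k] for i in range(0, n, k)]

# Each measured composite value is the mean of its constituents
# (the idealized, perfect-mixing assumption).
measurements = [sum(c) / k for c in composites]

estimate_from_composites = sum(measurements) / len(measurements)
estimate_from_individuals = sum(selected) / n
```

Forty measurements stand in for two hundred, yet the two mean estimates coincide; what is lost is the ability to identify individual extreme values, which is exactly the limitation the monograph addresses.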
This book is mainly based on the renowned Cramér-Chernoff theorem, which deals with the 'rough' logarithmic asymptotics of the distribution of sums of independent, identically distributed random variables. The authors primarily address the extensions of this theory to dependent, and in particular non-Markovian, cases on function spaces. Recurrent algorithms of identification and adaptive control form the main examples behind the large deviation problems in this volume. The first part of the book exploits some ideas and concepts of the martingale approach, especially the concept of the stochastic exponential. The second part of the book covers Freidlin's approach, based on Frobenius-type theorems for positive operators, which prove to be effective for the cases in consideration.
Bayesian nonparametrics has grown tremendously in the last three decades, especially in the last few years. This book is the first systematic treatment of Bayesian nonparametric methods and the theory behind them. While the book is of special interest to Bayesians, it will also appeal to statisticians in general because Bayesian nonparametrics offers a whole continuous spectrum of robust alternatives to purely parametric and purely nonparametric methods of classical statistics. The book is primarily aimed at graduate students and can be used as the text for a graduate course in Bayesian nonparametrics. Though the emphasis of the book is on nonparametrics, there is a substantial chapter on asymptotics of classical Bayesian parametric models. Jayanta Ghosh has been Director and Jawaharlal Nehru Professor at the Indian Statistical Institute and President of the International Statistical Institute. He is currently professor of statistics at Purdue University. He has been editor of Sankhya and served on the editorial boards of several journals including the Annals of Statistics. Apart from Bayesian analysis, his interests include asymptotics, stochastic modeling, high dimensional model selection, reliability and survival analysis and bioinformatics. R.V. Ramamoorthi is professor at the Department of Statistics and Probability at Michigan State University. He has published papers in the areas of sufficiency invariance, comparison of experiments, nonparametric survival analysis and Bayesian analysis. In addition to Bayesian nonparametrics, he is currently interested in Bayesian networks and graphical models. He is on the editorial board of Sankhya.
Over the last fifteen years fractal geometry has established itself as a substantial mathematical theory in its own right. The interplay between fractal geometry, analysis and stochastics has highly influenced recent developments in mathematical modeling of complicated structures. This process has been forced by problems in these areas related to applications in statistical physics, biomathematics and finance. This book is a collection of survey articles covering many of the most recent developments, like Schramm-Loewner evolution, fractal scaling limits, exceptional sets for percolation, and heat kernels on fractals. The authors were the keynote speakers at the conference "Fractal Geometry and Stochastics IV" at Greifswald in September 2008.
A "health disparity" refers to a higher burden of illness, injury, disability, or mortality experienced by one group relative to another. These disparities may be due to many factors, including age, income, and race. This book focuses on their estimation, ranging from classical approaches, including the quantification of a disparity, through more formal modelling, to modern methods involving flexible computational techniques. Features:
* Presents an overview of methods and applications of health disparity estimation
* First book to synthesize research in this field in a unified statistical framework
* Covers classical approaches, and builds to more modern computational techniques
* Includes many worked examples and case studies using real data
* Discusses available software for estimation
The book is designed primarily for researchers and graduate students in biostatistics, data science, and computer science. It will also be useful to many quantitative modelers in genetics, biology, sociology, and epidemiology.
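As a minimal illustration of the classical quantification of a disparity mentioned above, the sketch below (not from the book; all counts are invented) computes the two most common summary measures, the rate difference and the rate ratio, for two groups.

```python
# Hypothetical incidence data for two groups: case counts over
# equal amounts of person-time. All numbers are invented.
cases_a, person_years_a = 150, 100_000
cases_b, person_years_b = 60, 100_000

rate_a = cases_a / person_years_a   # incidence rate, group A
rate_b = cases_b / person_years_b   # incidence rate, group B

rate_difference = rate_a - rate_b   # absolute disparity (excess rate)
rate_ratio = rate_a / rate_b        # relative disparity
```

Here group A experiences 2.5 times the incidence of group B, an excess of 90 cases per 100,000 person-years; the book's more formal models refine such raw comparisons by adjusting for confounding factors.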
Provides a comprehensive and accessible introduction to general insurance pricing, based on the author's many years of experience as both a teacher and practitioner. Suitable for students taking a course in general insurance pricing, notably those studying to become an actuary through the UK Institute of Actuaries exams. No other title on the market is quite like this one: it is well suited to teaching and study, and is also an excellent guide for practitioners.
Stochastic control theory is a relatively young branch of mathematics. The beginning of its intensive development falls in the late 1950s and early 1960s. During that period an extensive literature appeared on optimal stochastic control using the quadratic performance criterion (see references in Wonham [76]). At the same time, Girsanov [25] and Howard [26] made the first steps in constructing a general theory, based on Bellman's technique of dynamic programming, developed by him somewhat earlier [4]. Two types of engineering problems engendered two different parts of stochastic control theory. Problems of the first type are associated with multistep decision making in discrete time, and are treated in the theory of discrete stochastic dynamic programming. For more on this theory, we note in addition to the work of Howard and Bellman, mentioned above, the books by Derman [8], Mine and Osaki [55], and Dynkin and Yushkevich [12]. Another class of engineering problems which encouraged the development of the theory of stochastic control involves time continuous control of a dynamic system in the presence of random noise. The case where the system is described by a differential equation and the noise is modeled as a time continuous random process is the core of the optimal control theory of diffusion processes. This book deals with this latter theory.
The primary aims of this book are to provide modern statistical techniques and theory for stochastic processes. The stochastic processes mentioned here are not restricted to the usual AR, MA and ARMA processes. A wide variety of stochastic processes, e.g., non-Gaussian linear processes, long-memory processes, nonlinear processes, non-ergodic processes and diffusion processes are described. The authors discuss the usual estimation and testing theory and also many other statistical methods and techniques, e.g., discriminant analysis, nonparametric methods, semiparametric approaches, higher order asymptotic theory in view of differential geometry, large deviation principle and saddlepoint approximation. Because it is difficult to use the exact distribution theory, the discussion is based on the asymptotic theory. The optimality of various procedures is often shown by use of the local asymptotic normality (LAN) which is due to Le Cam. The LAN gives a unified view for the time series asymptotic theory.
Many recent advances in modelling within the applied sciences and engineering have focused on the increasing importance of sensitivity analyses. For a given physical, financial or environmental model, increased emphasis is now placed on assessing the consequences of changes in model outputs that result from small changes or errors in both the hypotheses and parameters. The approach proposed in this book is entirely new and features two main characteristics. Even when extremely small, errors possess biases and variances. The methods presented here are able, thanks to a specific differential calculus, to provide information about the correlation between errors in different parameters of the model, as well as information about the biases introduced by non-linearity. The approach makes use of very powerful mathematical tools (Dirichlet forms), which allow one to deal with errors in infinite dimensional spaces, such as spaces of functions or stochastic processes. The method is therefore applicable to non-elementary models along the lines of those encountered in modern physics and finance. This text has been drawn from presentations of research done over the past ten years and that is still ongoing. The work was presented in conjunction with a course taught jointly at the Universities of Paris 1 and Paris 6. The book is intended for students, researchers and engineers with good knowledge in probability theory.
This book presents accurate and efficient computer algorithms for factoring matrices, solving linear systems of equations, and extracting eigenvalues and eigenvectors. Regardless of the software system used, it describes and gives examples of the use of modern computer software for numerical linear algebra. It begins with a discussion of the basics of numerical computations, and then describes the relevant properties of matrix inverses, factorisations, matrix and vector norms, and other topics in linear algebra. The book is essentially self-contained, with the topics addressed constituting the essential material for an introductory course in statistical computing. Numerous exercises allow the text to be used for a first course in statistical computing or as a supplementary text for various courses that emphasise computations.
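As a small taste of the eigenvalue-extraction task mentioned above, here is a pure-Python power-iteration sketch. It is an illustration, not an algorithm reproduced from the book; in practice one would call a library routine backed by LAPACK. The matrix is invented.

```python
import math

# Power iteration: extracts the dominant eigenvalue/eigenvector pair
# of a matrix by repeatedly multiplying and renormalizing a vector.
A = [[4.0, 1.0],
     [1.0, 3.0]]

def matvec(M, v):
    """Dense matrix-vector product."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def norm(v):
    return math.sqrt(sum(x * x for x in v))

v = [1.0, 0.0]
for _ in range(100):
    w = matvec(A, v)
    v = [x / norm(w) for x in w]      # renormalize each iteration

# Rayleigh quotient v.Av gives the dominant eigenvalue estimate
lam = sum(v[i] * matvec(A, v)[i] for i in range(2))
```

For this symmetric matrix the exact dominant eigenvalue is (7 + sqrt(5))/2, and the iteration converges to it rapidly because the two eigenvalues are well separated.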
Statistical Applications for Environmental Analysis and Risk Assessment guides readers through real-world situations and the best statistical methods used to determine the nature and extent of the problem, evaluate the potential human health and ecological risks, and design and implement remedial systems as necessary. Featuring numerous worked examples using actual data and ready-made software scripts, Statistical Applications for Environmental Analysis and Risk Assessment also includes:
* Descriptions of basic statistical concepts and principles in an informal style that does not presume prior familiarity with the subject
* Detailed illustrations of statistical applications in the environmental and related water resources fields, using real-world data in the contexts that would typically be encountered by practitioners
* Software scripts using the high-powered statistical software system R, supplemented by USEPA's ProUCL and USDOE's VSP software packages, all of which are freely available
* Coverage of frequent data sample issues, such as non-detects, outliers, skewness, and sustained and cyclical trends, that habitually plague environmental data samples
* Clear demonstrations of the crucial, but often overlooked, role of statistics in environmental sampling design and subsequent exposure risk assessment
If you know a little bit about financial mathematics but don't yet know a lot about programming, then C++ for Financial Mathematics is for you. C++ is an essential skill for many jobs in quantitative finance, but learning it can be a daunting prospect. This book gathers together everything you need to know to price derivatives in C++ without unnecessary complexities or technicalities. It leads the reader step-by-step from programming novice to writing a sophisticated and flexible financial mathematics library. At every step, each new idea is motivated and illustrated with concrete financial examples. As employers understand, there is more to programming than knowing a computer language. As well as covering the core language features of C++, this book teaches the skills needed to write truly high quality software. These include topics such as unit tests, debugging, design patterns and data structures. The book teaches everything you need to know to solve realistic financial problems in C++. It can be used for self-study or as a textbook for an advanced undergraduate or master's level course.
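The kind of computation such a library ultimately performs can be sketched briefly. The example below is in Python rather than C++, purely for illustration and not taken from the book: a Monte Carlo price of a European call under Black-Scholes dynamics, checked against the closed-form value. All parameter values are invented.

```python
import math
import random

random.seed(42)

# Monte Carlo price of a European call under Black-Scholes dynamics:
# S_T = S0 * exp((r - 0.5*sigma^2)*T + sigma*sqrt(T)*Z), Z ~ N(0, 1)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_paths = 200_000

total_payoff = 0.0
for _ in range(n_paths):
    z = random.gauss(0.0, 1.0)
    s_t = S0 * math.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
    total_payoff += max(s_t - K, 0.0)     # call payoff at maturity

mc_price = math.exp(-r * T) * total_payoff / n_paths

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Closed-form Black-Scholes value for comparison
d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
d2 = d1 - sigma * math.sqrt(T)
bs_price = S0 * phi(d1) - K * math.exp(-r * T) * phi(d2)
```

With 200,000 paths the Monte Carlo estimate lands within a few cents of the analytic price; the book's point is that structuring such pricers as a reusable, tested C++ library is where the real software craft lies.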
Geometric Data Analysis (GDA) is the name suggested by P. Suppes (Stanford University) to designate the approach to multivariate statistics initiated by Benzécri as Correspondence Analysis, an approach that has become more and more used and appreciated over the years. This book presents the full formalization of GDA in terms of linear algebra - the most original and far-reaching consequential feature of the approach - and shows also how to integrate standard statistical tools such as Analysis of Variance, including Bayesian methods. Chapter 9, Research Case Studies, is nearly a book in itself; it presents the methodology in action on three extensive applications, one from medicine, one from political science, and one from education (data borrowed from the Stanford computer-based Educational Program for Gifted Youth). Thus the readership of the book includes both mathematicians interested in the applications of mathematics and researchers willing to master an exceptionally powerful approach to statistical data analysis.
The study of scan statistics and their applications to many different scientific and engineering problems has received considerable attention in the literature recently. In addition to challenging theoretical problems, the area of scan statistics has also found exciting applications in diverse disciplines such as archaeology, astronomy, epidemiology, geography, material science, molecular biology, reconnaissance, reliability and quality control, sociology, and telecommunication. This will be clearly evident when one goes through this volume. In this volume, we have brought together a collection of experts working in this area of research in order to review some of the developments that have taken place over the years and also to present their new works and point out some open problems. With this in mind, we selected authors for this volume with some having theoretical interests and others being primarily concerned with applications of scan statistics. Our sincere hope is that this volume will thus provide a comprehensive survey of all the developments in this area of research and hence will serve as a valuable source as well as reference for theoreticians and applied researchers. Graduate students interested in this area will find this volume to be particularly useful as it points out many open challenging problems that they could pursue. This volume will also be appropriate for teaching a graduate-level special course on this topic.
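For readers new to the topic, a fixed-window scan statistic is simply the maximum number of events observed in any window of a given length. The sketch below is an illustration with invented event times, not an example from the volume; it uses the standard fact that the maximizing window can be taken to start at an event time.

```python
# Scan statistic: the maximum number of events falling in any
# window of fixed length w along the observation interval.
event_times = [0.8, 1.1, 1.4, 2.0, 4.2, 4.3, 4.5, 4.6, 7.9, 9.0]
w = 1.0

def scan_statistic(times, w):
    """Maximum event count over all windows [t, t + w]; it suffices
    to consider windows anchored at each event time."""
    times = sorted(times)
    best = 0
    for i, t in enumerate(times):
        count = sum(1 for u in times[i:] if u <= t + w)
        best = max(best, count)
    return best

S = scan_statistic(event_times, w)
```

For these times the cluster at 4.2-4.6 gives S = 4; the volume's concern is the distribution of S under a null model, which tells you whether such a cluster is surprising.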
Modern statistics consists of methods which help in drawing inferences about the population under consideration. These populations may actually exist, or could be generated by repeated experimentation. The medium of drawing inferences about the population is the sample, which is a subset of measurements selected from the population. Each measurement in the sample is used for making inferences about the population. The populations, and also the methods of sample selection, differ from one field of science to the other. Social scientists use surveys to collect the sample information, whereas the physical scientists employ the method of experimentation for obtaining this information. This is because in the social sciences the factors that cause variation in the measurements on the study variable for the population units cannot be controlled, whereas in the physical sciences these factors can be controlled, at least to some extent, through proper experimental design. Several excellent books on sampling theory are available in the market. These books discuss the theory of sample surveys in great depth and detail, and are suited to postgraduate students majoring in statistics. Research workers in the field of sampling methodology can also make use of these books. However, not many suitable books are available which can be used by students and researchers in the fields of economics, social sciences, extension education, agriculture, medical sciences, business management, etc. These students and workers usually conduct sample surveys during their research projects.
One of the main difficulties of applying an evolutionary algorithm (or, as a matter of fact, any heuristic method) to a given problem is to decide on an appropriate set of parameter values. Typically these are specified before the algorithm is run and include population size, selection rate, and operator probabilities, not to mention the representation and the operators themselves. This book gives the reader a solid perspective on the different approaches that have been proposed to automate control of these parameters, as well as an understanding of their interactions. The book covers a broad area of evolutionary computation, including genetic algorithms, evolution strategies, genetic programming, and estimation of distribution algorithms, and also discusses the issues of specific parameters used in parallel implementations, multi-objective evolutionary algorithms, and practical considerations for real-world applications. It is a recommended read for researchers and practitioners of evolutionary computation and heuristic methods.
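A classic example of the automated parameter control surveyed in such work is Rechenberg's 1/5 success rule for a (1+1) evolution strategy: the mutation step size is grown when more than one fifth of recent mutations succeed, and shrunk otherwise. The sketch below is a minimal illustration, not code from the book; the test function, adaptation constants, and evaluation budget are invented.

```python
import random

random.seed(1)

def sphere(x):
    """Sphere function: a standard unimodal minimization test problem."""
    return sum(v * v for v in x)

# (1+1)-ES with the 1/5 success rule: after every 20 mutations, grow
# the step size if more than 1/5 of them improved the parent, else shrink.
dim, sigma = 5, 1.0
parent = [random.uniform(-5, 5) for _ in range(dim)]
f_parent = sphere(parent)
f0 = f_parent                        # initial fitness, kept for comparison
successes = 0

for gen in range(1, 2001):
    child = [v + random.gauss(0, sigma) for v in parent]
    f_child = sphere(child)
    if f_child < f_parent:           # elitist: keep the better of the two
        parent, f_parent = child, f_child
        successes += 1
    if gen % 20 == 0:
        sigma *= 1.22 if successes / 20 > 0.2 else 0.82
        successes = 0
```

The step size thus tracks the scale of the remaining search: large while far from the optimum, shrinking as the parent closes in, with no hand tuning of sigma required.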
This book is intended as a text for graduate students and as a reference for workers in probability and statistics. The prerequisite is honest calculus. The material covered in Parts Two to Five inclusive requires about three to four semesters of graduate study. The introductory part may serve as a text for an undergraduate course in elementary probability theory. Numerous historical remarks about results, methods, and the evolution of various fields are an intrinsic part of the text. About a third of the second volume is devoted to conditioning and properties of sequences of various types of dependence. The other two thirds are devoted to random functions; the last Part on Elements of random analysis is more sophisticated.
In the field of molecular evolution, inferences about past evolutionary events are made using molecular data from currently living species. With the availability of genomic data from multiple related species, molecular evolution has become one of the most active and fastest growing fields of study in genomics and bioinformatics. Most studies in molecular evolution rely heavily on statistical procedures based on stochastic process modelling and advanced computational methods, including high-dimensional numerical optimization and Markov chain Monte Carlo. This book provides an overview of the statistical theory and methods used in studies of molecular evolution. It includes an introductory section suitable for readers who are new to the field, a section discussing practical methods for data analysis, and more specialized sections discussing specific models and addressing statistical issues relating to estimation and model choice. The chapters are written by the leaders of the field, and they take the reader from basic introductory material to the state-of-the-art statistical methods. This book is suitable for statisticians seeking to learn more about applications in molecular evolution, and for molecular evolutionary biologists with an interest in learning more about the theory behind the statistical methods applied in the field. The chapters of the book assume no advanced mathematical skills beyond basic calculus, although familiarity with basic probability theory will help the reader. Most relevant statistical concepts are introduced in the book in the context of their application in molecular evolution, and the book should be accessible to most biology graduate students with an interest in quantitative methods and theory. Rasmus Nielsen received his Ph.D. from the University of California at Berkeley in 1998 and, after a postdoc at Harvard University, assumed a faculty position in Statistical Genomics at Cornell University.
He is currently an Ole Rømer Fellow at the University of Copenhagen and holds a Sloan Research Fellowship. He is an associate editor of the Journal of Molecular Evolution and has published more than fifty original papers in peer-reviewed journals on the topic of this book. From the reviews: "...Overall this is a very useful book in an area of increasing importance." Journal of the Royal Statistical Society "I find Statistical Methods in Molecular Evolution very interesting and useful. It delves into problems that were considered very difficult just several years ago...the book is likely to stimulate the interest of statisticians that are unaware of this exciting field of applications. It is my hope that it will also help the 'wet lab' molecular evolutionist to better understand mathematical and statistical methods." Marek Kimmel for the Journal of the American Statistical Association, September 2006 "Who should read this book? We suggest that anyone who deals with molecular data (who does not?) and anyone who asks evolutionary questions (who should not?) ought to consult the relevant chapters in this book." Dan Graur and Dror Berel for Biometrics, September 2006 "Coalescence theory facilitates the merger of population genetics theory with phylogenetic approaches, but still, there are mostly two camps: phylogeneticists and population geneticists. Only a few people are moving freely between them. Rasmus Nielsen is certainly one of these researchers, and his work so far has merged many population genetic and phylogenetic aspects of biological research under the umbrella of molecular evolution. Although Nielsen did not contribute a chapter to his book, his work permeates all its chapters. This book gives an overview of his interests and current achievements in molecular evolution. In short, this book should be on your bookshelf." Peter Beerli for Evolution, 60(2), 2006
Improving the quality of products and manufacturing processes at low cost is an economic and technological challenge to industrial engineers and managers alike. In today's business world, the implementation of experimental design techniques often falls short of the mark due to a lack of statistical knowledge on the part of engineers and managers in their analyses of manufacturing process quality problems. This timely book aims to fill this gap in the statistical knowledge required by engineers to solve manufacturing quality problems by using Taguchi experimental design methodology. The book increases awareness of strategic methodology through real-life case studies, providing valuable information for both academics and professionals with no prior knowledge of the theory of probability and statistics. Experimental Quality:
* Provides a unique framework to help engineers and managers address quality problems and use strategic design methodology.
* Offers detailed case studies illustrating the implementation of experimental design theory.
* Is easily accessible without prior knowledge or understanding of probability and statistics.
This book provides an excellent resource for both academic and industrial environments, and will prove invaluable to practising industrial engineers, quality engineers and engineering managers from all disciplines.
Nonlinear statistical modelling is an area of growing importance. This monograph presents mostly new results and methods concerning the nonlinear regression model. Among the aspects which are considered are linear properties of nonlinear models, multivariate nonlinear regression, intrinsic and parameter effect curvature, algorithms for calculating the L2-estimator and both local and global approximation. In addition to this a chapter has been added on the large topic of nonlinear exponential families. The volume will be of interest to both experts in the field of nonlinear statistical modelling and to those working in the identification of models and optimization, as well as to statisticians in general.
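One standard algorithm for computing the L2- (least-squares) estimator in a nonlinear regression model is Gauss-Newton iteration. The sketch below is an illustration, not taken from the monograph; the model, data, and starting values are invented, and the data are noise-free so the fit recovers the generating parameters exactly.

```python
import math

# Gauss-Newton iteration for the nonlinear L2-estimator of the
# two-parameter model y = a * exp(b * x).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
a_true, b_true = 2.0, -0.5
ys = [a_true * math.exp(b_true * x) for x in xs]   # noise-free for clarity

a, b = 1.0, -0.1                                   # starting values
for _ in range(50):
    # residuals and Jacobian of the model at the current parameters
    r = [y - a * math.exp(b * x) for x, y in zip(xs, ys)]
    J = [[math.exp(b * x), a * x * math.exp(b * x)] for x in xs]
    # normal equations (J^T J) delta = J^T r, solved by Cramer's rule
    jtj = [[sum(J[i][p] * J[i][q] for i in range(len(xs))) for q in range(2)]
           for p in range(2)]
    jtr = [sum(J[i][p] * r[i] for i in range(len(xs))) for p in range(2)]
    det = jtj[0][0] * jtj[1][1] - jtj[0][1] * jtj[1][0]
    da = (jtr[0] * jtj[1][1] - jtr[1] * jtj[0][1]) / det
    db = (jtr[1] * jtj[0][0] - jtr[0] * jtj[1][0]) / det
    a, b = a + da, b + db
```

Each iteration linearizes the model around the current parameters and takes the resulting least-squares step; the curvature measures treated in the monograph quantify exactly how good that linearization is.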
This book is targeted at biologists with limited statistical background, and at statisticians and computer scientists interested in being effective collaborators on multi-disciplinary DNA microarray projects. State-of-the-art analysis methods are presented with minimal mathematical notation and a focus on concepts. This book is unique because it is authored by statisticians at the National Cancer Institute who are actively involved in the application of microarray technology. Many laboratories are not equipped to effectively design and analyze studies that take advantage of the promise of microarrays. Many of the software packages available to biologists were developed without the involvement of statisticians experienced in such studies, and contain tools that may not be optimal for particular applications. This book provides a sound preparation for designing microarray studies that have clear objectives, and for selecting analysis tools and strategies that provide clear and valid answers. The book offers an in-depth understanding of the design and analysis of experiments utilizing microarrays and should benefit scientists regardless of what software packages they prefer. In order to provide all readers with hands-on experience in data analysis, it includes an appendix tutorial on the use of BRB-ArrayTools and step-by-step analyses of several major datasets using this software, which is freely available from the National Cancer Institute for non-commercial use. The authors are current or former members of the Biometric Research Branch at the National Cancer Institute. They have collaborated on major biomedical studies utilizing microarrays and in the development of statistical methodology for the design and analysis of microarray investigations. Dr. Simon, chief of the branch, is also the architect of BRB-ArrayTools.