Welcome to Loot.co.za!
Study smarter and stay on top of your probability course with the bestselling Schaum's Outline, now with the NEW Schaum's app and website! Schaum's Outline of Probability, Third Edition is the go-to study guide for help in probability courses. It's ideal for undergraduates, graduate students, and professionals needing a tool for review. With an outline format that facilitates quick and easy review and mirrors the course in scope and sequence, this book helps you understand basic concepts and get the extra practice you need to excel in the course. Schaum's Outline of Probability, Third Edition supports the bestselling textbooks and is useful for a variety of classes, including Elementary Probability and Statistics, Data Analysis, Finite Mathematics, and many other courses. You'll find coverage of finite and countable sets, binomial coefficients, axioms of probability, conditional probability, expectation of a finite random variable, the Poisson distribution, and probability vectors and stochastic matrices. Also included: finite stochastic and tree diagrams, Chebyshev's inequality and the law of large numbers, calculation of binomial probabilities using the normal approximation, and regular Markov processes and stationary state distributions.
Features:
* NEW to this edition: the new Schaum's app and website!
* NEW to this edition: 20 NEW problem-solving videos online
* 430 solved problems
* Outline format to provide a concise guide to the standard college course in probability
* Clear, concise explanations of probability concepts
* Supports these major texts: Elementary Statistics: A Step by Step Approach (Bluman), Mathematics with Applications (Hungerford), and Discrete Mathematics and Its Applications (Rosen)
* Appropriate for the following courses: Elementary Probability and Statistics, Data Analysis, Finite Mathematics, Introduction to Mathematical Statistics, Mathematics for Biological Sciences, Introductory Statistics, Discrete Mathematics, Probability for Applied Science, and Introduction to Probability Theory
This monograph surveys the theory of quantitative homogenization for second-order linear elliptic systems in divergence form with rapidly oscillating periodic coefficients in a bounded domain. It begins with a review of the classical qualitative homogenization theory, and addresses the problem of convergence rates of solutions. The main body of the monograph investigates various interior and boundary regularity estimates that are uniform in the small parameter ε > 0. Additional topics include convergence rates for Dirichlet eigenvalues and asymptotic expansions of fundamental solutions, Green functions, and Neumann functions. The monograph is intended for advanced graduate students and researchers in the general areas of analysis and partial differential equations. It provides the reader with a clear and concise exposition of an important and currently active area of quantitative homogenization.
Generalising classical concepts of probability theory, the investigation of operator (semi)-stable laws as possible limit distributions of operator-normalized sums of i.i.d. random variables on finite-dimensional vector spaces started in 1969. Currently, this theory is still in progress and promises interesting applications. Parallel to this, similar stability concepts for probabilities on groups were developed during recent decades. It turns out that the existence of suitable limit distributions has a strong impact on the structure of both the normalizing automorphisms and the underlying group. Indeed, investigations in limit laws led to contractable groups and - at least within the class of connected groups - to homogeneous groups, in particular to groups that are topologically isomorphic to a vector space. Moreover, it has been shown that (semi)-stable measures on groups have a vector space counterpart and vice versa. The purpose of this book is to describe the structure of limit laws and the limit behaviour of normalized i.i.d. random variables on groups and on finite-dimensional vector spaces from a common point of view. This will also shed new light on the classical situation. Chapter I provides an introduction to stability problems on vector spaces. Chapter II is concerned with parallel investigations for homogeneous groups and in Chapter III the situation beyond homogeneous Lie groups is treated. Throughout, emphasis is laid on the description of features common to the group and vector space situations. Chapter I can be understood by graduate students with some background knowledge in infinite divisibility. Readers of Chapters II and III are assumed to be familiar with basic techniques from probability theory on locally compact groups.
Showcases the excellent data science environment in Python. Provides examples for readers to replicate, adapt, extend, and improve. Covers the crucial knowledge needed by geographic data scientists.
This IMA Volume in Mathematics and its Applications, RANDOM SETS: THEORY AND APPLICATIONS, is based on the proceedings of a very successful 1996 three-day Summer Program on "Application and Theory of Random Sets." We would like to thank the scientific organizers: John Goutsias (Johns Hopkins University), Ronald P.S. Mahler (Lockheed Martin), and Hung T. Nguyen (New Mexico State University) for their excellent work as organizers of the meeting and for editing the proceedings. We also take this opportunity to thank the Army Research Office (ARO), the Office of Naval Research (ONR), and the Eagan, Minnesota Engineering Center of Lockheed Martin Tactical Defense Systems, whose financial support made the summer program possible. Avner Friedman, Robert Gulliver. PREFACE: "Later generations will regard set theory as a disease from which one has recovered." - Henri Poincare. Random set theory was independently conceived by D.G. Kendall and G. Matheron in connection with stochastic geometry. It was however G.
Scientists and engineers often have to deal with systems that exhibit random or unpredictable elements and must effectively evaluate probabilities in each situation. Computer simulations, while the traditional tool used to solve such problems, are limited in the scale and complexity of the problems they can solve. Formalized Probability Theory and Applications Using Theorem Proving discusses some of the limitations inherent in computer systems when applied to problems of probabilistic analysis, and presents a novel solution to these limitations, combining higher-order logic with computer-based theorem proving. Combining practical application with theoretical discussion, this book is an important reference tool for mathematicians, scientists, engineers, and researchers in all STEM fields.
Primary Audience for the Book
* Specialists in numerical computations who are interested in algorithms with automatic result verification.
* Engineers, scientists, and practitioners who desire results with automatic verification and who would therefore benefit from the experience of successful applications.
* Students in applied mathematics and computer science who want to learn these methods.
Goal of the Book
This book contains surveys of applications of interval computations, i.e., applications of numerical methods with automatic result verification, that were presented at an international workshop on the subject in El Paso, Texas, February 23-25, 1995. The purpose of this book is to disseminate detailed and surveyed information about existing and potential applications of this new growing field.
Brief Description of the Papers
At the most fundamental level, interval arithmetic operations work with sets: the result of a single arithmetic operation is the set of all possible results as the operands range over the domain. For example, [0.9, 1.1] + [2.9, 3.1] = [3.8, 4.2], where [3.8, 4.2] = {x + y | x ∈ [0.9, 1.1] and y ∈ [2.9, 3.1]}. The power of interval arithmetic comes from the fact that (i) the elementary operations and standard functions can be computed for intervals with formulas and subroutines; and (ii) directed roundings can be used, so that the images of these operations (e.g.
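The interval addition worked out in the blurb can be sketched in ordinary Python. This is a minimal illustration under our own naming (the `Interval` class is hypothetical, not code from the book); verified computing as described above additionally uses directed rounding, which plain floats do not provide, so the computed endpoints here are only correct up to floating-point rounding.

```python
# Minimal interval-arithmetic sketch: each operation returns the set of
# all possible results as the operands range over their intervals.
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        # [a, b] + [c, d] = [a + c, b + d]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # [a, b] * [c, d]: take min/max over all four endpoint products
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

# The example from the text: [0.9, 1.1] + [2.9, 3.1] = [3.8, 4.2]
s = Interval(0.9, 1.1) + Interval(2.9, 3.1)
print(s)  # endpoints approximately 3.8 and 4.2, up to rounding
```

A production implementation would round each lower endpoint down and each upper endpoint up, so the result interval is guaranteed to enclose the exact one.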
In applications, and especially in mathematical finance, random time-dependent events are often modeled as stochastic processes. Assumptions are made about the structure of such processes, and serious researchers will want to justify those assumptions through the use of data. As statisticians are wont to say, "In God we trust; all others must bring data."
The book equips students with the end-to-end skills needed to do data science. That means gathering, cleaning, preparing, and sharing data, then using statistical models to analyse data, writing about the results of those models, drawing conclusions from them, and finally, using the cloud to put a model into production, all done in a reproducible way. At the moment, there are a lot of books that teach data science, but most of them assume that you already have the data. This book fills that gap by detailing how to go about gathering datasets, cleaning and preparing them, before analysing them. There are also a lot of books that teach statistical modelling, but few of them teach how to communicate the results of the models and how they help us learn about the world. Very few data science textbooks cover ethics, and most of those that do have a token ethics chapter. Finally, reproducibility is not often emphasised in data science books. This book is based around a straightforward workflow conducted in an ethical and reproducible way: gather data, prepare data, analyse data, and communicate those findings. It achieves these goals by working through extensive case studies of gathering and preparing data, and by integrating ethics throughout. It is specifically designed around teaching how to write about the data and models, so aspects such as writing are explicitly covered. And finally, the use of GitHub and the open-source statistical language R are built in throughout the book. Key Features: Extensive code examples. Ethics integrated throughout. Reproducibility integrated throughout. Focus on data gathering, messy data, and cleaning data. Extensive formative assessment throughout.
The aim of this graduate textbook is to provide a comprehensive advanced course in the theory of statistics covering those topics in estimation, testing, and large sample theory which a graduate student might typically need to learn as preparation for work on a Ph.D. An important strength of this book is that it provides a mathematically rigorous and even-handed account of both Classical and Bayesian inference in order to give readers a broad perspective. For example, the "uniformly most powerful" approach to testing is contrasted with available decision-theoretic approaches.
Chances Are is the first book to make statistics accessible to everyone, regardless of how much math you remember from school. Do percentages confuse you? Can you tell the difference among a mean, median, and mode? Steve Slavin can help. With Chances Are, you can actually teach yourself all the statistics you will ever need.
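For readers wondering about that mean/median/mode distinction, the three measures differ even on a tiny dataset. The scores below are made up for illustration; Python's standard library computes all three:

```python
# Mean vs. median vs. mode on a small hypothetical list of test scores.
import statistics

scores = [55, 70, 70, 80, 95]

mean = statistics.mean(scores)      # (55+70+70+80+95)/5 = 74
median = statistics.median(scores)  # middle value of the sorted list: 70
mode = statistics.mode(scores)      # most frequent value: 70
print(mean, median, mode)
```

Here the single high score of 95 pulls the mean above the median, which is exactly the kind of distinction the book teaches you to notice.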
Handbook of Alternative Data in Finance, Volume I motivates and challenges the reader to explore and apply Alternative Data in finance. The book provides a robust and in-depth overview of Alternative Data, including its definition, characteristics, difference from conventional data, categories of Alternative Data, Alternative Data providers, and more. The book also offers a rigorous and detailed exploration of process, application and delivery that should be practically useful to researchers and practitioners alike.
Features:
* Includes cutting-edge applications in machine learning, fintech, and more
* Suitable for professional quantitative analysts, and as a resource for postgraduates and researchers in financial mathematics
* Features chapters from many leading researchers and practitioners.
Machine learning is a novel discipline concerned with the analysis of large, multivariable datasets. It involves computationally intensive methods, like factor analysis, cluster analysis, and discriminant analysis. It is currently mainly the domain of computer scientists, and is already commonly used in the social sciences, marketing research, operational research, and the applied sciences. It is virtually unused in clinical research. This is probably due to clinicians' traditional belief in clinical trials, where multiple variables are equally balanced by the randomization process and are not further taken into account. In contrast, modern computer data files often involve hundreds of variables, like genes and other laboratory values, and computationally intensive methods are required. This book was written as a hands-on presentation accessible to clinicians, and as a must-read publication for those new to the methods.
This book gives a self-contained introduction to the dynamic martingale approach to marked point processes (MPP). Based on the notion of a compensator, this approach gives a versatile tool for analyzing and describing the stochastic properties of an MPP. In particular, the authors discuss the relationship of an MPP to its compensator and particular classes of MPP are studied in great detail. The theory is applied to study properties of dependent marking and thinning, to prove results on absolute continuity of point process distributions, to establish sufficient conditions for stochastic ordering between point and jump processes, and to solve the filtering problem for certain classes of MPPs.
An Introduction to R and Python for Data Analysis helps teach students to code in both R and Python simultaneously. As both R and Python can be used in similar manners, it is useful and efficient to learn both at the same time, helping lecturers and students to teach and learn more and save time, whilst reinforcing the shared concepts and differences of the systems. This tandem learning is highly useful for students, helping them to become literate in both languages and develop skills which will be handy after their studies. This book presumes no prior experience with computing, and is intended to be used by students from a variety of backgrounds. The side-by-side formatting of this book helps introductory graduate students quickly grasp the basics of R and Python, with the exercises helping them to teach themselves the skills they will need upon the completion of their course, as employers now ask for competency in both R and Python. Teachers and lecturers will also find this book useful in their teaching, providing a singular work to help ensure their students are well trained in both computer languages. All data for exercises can be found here: https://github.com/tbrown122387/r_and_python_book/tree/master/data. Key features: - Teaches R and Python in a "side-by-side" way. - Examples are tailored to aspiring data scientists and statisticians, not software engineers. - Designed for introductory graduate students. - Does not assume any mathematical background.
This monograph provides, for the first time, a most comprehensive statistical account of composite sampling as an ingenious environmental sampling method to help accomplish observational economy in a variety of environmental and ecological studies. Sampling consists of selection, acquisition, and quantification of a part of the population. But often what is desirable is not affordable, and what is affordable is not adequate. How do we deal with this dilemma? Operationally, composite sampling recognizes the distinction between selection, acquisition, and quantification. In certain applications, it is a common experience that the costs of selection and acquisition are not very high, but the cost of quantification, or measurement, is substantially high. In such situations, one may select a sample sufficiently large to satisfy the requirement of representativeness and precision and then, by combining several sampling units into composites, reduce the cost of measurement to an affordable level. Thus composite sampling offers an approach to deal with the classical dilemma of desirable versus affordable sample sizes, when conventional statistical methods fail to resolve the problem. Composite sampling, at least under idealized conditions, incurs no loss of information for estimating the population means. But an important limitation to the method has been the loss of information on individual sample values, such as the extremely large value. In many of the situations where individual sample values are of interest or concern, composite sampling methods can be suitably modified to retrieve the information on individual sample values that may be lost due to compositing. In this monograph, we present statistical solutions to these and other issues that arise in the context of applications of composite sampling.
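The blurb's central claim, that pooling units into composites cuts the number of measurements without losing information about the mean, can be sketched numerically. The data below are simulated purely for illustration (not from the monograph), and with equal-size composites the identity holds exactly:

```python
# Composite sampling sketch: select and acquire many units cheaply, but
# measure only composites formed by pooling k units each. The mean of the
# composite measurements equals the ordinary sample mean.
import random

random.seed(0)
sample = [random.gauss(50, 10) for _ in range(120)]  # 120 acquired units

k = 4  # units pooled per composite
composites = [sum(sample[i:i + k]) / k for i in range(0, len(sample), k)]

estimate = sum(composites) / len(composites)   # needs only 30 measurements
sample_mean = sum(sample) / len(sample)        # would need 120 measurements
print(len(composites), estimate == sample_mean or abs(estimate - sample_mean) < 1e-9)
```

What compositing does sacrifice, as the blurb notes, is the individual values: the maximum of the composites generally understates the maximum of the original sample.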
Bayesian nonparametrics has grown tremendously in the last three decades, especially in the last few years. This book is the first systematic treatment of Bayesian nonparametric methods and the theory behind them. While the book is of special interest to Bayesians, it will also appeal to statisticians in general because Bayesian nonparametrics offers a whole continuous spectrum of robust alternatives to purely parametric and purely nonparametric methods of classical statistics. The book is primarily aimed at graduate students and can be used as the text for a graduate course in Bayesian nonparametrics. Though the emphasis of the book is on nonparametrics, there is a substantial chapter on asymptotics of classical Bayesian parametric models. Jayanta Ghosh has been Director and Jawaharlal Nehru Professor at the Indian Statistical Institute and President of the International Statistical Institute. He is currently professor of statistics at Purdue University. He has been editor of Sankhya and served on the editorial boards of several journals including the Annals of Statistics. Apart from Bayesian analysis, his interests include asymptotics, stochastic modeling, high dimensional model selection, reliability and survival analysis, and bioinformatics. R.V. Ramamoorthi is professor at the Department of Statistics and Probability at Michigan State University. He has published papers in the areas of sufficiency, invariance, comparison of experiments, nonparametric survival analysis, and Bayesian analysis. In addition to Bayesian nonparametrics, he is currently interested in Bayesian networks and graphical models. He is on the editorial board of Sankhya.
Over the last fifteen years fractal geometry has established itself as a substantial mathematical theory in its own right. The interplay between fractal geometry, analysis and stochastics has highly influenced recent developments in mathematical modeling of complicated structures. This process has been forced by problems in these areas related to applications in statistical physics, biomathematics and finance. This book is a collection of survey articles covering many of the most recent developments, like Schramm-Loewner evolution, fractal scaling limits, exceptional sets for percolation, and heat kernels on fractals. The authors were the keynote speakers at the conference "Fractal Geometry and Stochastics IV" at Greifswald in September 2008.
A "health disparity" refers to a higher burden of illness, injury, disability, or mortality experienced by one group relative to another. These disparities may be due to many factors including age, income, race, etc. This book will focus on their estimation, ranging from classical approaches including the quantification of a disparity, to more formal modelling, to modern approaches involving more flexible computational approaches. Features: * Presents an overview of methods and applications of health disparity estimation * First book to synthesize research in this field in a unified statistical framework * Covers classical approaches, and builds to more modern computational techniques * Includes many worked examples and case studies using real data * Discusses available software for estimation The book is designed primarily for researchers and graduate students in biostatistics, data science, and computer science. It will also be useful to many quantitative modelers in genetics, biology, sociology, and epidemiology.
For junior/senior undergraduates taking probability and statistics as applied to engineering, science, or computer science. This classic text provides a rigorous introduction to basic probability theory and statistical inference, with a unique balance between theory and methodology. Interesting, relevant applications use real data from actual studies, showing how the concepts and methods can be used to solve problems in the field. This revision focuses on improved clarity and deeper understanding.
Provides a comprehensive and accessible introduction to general insurance pricing, based on the author’s many years of experience as both a teacher and practitioner. Suitable for students taking a course in general insurance pricing, notably if they are studying to become an actuary through the UK Institute of Actuaries exams. There is no other title quite like this on the market: it is perfect for teaching and study, and is also an excellent guide for practitioners.
Stochastic control theory is a relatively young branch of mathematics. The beginning of its intensive development falls in the late 1950s and early 1960s. During that period an extensive literature appeared on optimal stochastic control using the quadratic performance criterion (see references in Wonham [76]). At the same time, Girsanov [25] and Howard [26] made the first steps in constructing a general theory, based on Bellman's technique of dynamic programming, developed by him somewhat earlier [4]. Two types of engineering problems engendered two different parts of stochastic control theory. Problems of the first type are associated with multistep decision making in discrete time, and are treated in the theory of discrete stochastic dynamic programming. For more on this theory, we note in addition to the work of Howard and Bellman, mentioned above, the books by Derman [8], Mine and Osaki [55], and Dynkin and Yushkevich [12]. Another class of engineering problems which encouraged the development of the theory of stochastic control involves time-continuous control of a dynamic system in the presence of random noise. The case where the system is described by a differential equation and the noise is modeled as a time-continuous random process is the core of the optimal control theory of diffusion processes. This book deals with this latter theory.