For surveys involving sensitive questions, randomized response techniques (RRTs) and other indirect questions are helpful in obtaining survey responses while maintaining the privacy of the respondents. Written by one of the leading experts in the world on RR, Randomized Response and Indirect Questioning Techniques in Surveys describes the current state of RR as well as emerging developments in the field. The author also explains how to extend RR to situations employing unequal probability sampling. While the theory of RR has grown phenomenally, the area has not kept pace in practice. Covering both theory and practice, the book first discusses replacing a direct response (DR) with an RR in a simple random sample with replacement (SRSWR). It then emphasizes how the application of RRTs in the estimation of attribute or quantitative features is valid for selecting respondents in a general manner. The author examines different ways to treat maximum likelihood estimation; covers optional RR devices, which provide alternatives to compulsory randomized response theory; and presents RR techniques that encompass quantitative variables, including those related to stigmatizing characteristics. He also gives his viewpoint on alternative RR techniques, including the item count technique, nominative technique, and three-card method.
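The classic randomized response device behind this literature is Warner's model: each respondent answers the sensitive question with probability p and its complement otherwise, so P(yes) = p·π + (1 − p)(1 − π), which can be inverted for π. The sketch below is an illustration under assumed parameters (the function name, design probability, and simulated survey are not taken from the book):

```python
import random

def warner_estimate(answers, p):
    """Warner-style randomized-response estimator of the sensitive proportion pi.

    Each respondent answered the sensitive question with probability p and its
    complement with probability 1 - p, so
        P(yes) = p*pi + (1 - p)*(1 - pi)
    and hence pi_hat = (lam_hat - (1 - p)) / (2*p - 1).
    """
    assert p != 0.5, "p = 1/2 makes pi unidentifiable"
    lam_hat = sum(answers) / len(answers)     # observed proportion of 'yes'
    return (lam_hat - (1 - p)) / (2 * p - 1)

# Simulated survey: true sensitive proportion pi = 0.30, spinner probability p = 0.7.
random.seed(42)
pi, p = 0.30, 0.7
answers = []
for _ in range(20000):
    carrier = random.random() < pi           # does the respondent have the trait?
    ask_sensitive = random.random() < p      # outcome of the randomizing device
    answers.append(carrier if ask_sensitive else not carrier)
print(round(warner_estimate(answers, p), 2))  # close to the true 0.30
```

Note that privacy comes from the interviewer never knowing which question was answered; the price is inflated variance, which grows without bound as p approaches 1/2.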
It is increasingly common for analysts to seek out the opinions of individuals and organizations using attitudinal scales such as degree of satisfaction or importance attached to an issue. Examples include levels of obesity, seriousness of a health condition, attitudes towards service levels, opinions on products, voting intentions, and the degree of clarity of contracts. Ordered choice models provide a relevant methodology for capturing the sources of influence that explain the choice made amongst a set of ordered alternatives. The methods have evolved to a level of sophistication that can allow for heterogeneity in the threshold parameters, in the explanatory variables (through random parameters), and in the decomposition of the residual variance. This book brings together contributions in ordered choice modeling from a number of disciplines, synthesizing developments over the last fifty years, and suggests useful extensions to account for the wide range of sources of influence on choice.
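The threshold mechanism described above can be made concrete with the standard ordered-logit formula, P(y = j) = Λ(μ_j − x′β) − Λ(μ_{j−1} − x′β); the cutpoints and index value below are made up for illustration, not taken from the book:

```python
import math

def ordered_logit_probs(xb, cutpoints):
    """Ordered-logit choice probabilities over J = len(cutpoints) + 1 alternatives:
    P(y = j) = Logistic(mu_j - x'b) - Logistic(mu_{j-1} - x'b),
    with mu_0 = -inf and mu_J = +inf."""
    logistic = lambda z: 1.0 / (1.0 + math.exp(-z))
    mus = [-math.inf] + list(cutpoints) + [math.inf]
    cdf = lambda m: 0.0 if m == -math.inf else 1.0 if m == math.inf else logistic(m - xb)
    return [cdf(mus[j + 1]) - cdf(mus[j]) for j in range(len(mus) - 1)]

# Four ordered alternatives (e.g. satisfaction levels) at an index value x'b = 0.4.
probs = ordered_logit_probs(xb=0.4, cutpoints=[-1.0, 0.5, 2.0])
print([round(p, 3) for p in probs])  # four probabilities summing to 1
```

Allowing heterogeneity, as the blurb notes, amounts to letting the cutpoints μ_j or the coefficients β vary across respondents rather than treating them as fixed constants.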
'Et moi, ..., si j'avais su comment en revenir, je n'y serais point allé.' ('And I, ..., had I known how to come back from it, I would never have gone.') Jules Verne. 'One service mathematics has rendered the human race. It has put common sense back where it belongs, on the topmost shelf next to the dusty canister labelled "discarded nonsense".' Eric T. Bell. 'The series is divergent; therefore we may be able to do something with it.' O. Heaviside. Mathematics is a tool for thought. A highly necessary tool in a world where both feedback and nonlinearities abound. Similarly, all kinds of parts of mathematics serve as tools for other parts and for other sciences. Applying a simple rewriting rule to the Bell quote above one finds such statements as: 'One service topology has rendered mathematical physics ...'; 'One service logic has rendered computer science ...'; 'One service category theory has rendered mathematics ...'. All arguably true. And all statements obtainable this way form part of the raison d'être of this series.
This book is devoted to the study of univariate distributions appropriate for the analyses of data known to be nonnegative. The book includes much material from reliability theory in engineering and survival analysis in medicine.
First published in 2000. Routledge is an imprint of Taylor & Francis, an informa company.
The aim of this book is to provide strong theoretical support for understanding and analyzing the behavior of evolutionary algorithms, and to create a bridge between probability, set-oriented numerics and evolutionary computation. The volume collects contributions presented at the EVOLVE 2011 international workshop, held in Luxembourg, May 25-27, 2011, coming from invited speakers and from selected regular submissions. The aim of EVOLVE is to unify the perspectives offered by probability, set-oriented numerics and evolutionary computation. EVOLVE focuses on challenging aspects that arise at the passage from theory to new paradigms and practice, elaborating on the foundations of evolutionary algorithms and theory-inspired methods merged with cutting-edge techniques that ensure performance guarantees. EVOLVE is also intended to foster a growing interest in robust and efficient methods with a sound theoretical background. The chapters present challenging theoretical findings, concrete optimization problems and new perspectives. By gathering contributions from researchers with different backgrounds, the book is expected to set the basis for a unified view and vocabulary in which theoretical advances may echo across domains.
This book provides a systematic in-depth analysis of nonparametric regression with random design. It covers almost all known estimates such as classical local averaging estimates including kernel, partitioning and nearest neighbor estimates, least squares estimates using splines, neural networks and radial basis function networks, penalized least squares estimates, local polynomial kernel estimates, and orthogonal series estimates. The emphasis is on distribution-free properties of the estimates. Most consistency results are valid for all distributions of the data. Whenever it is not possible to derive distribution-free results, as in the case of the rates of convergence, the emphasis is on results which require as few constraints on the distributions as possible, on distribution-free inequalities, and on adaptation. The relevant mathematical theory is systematically developed and requires only a basic knowledge of probability theory. The book will be a valuable reference for anyone interested in nonparametric regression and is a rich source of many useful mathematical techniques widely scattered in the literature. In particular, the book introduces the reader to empirical process theory, martingales and approximation properties of neural networks.
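A minimal example of the local-averaging idea mentioned above is the Nadaraya-Watson kernel estimate, a weighted average of the responses with weights decaying in the distance to the query point. The Gaussian kernel, bandwidth, and simulated regression function below are illustrative assumptions, not the book's notation:

```python
import math
import random

def nw_kernel_estimate(x, xs, ys, h=0.3):
    """Nadaraya-Watson local-averaging regression estimate with a Gaussian kernel:
    a weighted mean of the ys, weighting observations near x most heavily."""
    weights = [math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in xs]
    total = sum(weights)
    if total == 0:           # no data near x: estimate undefined, return 0 by convention
        return 0.0
    return sum(w * y for w, y in zip(weights, ys)) / total

# Simulated random design: X uniform on [0, 1], Y = X^2 + Gaussian noise.
random.seed(1)
xs = [random.uniform(0, 1) for _ in range(500)]
ys = [x * x + random.gauss(0, 0.05) for x in xs]
print(round(nw_kernel_estimate(0.5, xs, ys, h=0.1), 2))  # close to the true value 0.25
```

Partitioning and nearest-neighbor estimates fit the same template with different weight functions: indicator weights on cells of a partition, or equal weights on the k closest design points.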
"Statistical Modeling, Analysis and Management of Fuzzy Data," or SMFD for short, is an important contribution to a better understanding of a basic issue, an issue which has been controversial, and still is, though to a lesser degree. In substance, the issue is: are fuzziness and randomness distinct or coextensive facets of uncertainty? Are the theories of fuzziness and randomness competitive or complementary? In SMFD, these and related issues are addressed with rigor, authority and insight by prominent contributors drawn, in the main, from the probability theory, fuzzy set theory and data analysis communities. First, a historical perspective. The almost simultaneous births, close to half a century ago, of statistically-based information theory and cybernetics were two major events which marked the beginning of the steep ascent of probability theory and statistics in visibility, influence and importance. I was a student when information theory and cybernetics were born, and what is etched in my memory are the fascinating lectures by Shannon and Wiener in which they sketched their visions of the coming era of machine intelligence and automation of reasoning and decision processes. What I heard in those lectures inspired one of my first papers (1950), "An Extension of Wiener's Theory of Prediction," and led to my life-long interest in probability theory and its applications to information processing, decision analysis and control.
In applications, and especially in mathematical finance, random time-dependent events are often modeled as stochastic processes. Assumptions are made about the structure of such processes, and serious researchers will want to justify those assumptions through the use of data. As statisticians are wont to say, "In God we trust; all others must bring data."
Scientists and engineers often have to deal with systems that exhibit random or unpredictable elements and must effectively evaluate probabilities in each situation. Computer simulations, while the traditional tool used to solve such problems, are limited in the scale and complexity of the problems they can solve. Formalized Probability Theory and Applications Using Theorem Proving discusses some of the limitations inherent in computer systems when applied to problems of probabilistic analysis, and presents a novel solution to these limitations, combining higher-order logic with computer-based theorem proving. Combining practical application with theoretical discussion, this book is an important reference tool for mathematicians, scientists, engineers, and researchers in all STEM fields.
This IMA Volume in Mathematics and its Applications, RANDOM SETS: THEORY AND APPLICATIONS, is based on the proceedings of a very successful 1996 three-day Summer Program on "Application and Theory of Random Sets." We would like to thank the scientific organizers: John Goutsias (Johns Hopkins University), Ronald P.S. Mahler (Lockheed Martin), and Hung T. Nguyen (New Mexico State University) for their excellent work as organizers of the meeting and for editing the proceedings. We also take this opportunity to thank the Army Research Office (ARO), the Office of Naval Research (ONR), and the Eagan, Minnesota Engineering Center of Lockheed Martin Tactical Defense Systems, whose financial support made the summer program possible. Avner Friedman, Robert Gulliver. PREFACE: "Later generations will regard set theory as a disease from which one has recovered." - Henri Poincaré. Random set theory was independently conceived by D.G. Kendall and G. Matheron in connection with stochastic geometry. It was however G.
This monograph surveys the theory of quantitative homogenization for second-order linear elliptic systems in divergence form with rapidly oscillating periodic coefficients in a bounded domain. It begins with a review of the classical qualitative homogenization theory, and addresses the problem of convergence rates of solutions. The main body of the monograph investigates various interior and boundary regularity estimates that are uniform in the small parameter ε > 0. Additional topics include convergence rates for Dirichlet eigenvalues and asymptotic expansions of fundamental solutions, Green functions, and Neumann functions. The monograph is intended for advanced graduate students and researchers in the general areas of analysis and partial differential equations. It provides the reader with a clear and concise exposition of an important and currently active area of quantitative homogenization.
Showcases the excellent data science environment in Python. Provides examples for readers to replicate, adapt, extend, and improve. Covers the crucial knowledge needed by geographic data scientists.
Generalising classical concepts of probability theory, the investigation of operator (semi)-stable laws as possible limit distributions of operator-normalized sums of i.i.d. random variables on finite-dimensional vector spaces started in 1969. Currently, this theory is still in progress and promises interesting applications. Parallel to this, similar stability concepts for probabilities on groups were developed during recent decades. It turns out that the existence of suitable limit distributions has a strong impact on the structure of both the normalizing automorphisms and the underlying group. Indeed, investigations in limit laws led to contractible groups and - at least within the class of connected groups - to homogeneous groups, in particular to groups that are topologically isomorphic to a vector space. Moreover, it has been shown that (semi)-stable measures on groups have a vector space counterpart and vice versa. The purpose of this book is to describe the structure of limit laws and the limit behaviour of normalized i.i.d. random variables on groups and on finite-dimensional vector spaces from a common point of view. This will also shed new light on the classical situation. Chapter I provides an introduction to stability problems on vector spaces. Chapter II is concerned with parallel investigations for homogeneous groups, and in Chapter III the situation beyond homogeneous Lie groups is treated. Throughout, emphasis is laid on the description of features common to the group and vector space situations. Chapter I can be understood by graduate students with some background knowledge in infinite divisibility. Readers of Chapters II and III are assumed to be familiar with basic techniques from probability theory on locally compact groups.
This book gives a self-contained introduction to the dynamic martingale approach to marked point processes (MPP). Based on the notion of a compensator, this approach gives a versatile tool for analyzing and describing the stochastic properties of an MPP. In particular, the authors discuss the relationship of an MPP to its compensator and particular classes of MPP are studied in great detail. The theory is applied to study properties of dependent marking and thinning, to prove results on absolute continuity of point process distributions, to establish sufficient conditions for stochastic ordering between point and jump processes, and to solve the filtering problem for certain classes of MPPs.
Chances Are is the first book to make statistics accessible to everyone, regardless of how much math you remember from school. Do percentages confuse you? Can you tell the difference among a mean, median, and mode? Steve Slavin can help. With Chances Are, you can actually teach yourself all the statistics you will ever need.
Primary Audience for the Book * Specialists in numerical computations who are interested in algorithms with automatic result verification. * Engineers, scientists, and practitioners who desire results with automatic verification and who would therefore benefit from the experience of successful applications. * Students in applied mathematics and computer science who want to learn these methods. Goal of the Book This book contains surveys of applications of interval computations, i.e., applications of numerical methods with automatic result verification, that were presented at an international workshop on the subject in El Paso, Texas, February 23-25, 1995. The purpose of this book is to disseminate detailed and surveyed information about existing and potential applications of this new growing field. Brief Description of the Papers At the most fundamental level, interval arithmetic operations work with sets: the result of a single arithmetic operation is the set of all possible results as the operands range over the domain. For example, [0.9, 1.1] + [2.9, 3.1] = [3.8, 4.2], where [3.8, 4.2] = {x + y | x ∈ [0.9, 1.1] and y ∈ [2.9, 3.1]}. The power of interval arithmetic comes from the fact that (i) the elementary operations and standard functions can be computed for intervals with formulas and subroutines; and (ii) directed roundings can be used, so that the images of these operations (e.g.
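The enclosure idea and the role of directed rounding can be sketched in a few lines. This is a toy illustration, not one of the verified libraries the workshop surveyed; the Interval class is an assumption, and the outward rounding via math.nextafter requires Python 3.9+:

```python
import math

class Interval:
    """Closed interval [lo, hi] with outward-rounded arithmetic, so the
    true mathematical result set is always enclosed by the computed one."""
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Directed (outward) rounding: nudge each endpoint one ulp outward.
        return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                        math.nextafter(self.hi + other.hi, math.inf))

    def __mul__(self, other):
        # The product interval is spanned by the four endpoint products.
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(math.nextafter(min(products), -math.inf),
                        math.nextafter(max(products), math.inf))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

s = Interval(0.9, 1.1) + Interval(2.9, 3.1)
print(s)  # a tight machine enclosure of [3.8, 4.2]
```

Serious implementations set the hardware rounding mode instead of nudging by an ulp, but the guarantee is the same: the exact result can never escape the computed interval.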
The aim of this graduate textbook is to provide a comprehensive advanced course in the theory of statistics covering those topics in estimation, testing, and large sample theory which a graduate student might typically need to learn as preparation for work on a Ph.D. An important strength of this book is that it provides a mathematically rigorous and even-handed account of both Classical and Bayesian inference in order to give readers a broad perspective. For example, the "uniformly most powerful" approach to testing is contrasted with available decision-theoretic approaches.
Machine learning is a novel discipline concerned with the analysis of large, multivariable data sets. It involves computationally intensive methods such as factor analysis, cluster analysis, and discriminant analysis. It is currently mainly the domain of computer scientists, and is already commonly used in the social sciences, marketing research, operational research and the applied sciences. It is virtually unused in clinical research. This is probably due to the traditional belief of clinicians in clinical trials, where multiple variables are equally balanced by the randomization process and are not further taken into account. In contrast, modern computer data files often involve hundreds of variables, such as genes and other laboratory values, and computationally intensive methods are required. This book was written as a hands-on presentation accessible to clinicians, and as a must-read publication for those new to the methods.
An Introduction to R and Python for Data Analysis helps teach students to code in both R and Python simultaneously. As R and Python can be used in similar ways, it is useful and efficient to learn both at the same time, helping lecturers and students teach and learn more and save time, while reinforcing the shared concepts and differences of the two systems. This tandem learning is highly useful for students, helping them become literate in both languages and develop skills that will be handy after their studies. The book presumes no prior experience with computing and is intended for students from a variety of backgrounds. Its side-by-side formatting helps introductory graduate students quickly grasp the basics of R and Python, with exercises that help them teach themselves the skills they will need upon completing their course, as employers now ask for competency in both R and Python. Teachers and lecturers will also find this book useful as a single work to help ensure their students are well trained in both computer languages. All data for the exercises can be found here: https://github.com/tbrown122387/r_and_python_book/tree/master/data. Key features: - Teaches R and Python in a "side-by-side" way. - Examples are tailored to aspiring data scientists and statisticians, not software engineers. - Designed for introductory graduate students. - Does not assume any mathematical background.
Recent developments show that probability methods have become a very powerful tool in such different areas as statistical physics, dynamical systems, Riemannian geometry, group theory, harmonic analysis, graph theory and computer science. This volume is an outcome of the special semester "Random Walks," held in 2001 at the Schroedinger Institute in Vienna, Austria. It contains original research articles with non-trivial new approaches based on applications of random walks and similar processes to Lie groups, geometric flows, physical models on infinite graphs, random number generators, Lyapunov exponents, geometric group theory, spectral theory of graphs and potential theory. Highlights are the first survey of the theory of the stochastic Loewner evolution and its applications to percolation theory (a new rapidly developing and very promising subject at the crossroads of probability, statistical physics and harmonic analysis), surveys on expander graphs, random matrices and quantum chaos, cellular automata and symbolic dynamical systems, and others. The contributors to the volume are the leading experts in the area. The book will provide a valuable source both for active researchers and graduate students in the respective fields.
Study smarter and stay on top of your probability course with the bestselling Schaum's Outline, now with the NEW Schaum's app and website! Schaum's Outline of Probability, Third Edition is the go-to study guide for help in probability courses. It's ideal for undergrads, graduate students, and professionals needing a tool for review. With an outline format that facilitates quick and easy review and mirrors the course in scope and sequence, this book helps you understand basic concepts and get the extra practice you need to excel in the course. Schaum's Outline of Probability, Third Edition supports the bestselling textbooks and is useful for a variety of classes, including Elementary Probability and Statistics, Data Analysis, Finite Mathematics, and many other courses. You'll find coverage of finite and countable sets, binomial coefficients, axioms of probability, conditional probability, expectation of a finite random variable, the Poisson distribution, probability vectors, and stochastic matrices. Also included: finite stochastic and tree diagrams, Chebyshev's inequality and the law of large numbers, calculation of binomial probabilities using the normal approximation, and regular Markov processes and stationary state distributions. Features: *NEW to this edition: the new Schaum's app and website! *NEW to this edition: 20 NEW problem-solving videos online *430 solved problems *Outline format to provide a concise guide to the standard college course in probability *Clear, concise explanations of probability concepts *Supports these major texts: Elementary Statistics: A Step by Step Approach (Bluman), Mathematics with Applications (Hungerford), and Discrete Mathematics and Its Applications (Rosen) *Appropriate for the following courses: Elementary Probability and Statistics, Data Analysis, Finite Mathematics, Introduction to Mathematical Statistics, Mathematics for Biological Sciences, Introductory Statistics, Discrete Mathematics, Probability for Applied Science, and Introduction to Probability Theory
Over the last fifteen years fractal geometry has established itself as a substantial mathematical theory in its own right. The interplay between fractal geometry, analysis and stochastics has highly influenced recent developments in mathematical modeling of complicated structures. This process has been forced by problems in these areas related to applications in statistical physics, biomathematics and finance. This book is a collection of survey articles covering many of the most recent developments, like Schramm-Loewner evolution, fractal scaling limits, exceptional sets for percolation, and heat kernels on fractals. The authors were the keynote speakers at the conference "Fractal Geometry and Stochastics IV" at Greifswald in September 2008.