This is a book about regression analysis, that is, the situation in statistics where the distribution of a response (or outcome) variable is related to explanatory variables (or covariates). This is an extremely common situation in the application of statistical methods in many fields, and linear regression, logistic regression, and Cox proportional hazards regression are frequently used for quantitative, binary, and survival time outcome variables, respectively. Several books on these topics have appeared, and for that reason one may well ask why we embark on writing still another book on regression. We have two main reasons for doing this. 1. First, we want to highlight similarities among linear, logistic, proportional hazards, and other regression models that include a linear predictor. These models are often treated entirely separately in texts, in spite of the fact that all operations on the models dealing with the linear predictor are precisely the same, including the handling of categorical and quantitative covariates, testing for linearity, and studying interactions. 2. Second, we want to emphasize that, for any type of outcome variable, multiple regression models are composed of simple building blocks that are added together in the linear predictor: that is, t-tests, one-way analyses of variance and simple linear regressions for quantitative outcomes; 2x2 and 2x(k+1) tables and simple logistic regressions for binary outcomes; and 2- and (k+1)-sample logrank tests and simple Cox regressions for survival data. This has two consequences. All these simple and well-known methods can be considered as special cases of the regression models. On the other hand, the effect of a single explanatory variable in a multiple regression model can be interpreted in a way similar to that obtained in the simple analysis, however, now valid only for the other explanatory variables in the model "held fixed."
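As a hedged illustration of the building-block idea described above (a minimal sketch of my own in Python; the book itself does not prescribe any software), a two-sample t-test can be reproduced as a simple linear regression with a single binary covariate in the linear predictor:

```python
# Minimal sketch (assumption: NumPy, SciPy and statsmodels are available;
# the data are synthetic and purely illustrative).
import numpy as np
import scipy.stats as st
import statsmodels.api as sm

rng = np.random.default_rng(0)
y0 = rng.normal(10.0, 2.0, size=30)   # outcomes in group 0
y1 = rng.normal(12.0, 2.0, size=30)   # outcomes in group 1

# Classical two-sample t-test (equal variances assumed).
t_stat, p_t = st.ttest_ind(y0, y1)

# The same comparison as a linear regression with one binary covariate.
y = np.concatenate([y0, y1])
group = np.concatenate([np.zeros(30), np.ones(30)])
X = sm.add_constant(group)             # linear predictor: beta0 + beta1 * group
fit = sm.OLS(y, X).fit()

print(t_stat, p_t)                     # t statistic and p-value from the t-test
print(fit.tvalues[1], fit.pvalues[1])  # identical (up to sign) from the regression
```

The slope coefficient is simply the difference of the two group means, which is one concrete sense in which the simple method is a special case of the regression model.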
This graduate textbook covers topics in statistical theory essential for graduate students preparing for work on a Ph.D. degree in statistics. The first chapter provides a quick overview of concepts and results in measure-theoretic probability theory that are useful in statistics. The second chapter introduces some fundamental concepts in statistical decision theory and inference. Chapters 3-7 contain detailed studies on some important topics: unbiased estimation, parametric estimation, nonparametric estimation, hypothesis testing, and confidence sets. A large number of exercises in each chapter provide not only practice problems for students, but also many additional results. In addition to the classical results that are typically covered in a textbook of a similar level, this book introduces some topics in modern statistical theory that have been developed in recent years, such as Markov chain Monte Carlo, quasi-likelihoods, empirical likelihoods, statistical functionals, generalized estimation equations, the jackknife, and the bootstrap. Jun Shao is Professor of Statistics at the University of Wisconsin, Madison. Also available: Jun Shao and Dongsheng Tu, The Jackknife and Bootstrap, Springer-Verlag New York, Inc., 1995, Cloth, 536 pp., 0-387-94515-6.
Selected papers submitted by participants of the international conference "Stochastic Analysis and Applied Probability 2010" ( www.saap2010.org ) make up the basis of this volume. SAAP 2010 was held in Tunisia from 7 to 9 October 2010 and was organized by the "Applied Mathematics & Mathematical Physics" research unit of the preparatory institute to the military academies of Sousse (Tunisia), chaired by Mounir Zili. The papers cover theoretical, numerical and applied aspects of stochastic processes and stochastic differential equations. The study of such topics is motivated in part by the need to model, understand, forecast and control the behavior of many natural phenomena that evolve in time in a random way. Such phenomena appear in the fields of finance, telecommunications, economics, biology, geology, demography, physics, chemistry, signal processing and modern control theory, to mention just a few. As this book emphasizes the importance of numerical and theoretical studies of stochastic differential equations and stochastic processes, it will be useful for a wide spectrum of researchers in applied probability, stochastic numerical and theoretical analysis and statistics, as well as for graduate students. To make it more complete and accessible for graduate students, practitioners and researchers, the editors Mounir Zili and Daria Filatova have included a survey dedicated to the basic concepts of numerical analysis of stochastic differential equations, written by Henri Schurz.
The Equation of Knowledge: From Bayes' Rule to a Unified Philosophy of Science introduces readers to the Bayesian approach to science: teasing out the link between probability and knowledge. The author strives to make this book accessible to a very broad audience, suitable for professionals, students, and academics, as well as the enthusiastic amateur scientist/mathematician. This book also shows how Bayesianism sheds new light on nearly all areas of knowledge, from philosophy to mathematics, science and engineering, but also law, politics and everyday decision-making. Bayesian thinking is an important topic for research, which has seen dramatic progress in recent years, and has a significant role to play in the understanding and development of AI and Machine Learning, among many other things. This book seeks to act as a tool for proselytising the benefits and limits of Bayesianism to a wider public. Features: presents the Bayesian approach as a unifying scientific method for a wide range of topics; is suitable for a broad audience, including professionals, students, and academics; and provides a more accessible, philosophical introduction to the subject than is offered elsewhere.
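As a brief, hedged illustration of the Bayesian updating the book is built around (my own minimal example with invented numbers, not drawn from the text), here is Bayes' rule applied to a hypothetical diagnostic-test calculation:

```python
# Minimal sketch of Bayes' rule: posterior is proportional to likelihood x prior.
# Illustrative numbers only (a hypothetical test, not an example from the book).

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(H | evidence) from P(H), P(E | H) and P(E | not H)."""
    numerator = p_evidence_given_h * prior
    evidence = numerator + p_evidence_given_not_h * (1.0 - prior)
    return numerator / evidence

# Hypothesis H: "condition present", prior 1%; a test with 95% sensitivity
# and a 5% false-positive rate.
posterior = bayes_update(prior=0.01, p_evidence_given_h=0.95,
                         p_evidence_given_not_h=0.05)
print(round(posterior, 3))  # ~0.161: a positive result raises 1% to about 16%
```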
In contrast to the prevailing tradition in epistemology, the focus in this book is on low-level inferences, i.e., those inferences that we are usually not consciously aware of and that we share with the cat nearby, which infers that the bird she sees picking grains from the dirt is able to fly. Presumably, such inferences are not generated by explicit logical reasoning, but logical methods can be used to describe and analyze such inferences. Part 1 gives a purely system-theoretic explication of belief and inference. Part 2 adds a reliabilist theory of justification for inference, with a qualitative notion of reliability being employed. Part 3 recalls and extends various systems of deductive and nonmonotonic logic and thereby explains the semantics of absolute and high reliability. In Part 4 it is proven that qualitative neural networks are able to draw justified deductive and nonmonotonic inferences on the basis of distributed representations. This is derived from a soundness/completeness theorem with regard to cognitive semantics of nonmonotonic reasoning. The appendix extends the theory both logically and ontologically, and relates it to A. Goldman's reliability account of justified belief.
An incomparably useful examination of statistical methods for comparison. The nature of doing science, be it natural or social, inevitably calls for comparison. Statistical methods are at the heart of such comparison, for they not only help us gain understanding of the world around us but often define how our research is to be carried out. The need to compare between groups is best exemplified by experiments, which have clearly defined statistical methods. However, true experiments are not always possible. What complicates the matter more is a great deal of diversity in factors that are not independent of the outcome. Statistical Group Comparison brings together a broad range of statistical methods for comparison developed over recent years. The book covers a wide spectrum of topics from the simplest comparison of two means or rates to more recently developed statistics including double generalized linear models and Bayesian as well as hierarchical methods. Coverage includes:
Examples are drawn from the social, political, economic, and biomedical sciences; many can be implemented using widely available software. Because of the range and the generality of the statistical methods covered, researchers across many disciplines–beyond the social, political, economic, and biomedical sciences–will find the book a convenient reference for many a research situation where comparisons may come naturally.
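As a hedged illustration of the simplest comparison mentioned above, two rates compared via a 2x2 table (my own sketch with invented counts, not an example from the book), such a test might look like this:

```python
# Minimal sketch: comparing two rates with a 2x2 table (illustrative counts only;
# assumes SciPy is available).
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts: events / non-events in two groups of 100.
table = np.array([[30, 70],    # group A: 30 events out of 100
                  [45, 55]])   # group B: 45 events out of 100

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")  # tests equality of the two rates
```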
This thesis explores advanced Bayesian statistical methods for extracting key information for cosmological model selection, parameter inference and forecasting from astrophysical observations. Bayesian model selection provides a measure of how good models in a set are relative to each other - but what if the best model is missing and not included in the set? Bayesian Doubt is an approach which addresses this problem and seeks to deliver an absolute rather than a relative measure of how good a model is. Type Ia supernovae were the first astrophysical observations to indicate the late-time acceleration of the Universe - this work presents a detailed Bayesian hierarchical model to infer the cosmological parameters (in particular dark energy) from observations of these supernovae.
The seminar on Stochastic Analysis and Mathematical Physics of the Catholic University of Chile, started in Santiago in 1984, has been followed and enlarged since 1995 by a series of international workshops aimed at promoting a wide-spectrum dialogue between experts in the fields of classical and quantum stochastic analysis, mathematical physics, and physics. This volume collects most of the contributions to the Fourth International Workshop on Stochastic Analysis and Mathematical Physics (whose Spanish abbreviation is "ANESTOC"; in English, "STAMP"), held in Santiago, Chile, from January 5 to 11, 2000. The workshop style stimulated a vivid exchange of ideas which finally led to a number of written contributions which I am glad to introduce here. However, we are currently subjected to a sort of invasion of proceedings books, and we do not want to increase our own shelves with a new one of the like. On the other hand, the editors of conference proceedings have to use different exhausting and compulsive strategies to persuade authors to write and provide texts in time, a task which terrifies us. As a result, this volume is aimed at smoothly starting a new kind of publication. What we would like to have is a collection of books organized like our seminar.
This book collects lectures given by the plenary speakers at the 10th International ISAAC Congress, held in Macau, China in 2015. The contributions, authored by eminent specialists, present some of the most exciting recent developments in mathematical analysis, probability theory, and related applications. Topics include: partial differential equations in mathematical physics, Fourier analysis, probability and Brownian motion, numerical analysis, and reproducing kernels. The volume also presents a lecture on the visual exploration of complex functions using the domain coloring technique. Thanks to the accessible style used, readers only need a basic command of calculus.
Considering Poisson random measures as the driving sources for stochastic (partial) differential equations allows us to incorporate jumps and to model sudden, unexpected phenomena. By using such equations the present book introduces a new method for modeling the states of complex systems perturbed by random sources over time, such as interest rates in financial markets or temperature distributions in a specific region. It studies properties of the solutions of the stochastic equations, observing the long-term behavior and the sensitivity of the solutions to changes in the initial data. The authors consider an integration theory of measurable and adapted processes in appropriate Banach spaces as well as the non-Gaussian case, whereas most of the literature only focuses on predictable settings in Hilbert spaces. The book is intended for graduate students and researchers in stochastic (partial) differential equations, mathematical finance and non-linear filtering and assumes a knowledge of the required integration theory, existence and uniqueness results and stability theory. The results will be of particular interest to natural scientists and the finance community. Readers should ideally be familiar with stochastic processes and probability theory in general, as well as functional analysis and in particular the theory of operator semigroups.
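As a hedged, much-simplified illustration of the jump-driven modelling idea described above (a scalar compound Poisson path of my own, not the Banach-space SPDE setting the book actually treats), sudden random shocks over time might be simulated like this:

```python
# Minimal sketch: a scalar compound Poisson process as a toy jump-noise source.
# Parameters are illustrative; the book works with Poisson random measures
# driving stochastic (partial) differential equations in Banach spaces,
# which this sketch does not attempt to reproduce.
import numpy as np

rng = np.random.default_rng(1)
T, rate = 1.0, 10.0                      # time horizon and jump intensity

n_jumps = rng.poisson(rate * T)          # number of jumps on [0, T]
jump_times = np.sort(rng.uniform(0.0, T, size=n_jumps))
jump_sizes = rng.normal(0.0, 0.5, size=n_jumps)

def X(t):
    """Value of the compound Poisson process at time t (piecewise constant)."""
    return jump_sizes[jump_times <= t].sum()

print(n_jumps, X(0.5), X(1.0))           # a path that moves only by jumps
```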
This edited volume addresses the importance of mathematics for industry and society by presenting highlights from contract research at the Department of Applied Mathematics at SINTEF, the largest independent research organization in Scandinavia. Examples range from computer-aided geometric design, via general purpose computing on graphics cards, to reservoir simulation for enhanced oil recovery. Contributions are written in a tutorial style.
The most accessible introduction to the theory and practice of multivariate analysis. Multivariate Statistical Inference and Applications is a user-friendly introduction to basic multivariate analysis theory and practice for statistics majors as well as nonmajors with little or no background in theoretical statistics. Among the many special features of this extremely accessible first text on multivariate analysis are:
These same features also make Multivariate Statistical Inference and Applications an excellent professional resource for scientists and clinicians who need to acquaint themselves with multivariate techniques. It can be used as a stand-alone introduction or in concert with its more methods-oriented sibling volume, the critically acclaimed Methods of Multivariate Analysis.
The book is meant to serve two purposes. The first and more obvious one is to present state-of-the-art results in algebraic research into residuated structures related to substructural logics. The second, less obvious but equally important, is to provide a reasonably gentle introduction to algebraic logic. At the beginning, the second objective is predominant. Thus, in the first few chapters the reader will find a primer of universal algebra for logicians, a crash course in nonclassical logics for algebraists, an introduction to residuated structures, an outline of Gentzen-style calculi as well as some titbits of proof theory - the celebrated Hauptsatz, or cut elimination theorem, among them. These lead naturally to a discussion of interconnections between logic and algebra, where we try to demonstrate how they form two sides of the same coin. We envisage that the initial chapters could be used as a textbook for a graduate course, perhaps entitled Algebra and Substructural Logics.
The 2006 INFORMS Expository Writing Award-winning and best-selling author Sheldon Ross (University of Southern California) teams up with Erol Peköz (Boston University) to bring you this textbook for undergraduate and graduate students in statistics, mathematics, engineering, finance, and actuarial science. This is a guided tour designed to give familiarity with advanced topics in probability without having to wade through the exhaustive coverage of the classic advanced probability theory books. Topics include measure theory, limit theorems, bounding probabilities and expectations, coupling and Stein's method, martingales, Markov chains, renewal theory, and Brownian motion. No other text covers all these advanced topics rigorously but at such an accessible level; all you need is calculus and material from a first undergraduate course in probability.
The Model-Free Prediction Principle expounded upon in this monograph is based on the simple notion of transforming a complex dataset to one that is easier to work with, e.g., i.i.d. or Gaussian. As such, it restores the emphasis on observable quantities, i.e., current and future data, as opposed to unobservable model parameters and estimates thereof, and yields optimal predictors in diverse settings such as regression and time series. Furthermore, the Model-Free Bootstrap takes us beyond point prediction in order to construct frequentist prediction intervals without resort to unrealistic assumptions such as normality. Prediction has been traditionally approached via a model-based paradigm, i.e., (a) fit a model to the data at hand, and (b) use the fitted model to extrapolate/predict future data. Due to both mathematical and computational constraints, 20th century statistical practice focused mostly on parametric models. Fortunately, with the advent of widely accessible powerful computing in the late 1970s, computer-intensive methods such as the bootstrap and cross-validation freed practitioners from the limitations of parametric models, and paved the way towards the `big data' era of the 21st century. Nonetheless, there is a further step one may take, i.e., going beyond even nonparametric models; this is where the Model-Free Prediction Principle is useful. Interestingly, being able to predict a response variable Y associated with a regressor variable X taking on any possible value seems to inadvertently also achieve the main goal of modeling, i.e., trying to describe how Y depends on X. Hence, as prediction can be treated as a by-product of model-fitting, key estimation problems can be addressed as a by-product of being able to perform prediction. In other words, a practitioner can use Model-Free Prediction ideas in order to additionally obtain point estimates and confidence intervals for relevant parameters leading to an alternative, transformation-based approach to statistical inference.
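For contrast with the model-free approach described above, here is a hedged sketch of the conventional model-based route to a prediction interval, i.e. steps (a) and (b) in the paragraph, using a residual bootstrap. This is my own illustrative code on synthetic data, not the book's Model-Free procedure:

```python
# Minimal sketch of the model-based paradigm: (a) fit a model, (b) predict,
# with a residual bootstrap for a rough prediction interval. This is NOT the
# Model-Free Prediction Principle; it is the conventional baseline it departs from.
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, size=100)
y = 1.5 * x + rng.normal(0, 1.0, size=100)   # synthetic data for illustration

# (a) fit a simple linear model
slope, intercept = np.polyfit(x, y, 1)       # polyfit returns [slope, intercept]
residuals = y - (intercept + slope * x)

# (b) predict at a new point and bootstrap the residuals around the fit
# (a crude interval: it ignores the uncertainty of the fitted coefficients).
x_new = 7.0
point_pred = intercept + slope * x_new
boot = point_pred + rng.choice(residuals, size=5000, replace=True)
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"prediction ~ {point_pred:.2f}, 95% interval ({lo:.2f}, {hi:.2f})")
```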
Supervision, condition monitoring, fault detection, fault diagnosis and fault management play an increasing role for technical processes and vehicles in order to improve reliability, availability, maintenance and lifetime. For safety-related processes, fault-tolerant systems with redundancy are required in order to reach comprehensive system integrity. This book is a sequel to Fault-Diagnosis Systems, published in 2006, where the basic methods were described. After a short introduction to fault-detection and fault-diagnosis methods, the book shows how these methods can be applied to a selection of 20 real technical components and processes, such as electrical drives (DC, AC), electrical actuators, fluidic actuators (hydraulic, pneumatic), centrifugal and reciprocating pumps, pipelines (leak detection), industrial robots, machine tools (main and feed drive, drilling, milling, grinding), and heat exchangers. Fault-tolerant systems that have been realized for electrical drives, actuators and sensors are also presented. The book describes why and how the various signal-model-based and process-model-based methods were applied and which experimental results could be achieved. In several cases a combination of different methods was most successful. The book is intended for graduate students of electrical, mechanical and chemical engineering and computer science, as well as for engineers.
This book is intended to provide a text on statistical methods for detecting clusters and/or clustering of health events that is of interest to final-year undergraduate and graduate-level statistics, biostatistics, epidemiology, and geography students, but will also be of relevance to public health practitioners, statisticians, biostatisticians, epidemiologists, medical geographers, human geographers, environmental scientists, and ecologists. Prerequisites are introductory biostatistics and epidemiology courses. With increasing public health concerns about environmental risks, the need for sophisticated methods for analyzing spatial health events is immediate. Furthermore, the research area of statistical tests for disease clustering now attracts a wide audience due to the perceived need to implement wide-ranging monitoring systems to detect possible health-related bioterrorism activity. With this background and the development of geographical information systems (GIS), the analysis of disease clustering of health events has seen considerable development over the last decade. Therefore, several excellent books on spatial epidemiology and statistics have recently been published. However, it seems to me that there is no other book solely focusing on statistical methods for disease clustering. I hope that readers will find this book useful and interesting as an introduction to the subject.
This book contains the lectures given at the II Conference on Dynamics and Randomness held at the Centro de Modelamiento Matematico of the Universidad de Chile, from December 9th to 13th, 2002. This meeting brought together mathematicians, theoretical physicists, theoretical computer scientists, and graduate students interested in fields related to probability theory, ergodic theory, symbolic and topological dynamics. We would like to express our gratitude to all the participants of the conference and to the people who contributed to its organization. In particular, to Pierre Collet, Bernhard Rost and Karl Petersen for their scientific advice. We want to thank warmly the authors of each chapter for their stimulating lectures and for their manuscripts devoted to a variety of appealing subjects in probability and dynamics: to Jean Bertoin for his course on Some aspects of random fragmentation in continuous time; to Anton Bovier for his course on Metastability and ageing in stochastic dynamics; to Steve Lalley for his course on Algebraic systems of generating functions and return probabilities for random walks; to Elon Lindenstrauss for his course on Recurrent measures and measure rigidity; to Sylvie Meleard for her course on Stochastic particle approximations for two-dimensional Navier-Stokes equations; and to Anatoly Vershik for his course on Random and universal metric spaces.
One of the main aims of this book is to exhibit some fruitful links between renewal theory and regular variation of functions. Applications of renewal processes play a key role in actuarial and financial mathematics as well as in engineering, operations research and other fields of applied mathematics. On the other hand, regular variation of functions is a property that features prominently in many fields of mathematics. The structure of the book reflects the historical development of the authors' research work and approach - first some applications are discussed, after which a basic theory is created, and finally further applications are provided. The authors present a generalized and unified approach to the asymptotic behavior of renewal processes, involving cases of dependent inter-arrival times. This method works for other important functionals as well, such as first and last exit times or sojourn times (also under dependencies), and it can be used to solve several other problems. For example, various applications in function analysis concerning Abelian and Tauberian theorems can be studied, as well as applications to the asymptotic behavior of solutions of stochastic differential equations. The classes of functions that are investigated and used in a probabilistic context extend the well-known Karamata theory of regularly varying functions and thus are also of interest in the theory of functions. The book provides a rigorous treatment of the subject and may serve as an introduction to the field. It is aimed at researchers and students working in probability, the theory of stochastic processes, operations research, mathematical statistics, the theory of functions, analytic number theory and complex analysis, as well as economists with a mathematical background. Readers should have completed introductory courses in analysis and probability theory.
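As a hedged, very simplified illustration of the central object discussed above (my own sketch for i.i.d. inter-arrival times; the book treats far more general, possibly dependent, settings), the elementary renewal theorem N(t)/t -> 1/E[X] can be checked by simulation:

```python
# Minimal sketch: the elementary renewal theorem illustrated by simulation
# for i.i.d. exponential inter-arrival times (illustrative only).
import numpy as np

rng = np.random.default_rng(3)
mean_interarrival = 2.0
t_horizon = 10_000.0

arrivals = np.cumsum(rng.exponential(mean_interarrival, size=20_000))
N_t = np.searchsorted(arrivals, t_horizon)       # number of renewals up to t

print(N_t / t_horizon, 1.0 / mean_interarrival)  # both close to 0.5
```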
The Handbook of Statistics is a series of self-contained reference books. Each volume is devoted to a particular topic in statistics, and every chapter is written by prominent workers in the area to which the volume is devoted. The series is addressed to the entire community of statisticians and scientists in various disciplines who use statistical methodology in their work. At the same time, special emphasis is placed on applications-oriented techniques, with the applied statistician in mind as the primary audience. This volume presents a state-of-the-art exposition of topics in the field of industrial statistics. It serves as an invaluable reference for researchers in industrial statistics and industrial engineering and as an up-to-date source of information for practicing statisticians and industrial engineers. A variety of topics in the areas of industrial process monitoring, industrial experimentation, and industrial modelling and data analysis are covered, authored by leading researchers or practitioners in each specialized topic. Targeting researchers in academia as well as practitioners and consultants in industry, the book provides comprehensive accounts of the relevant topics. In addition, wherever applicable, ample data-analytic illustrations are provided with the help of real-world data.
This textbook has been developed from the lecture notes for a one-semester course on stochastic modelling. It reviews the basics of probability theory and then covers the following topics: Markov chains, Markov decision processes, jump Markov processes, elements of queueing theory, basic renewal theory, elements of time series and simulation. Rigorous proofs are often replaced with sketches of arguments -- with indications as to why a particular result holds, and also how it is connected with other results -- and illustrated by examples. Wherever possible, the book includes references to more specialised texts containing both proofs and more advanced material related to the topics covered.
Highly praised for its exceptional clarity, conversational style and useful examples, Introductory Business Statistics, 7e, International Edition was written specifically for you. This proven, popular text cuts through the jargon to help you understand fundamental statistical concepts and why they are important to you, your world, and your career. The text's outstanding illustrations, friendly language, non-technical terminology, and current, real-world examples will capture your interest and prepare you for success right from the start.
The book is an introduction to modern probability theory written by one of the famous experts in this area. Readers will learn about the basic concepts of probability and its applications, preparing them for more advanced and specialized works.
This volume presents a collection of papers covering applications from a wide range of systems with infinitely many degrees of freedom studied using techniques from stochastic and infinite dimensional analysis, e.g. Feynman path integrals, the statistical mechanics of polymer chains, complex networks, and quantum field theory. Systems of infinitely many degrees of freedom create their particular mathematical challenges which have been addressed by different mathematical theories, namely in the theories of stochastic processes, Malliavin calculus, and especially white noise analysis. These proceedings are inspired by a conference held on the occasion of Prof. Ludwig Streit's 75th birthday and celebrate his pioneering and ongoing work in these fields.
The field of sensory science, the perception science of the food industry, increasingly requires a working knowledge of statistics for the evaluation of data. However, most sensory scientists are not also expert statisticians. This highly readable book presents complex statistical tools such as ANOVA in a way that is easily understood by the practising sensory scientist. In Analysis of Variance for Sensory Data, written jointly by statisticians and food scientists, the reader is taken by the hand and guided through tests such as ANOVA. Using real examples from the food industry, practical implications are stressed rather than the theoretical background. As a result, the reader will be able to apply advanced ANOVA techniques to a variety of problems and learn how to interpret the results. The book is intended as a workbook for all students of sensory analysis who would gain from a knowledge of statistical techniques.
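As a hedged illustration of the kind of analysis the book teaches (my own minimal sketch with randomly generated ratings, not an example from the book), a one-way ANOVA comparing three products on a sensory attribute might look like this:

```python
# Minimal sketch: one-way ANOVA on synthetic panel ratings for three products.
# The scores are randomly generated purely for illustration.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(4)
product_a = rng.normal(6.0, 1.0, size=12)   # panel ratings for product A
product_b = rng.normal(6.5, 1.0, size=12)   # product B
product_c = rng.normal(5.5, 1.0, size=12)   # product C

f_stat, p_value = f_oneway(product_a, product_b, product_c)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")  # tests equality of the three means
```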