This relevant and timely thesis presents the pioneering use of risk-based assessment tools to analyse the interaction between electrical and mechanical systems in mixed AC/DC power networks at subsynchronous frequencies. It also discusses assessing the effect of uncertainties in the mechanical parameters of a turbine generator on subsynchronous resonance (SSR) in a meshed network with both symmetrical and asymmetrical compensation systems. The research presented has resulted in 12 publications, including three papers in a top international journal (IEEE Transactions on Power Systems) and nine international conference publications, two of them award-winning.
Mathematically, natural disasters of all types are characterized by heavy tailed distributions. The analysis of such distributions with common methods, such as averages and dispersions, can therefore lead to erroneous conclusions. The statistical methods described in this book avoid such pitfalls. Seismic disasters are studied, primarily thanks to the availability of an ample statistical database. New approaches are presented to seismic risk estimation and forecasting the damage caused by earthquakes, ranging from typical, moderate events to very rare, extreme disasters. Analysis of these latter events is based on the limit theorems of probability and the duality of the generalized Pareto distribution and generalized extreme value distribution. It is shown that the parameter most widely used to estimate seismic risk - Mmax, the maximum possible earthquake value - is potentially non-robust. Robust analogues of this parameter are suggested and calculated for some seismic catalogues. Trends in the costs inferred by damage from natural disasters as related to changing social and economic situations are examined for different regions. The results obtained argue for sustainable development, whereas entirely different, incorrect conclusions can be drawn if the specific properties of the heavy-tailed distribution and change in completeness of data on natural hazards are neglected. This pioneering work is directed at risk assessment specialists in general, seismologists, administrators and all those interested in natural disasters and their impact on society.
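The book's central warning, that averages and dispersions mislead for heavy-tailed data, can be seen in a minimal numerical sketch (synthetic Pareto data, not taken from the book): across repeated samples, the sample mean swings wildly while a robust summary such as the median stays stable.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.1  # tail index just above 1: the mean exists, but the variance is infinite
samples = rng.pareto(alpha, size=(200, 10_000)) + 1.0  # 200 replications of 10,000 draws

means = samples.mean(axis=1)      # one sample mean per replication
medians = np.median(samples, axis=1)  # one sample median per replication

# Across replications the mean fluctuates far more than the median,
# so "average loss" is a fragile summary for disaster-size data.
print(means.std() > 10 * medians.std())
```

This is exactly the mechanism behind the book's argument that robust analogues of non-robust parameters are needed for heavy-tailed hazards.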
This book presents the refereed proceedings of the Twelfth International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at Stanford University (California) in August 2016. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising in particular, in finance, statistics, computer graphics and the solution of PDEs.
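The Monte Carlo vs. quasi-Monte Carlo comparison at the heart of these proceedings can be sketched in a few lines (toy integrand and parameters are my own assumptions): both methods estimate the same d-dimensional integral, one from pseudo-random points and one from a scrambled Sobol' point set.

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(42)
d, n = 5, 2**12  # dimension and sample size (a power of 2, as Sobol' sets prefer)

def f(x):
    # Smooth test integrand whose exact integral over [0, 1]^d equals 1.
    return np.prod(1.0 + (x - 0.5) / 3.0, axis=1)

mc_est = f(rng.random((n, d))).mean()                 # plain Monte Carlo
sobol = qmc.Sobol(d, scramble=True, seed=42).random(n)
qmc_est = f(sobol).mean()                             # quasi-Monte Carlo

# Both estimates land close to the true value 1; QMC typically converges faster.
print(abs(mc_est - 1.0) < 0.05, abs(qmc_est - 1.0) < 0.05)
```

For smooth integrands like this one, the QMC error typically shrinks nearly as O(1/n) rather than the O(1/sqrt(n)) of plain Monte Carlo, which is why these methods matter for the high-dimensional problems the volume targets.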
An outstanding practical guide to the most common chemometric methods in use today Chemometrics explains how to apply the most widely used pattern recognition and multivariate calibration techniques to solve data analysis problems. This practical guide describes all key methods in terms of processes and applications in order to help the reader easily identify the best technique for a given situation. Drawing on years of industrial experience with chemometric tools, the authors share their six basic steps, or "habits," for achieving reliable chemometric results, and cover key areas such as:
Complete with helpful chapter-end summaries, technical references, and more, this book is an invaluable hands-on resource for analytical chemists and laboratory scientists who use chemometrics in their work.
This is a book about regression analysis, that is, the situation in statistics where the distribution of a response (or outcome) variable is related to explanatory variables (or covariates). This is an extremely common situation in the application of statistical methods in many fields, and linear regression, logistic regression, and Cox proportional hazards regression are frequently used for quantitative, binary, and survival time outcome variables, respectively. Several books on these topics have appeared and for that reason one may well ask why we embark on writing still another book on regression. We have two main reasons for doing this: 1. First, we want to highlight similarities among linear, logistic, proportional hazards, and other regression models that include a linear predictor. These models are often treated entirely separately in texts in spite of the fact that all operations on the models dealing with the linear predictor are precisely the same, including handling of categorical and quantitative covariates, testing for linearity and studying interactions. 2. Second, we want to emphasize that, for any type of outcome variable, multiple regression models are composed of simple building blocks that are added together in the linear predictor: that is, t-tests, one-way analyses of variance and simple linear regressions for quantitative outcomes, 2x2, 2x(k+1) tables and simple logistic regressions for binary outcomes, and 2- and (k+1)-sample logrank tests and simple Cox regressions for survival data. This has two consequences. All these simple and well-known methods can be considered as special cases of the regression models. On the other hand, the effect of a single explanatory variable in a multiple regression model can be interpreted in a way similar to that obtained in the simple analysis, however, now valid only for the other explanatory variables in the model "held fixed."
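The shared-linear-predictor idea can be illustrated with a minimal NumPy-only sketch on simulated data (the data and coefficients are assumptions for illustration, not from the book): the same design matrix and linear predictor feed both a linear regression (closed form) and a logistic regression (a few Newton-Raphson steps); only the likelihood differs.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x1 = rng.normal(size=n)                        # quantitative covariate
x2 = rng.integers(0, 2, size=n).astype(float)  # categorical covariate (two levels)
X = np.column_stack([np.ones(n), x1, x2])      # the same design matrix for both models
beta_true = np.array([0.5, 1.0, -0.8])
eta = X @ beta_true                            # the shared linear predictor

# Quantitative outcome -> linear regression, solved in closed form.
y_cont = eta + rng.normal(size=n)
beta_ols = np.linalg.lstsq(X, y_cont, rcond=None)[0]

# Binary outcome -> logistic regression, fitted by Newton-Raphson.
y_bin = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta))).astype(float)
beta_logit = np.zeros(3)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-(X @ beta_logit)))  # fitted probabilities
    grad = X.T @ (y_bin - mu)                     # score
    hess = X.T @ (X * (mu * (1 - mu))[:, None])   # observed information
    beta_logit += np.linalg.solve(hess, grad)

# Both fits recover the same coefficients on the same covariates in X.
print(np.allclose(beta_ols, beta_true, atol=0.3),
      np.allclose(beta_logit, beta_true, atol=0.6))
```

Handling of categorical covariates, linearity checks, and interactions all happen inside X, which is exactly why the book can treat these models uniformly.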
This graduate textbook covers topics in statistical theory essential for graduate students preparing for work on a Ph.D. degree in statistics. The first chapter provides a quick overview of concepts and results in measure-theoretic probability theory that are useful in statistics. The second chapter introduces some fundamental concepts in statistical decision theory and inference. Chapters 3-7 contain detailed studies on some important topics: unbiased estimation, parametric estimation, nonparametric estimation, hypothesis testing, and confidence sets. A large number of exercises in each chapter provide not only practice problems for students, but also many additional results. In addition to the classical results that are typically covered in a textbook of a similar level, this book introduces some topics in modern statistical theory that have been developed in recent years, such as Markov chain Monte Carlo, quasi-likelihoods, empirical likelihoods, statistical functionals, generalized estimating equations, the jackknife, and the bootstrap. Jun Shao is Professor of Statistics at the University of Wisconsin, Madison. Also available: Jun Shao and Dongsheng Tu, The Jackknife and Bootstrap, Springer-Verlag New York, Inc., 1995, Cloth, 536 pp., 0-387-94515-6.
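Among the modern topics listed, the bootstrap is easy to sketch in a few lines (the sample below is simulated for illustration, not an example from the book): resample the data with replacement and use the spread of the recomputed statistic as a standard-error estimate.

```python
import numpy as np

rng = np.random.default_rng(7)
data = rng.exponential(scale=2.0, size=100)  # assumed sample; statistic: the median

# Nonparametric bootstrap: recompute the median on 2000 resamples drawn
# with replacement from the observed data.
boot = np.array([
    np.median(rng.choice(data, size=data.size, replace=True))
    for _ in range(2000)
])
se = boot.std(ddof=1)  # bootstrap estimate of the median's standard error
print(0.1 < se < 0.4)
```

The same recipe applies to almost any statistic, which is what makes the bootstrap (and its older cousin, the jackknife) such a general-purpose tool.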
This edited volume addresses the importance of mathematics for industry and society by presenting highlights from contract research at the Department of Applied Mathematics at SINTEF, the largest independent research organization in Scandinavia. Examples range from computer-aided geometric design, via general purpose computing on graphics cards, to reservoir simulation for enhanced oil recovery. Contributions are written in a tutorial style.
In contrast to the prevailing tradition in epistemology, the focus in this book is on low-level inferences, i.e., those inferences that we are usually not consciously aware of and that we share with the cat nearby, which infers that the bird it sees picking grains from the dirt is able to fly. Presumably, such inferences are not generated by explicit logical reasoning, but logical methods can be used to describe and analyze them. Part 1 gives a purely system-theoretic explication of belief and inference. Part 2 adds a reliabilist theory of justification for inference, employing a qualitative notion of reliability. Part 3 recalls and extends various systems of deductive and nonmonotonic logic and thereby explains the semantics of absolute and high reliability. In Part 4 it is proven that qualitative neural networks are able to draw justified deductive and nonmonotonic inferences on the basis of distributed representations. This is derived from a soundness/completeness theorem with regard to cognitive semantics of nonmonotonic reasoning. The appendix extends the theory both logically and ontologically, and relates it to A. Goldman's reliability account of justified belief.
The Equation of Knowledge: From Bayes' Rule to a Unified Philosophy of Science introduces readers to the Bayesian approach to science: teasing out the link between probability and knowledge. The author strives to make this book accessible to a very broad audience, suitable for professionals, students, and academics, as well as the enthusiastic amateur scientist/mathematician. This book also shows how Bayesianism sheds new light on nearly all areas of knowledge, from philosophy to mathematics, science and engineering, but also law, politics and everyday decision-making. Bayesian thinking is an important topic for research, which has seen dramatic progress in recent years, and has a significant role to play in the understanding and development of AI and Machine Learning, among many other things. This book seeks to act as a tool for proselytising the benefits and limits of Bayesianism to a wider public. Features: presents the Bayesian approach as a unifying scientific method for a wide range of topics; is suitable for a broad audience, including professionals, students, and academics; and provides a more accessible, philosophical introduction to the subject than is offered elsewhere.
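The "equation of knowledge" itself, Bayes' rule, fits in a few lines. A small worked instance (the numbers below are illustrative assumptions, not from the book) is the classic rare-disease test: even a fairly accurate test yields a modest posterior when the prior is small.

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E), with assumed numbers.
prior = 0.01  # P(disease): the hypothesis is rare a priori
sens = 0.95   # P(positive | disease): test sensitivity
fpr = 0.05    # P(positive | no disease): false-positive rate

evidence = sens * prior + fpr * (1 - prior)  # P(positive), by total probability
posterior = sens * prior / evidence          # P(disease | positive)
print(round(posterior, 3))  # → 0.161
```

Despite a 95%-sensitive test, a positive result only raises the probability of disease to about 16%, the kind of counterintuitive update the book uses to motivate Bayesian thinking in everyday decision-making.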
This thesis explores advanced Bayesian statistical methods for extracting key information for cosmological model selection, parameter inference and forecasting from astrophysical observations. Bayesian model selection provides a measure of how good models in a set are relative to each other - but what if the best model is missing and not included in the set? Bayesian Doubt is an approach which addresses this problem and seeks to deliver an absolute rather than a relative measure of how good a model is. Supernovae type Ia were the first astrophysical observations to indicate the late time acceleration of the Universe - this work presents a detailed Bayesian Hierarchical Model to infer the cosmological parameters (in particular dark energy) from observations of these supernovae type Ia.
The seminar on Stochastic Analysis and Mathematical Physics of the Catholic University of Chile, started in Santiago in 1984, has been followed and enlarged since 1995 by a series of international workshops aimed at promoting a wide-spectrum dialogue between experts in the fields of classical and quantum stochastic analysis, mathematical physics, and physics. This volume collects most of the contributions to the Fourth International Workshop on Stochastic Analysis and Mathematical Physics (whose Spanish abbreviation is "ANESTOC"; in English, "STAMP"), held in Santiago, Chile, from January 5 to 11, 2000. The workshop style stimulated a vivid exchange of ideas which finally led to a number of written contributions which I am glad to introduce here. However, we are currently subjected to a sort of invasion of proceedings books, and we do not want to add to our own shelves yet another of the like. On the other hand, the editors of conference proceedings have to use exhausting and compulsive strategies to persuade authors to write and provide texts in time, a task which terrifies us. As a result, this volume is aimed at smoothly starting a new kind of publication. What we would like to have is a collection of books organized like our seminar.
This book collects lectures given by the plenary speakers at the 10th International ISAAC Congress, held in Macau, China in 2015. The contributions, authored by eminent specialists, present some of the most exciting recent developments in mathematical analysis, probability theory, and related applications. Topics include: partial differential equations in mathematical physics, Fourier analysis, probability and Brownian motion, numerical analysis, and reproducing kernels. The volume also presents a lecture on the visual exploration of complex functions using the domain coloring technique. Thanks to the accessible style used, readers only need a basic command of calculus.
Considering Poisson random measures as the driving sources for stochastic (partial) differential equations allows us to incorporate jumps and to model sudden, unexpected phenomena. By using such equations the present book introduces a new method for modeling the states of complex systems perturbed by random sources over time, such as interest rates in financial markets or temperature distributions in a specific region. It studies properties of the solutions of the stochastic equations, observing the long-term behavior and the sensitivity of the solutions to changes in the initial data. The authors consider an integration theory of measurable and adapted processes in appropriate Banach spaces as well as the non-Gaussian case, whereas most of the literature only focuses on predictable settings in Hilbert spaces. The book is intended for graduate students and researchers in stochastic (partial) differential equations, mathematical finance and non-linear filtering and assumes a knowledge of the required integration theory, existence and uniqueness results and stability theory. The results will be of particular interest to natural scientists and the finance community. Readers should ideally be familiar with stochastic processes and probability theory in general, as well as functional analysis and in particular the theory of operator semigroups.
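A minimal Euler-type discretization (all parameters assumed, and only a scalar toy version of the Banach-space setting the book actually treats) shows what "driven by a Poisson random measure" means in practice: between jumps the state diffuses, and a compound Poisson source adds sudden shocks.

```python
import numpy as np

rng = np.random.default_rng(3)
T, n = 1.0, 1000
dt = T / n
mu, sigma = 0.05, 0.2   # drift and diffusion coefficients (assumed)
lam, jump_sd = 5.0, 0.1 # jump intensity and jump-size scale (assumed)

x = np.empty(n + 1)
x[0] = 1.0
for k in range(n):
    d_n = rng.poisson(lam * dt)                       # jumps arriving in (t, t+dt]
    jump = rng.normal(0.0, jump_sd, size=d_n).sum()   # compound Poisson increment
    x[k + 1] = (x[k]
                + mu * x[k] * dt                      # drift
                + sigma * x[k] * np.sqrt(dt) * rng.normal()  # Brownian part
                + jump)                               # non-Gaussian jump part

print(np.isfinite(x).all())
```

The path is continuous except at jump times, which is precisely how sudden, unexpected phenomena such as interest-rate shocks enter the models the book studies.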
The most accessible introduction to the theory and practice of multivariate analysis Multivariate Statistical Inference and Applications is a user-friendly introduction to basic multivariate analysis theory and practice for statistics majors as well as nonmajors with little or no background in theoretical statistics. Among the many special features of this extremely accessible first text on multivariate analysis are:
These same features also make Multivariate Statistical Inference and Applications an excellent professional resource for scientists and clinicians who need to acquaint themselves with multivariate techniques. It can be used as a stand-alone introduction or in concert with its more methods-oriented sibling volume, the critically acclaimed Methods of Multivariate Analysis.
An incomparably useful examination of statistical methods for comparison The nature of doing science, be it natural or social, inevitably calls for comparison. Statistical methods are at the heart of such comparison, for they not only help us gain understanding of the world around us but often define how our research is to be carried out. The need to compare between groups is best exemplified by experiments, which have clearly defined statistical methods. However, true experiments are not always possible. What complicates the matter more is a great deal of diversity in factors that are not independent of the outcome. Statistical Group Comparison brings together a broad range of statistical methods for comparison developed over recent years. The book covers a wide spectrum of topics from the simplest comparison of two means or rates to more recently developed statistics including double generalized linear models and Bayesian as well as hierarchical methods. Coverage includes:
Examples are drawn from the social, political, economic, and biomedical sciences; many can be implemented using widely available software. Because of the range and the generality of the statistical methods covered, researchers across many disciplines–beyond the social, political, economic, and biomedical sciences–will find the book a convenient reference for many a research situation where comparisons may come naturally.
When no samples are available to estimate a probability distribution, we have to invite some domain experts to evaluate the belief degree that each event will happen. Perhaps some people think that the belief degree should be modeled by subjective probability or fuzzy set theory. However, it is usually inappropriate because both of them may lead to counterintuitive results in this case. In order to rationally deal with belief degrees, uncertainty theory was founded in 2007 and subsequently studied by many researchers. Nowadays, uncertainty theory has become a branch of axiomatic mathematics for modeling belief degrees. This is an introductory textbook on uncertainty theory, uncertain programming, uncertain statistics, uncertain risk analysis, uncertain reliability analysis, uncertain set, uncertain logic, uncertain inference, uncertain process, uncertain calculus, and uncertain differential equation. This textbook also shows applications of uncertainty theory to scheduling, logistics, networks, data mining, control, and finance.
The book is meant to serve two purposes. The first and more obvious one is to present state-of-the-art results in algebraic research into residuated structures related to substructural logics. The second, less obvious but equally important, is to provide a reasonably gentle introduction to algebraic logic. At the beginning, the second objective is predominant. Thus, in the first few chapters the reader will find a primer of universal algebra for logicians, a crash course in nonclassical logics for algebraists, an introduction to residuated structures, an outline of Gentzen-style calculi as well as some titbits of proof theory - the celebrated Hauptsatz, or cut elimination theorem, among them. These lead naturally to a discussion of interconnections between logic and algebra, where we try to demonstrate how they form two sides of the same coin. We envisage that the initial chapters could be used as a textbook for a graduate course, perhaps entitled Algebra and Substructural Logics.
The 2006 INFORMS Expository Writing Award-winning and best-selling author Sheldon Ross (University of Southern California) teams up with Erol Peköz (Boston University) to bring you this textbook for undergraduate and graduate students in statistics, mathematics, engineering, finance, and actuarial science. It is a guided tour designed to give familiarity with advanced topics in probability without having to wade through the exhaustive coverage of the classic advanced probability theory books. Topics include measure theory, limit theorems, bounding probabilities and expectations, coupling and Stein's method, martingales, Markov chains, renewal theory, and Brownian motion. No other text covers all these advanced topics rigorously but at such an accessible level; all you need is calculus and material from a first undergraduate course in probability.
The Handbook of Statistics is a series of self-contained reference books. Each volume is devoted to a particular topic in statistics, with every chapter written by prominent workers in the area to which the volume is devoted. The series is addressed to the entire community of statisticians and scientists in various disciplines who use statistical methodology in their work. At the same time, special emphasis is placed on applications-oriented techniques, with the applied statistician in mind as the primary audience. This volume presents a state-of-the-art exposition of topics in the field of industrial statistics. It serves as an invaluable reference for researchers in industrial statistics and industrial engineering and an up-to-date source of information for practicing statisticians and industrial engineers. A variety of topics in the areas of industrial process monitoring, industrial experimentation, and industrial modelling and data analysis are covered, each authored by leading researchers or practitioners in the particular specialized topic. Targeting researchers in academia as well as practitioners and consultants in industry, the book provides comprehensive accounts of the relevant topics. In addition, wherever applicable, ample data-analytic illustrations are provided with the help of real-world data.
Supervision, condition monitoring, fault detection, fault diagnosis and fault management play an increasing role for technical processes and vehicles in order to improve reliability, availability, maintenance and lifetime. Safety-related processes require fault-tolerant systems with redundancy in order to reach comprehensive system integrity. This book is a sequel to Fault-Diagnosis Systems, published in 2006, where the basic methods were described. After a short introduction to fault-detection and fault-diagnosis methods, the book shows how these methods can be applied to a selection of 20 real technical components and processes, such as: electrical drives (DC, AC); electrical actuators; fluidic actuators (hydraulic, pneumatic); centrifugal and reciprocating pumps; pipelines (leak detection); industrial robots; machine tools (main and feed drives, drilling, milling, grinding); and heat exchangers. Fault-tolerant systems realized for electrical drives, actuators and sensors are also presented. The book describes why and how the various signal-model-based and process-model-based methods were applied and which experimental results could be achieved. In several cases a combination of different methods proved most successful. The book is dedicated to graduate students of electrical, mechanical and chemical engineering and computer science, and to engineers.
This book draws on empirical evidence to show how data analytical techniques are applied in emerging contexts. Studies spanning manufacturing and service sectors, including healthcare, banking, information technology, power, and education, illustrate the systematic approach followed in applying data analytical techniques and examine how effective these techniques are for decision-making in different contexts. In particular, the applications of regression modeling, financial modeling, multi-group modeling, cluster analysis, and sentiment analysis help readers understand critical business scenarios and arrive at sound solutions to business problems. The individual chapters show how specific data analytic tools and techniques resolve operational issues experienced in manufacturing and service organisations in India and in other developing countries. The book offers a relevant resource for the application and interpretation of statistical data analysis practices relating to emerging issues such as customer experience, marketing capability, quality of manufactured products, strategic orientation, high-performance human resource policy, employee resilience, and financial resources. It will be of interest to a professional audience of practitioners, policy makers, NGOs, managers and employees, as well as to academicians, researchers and students.
This book is intended to provide a text on statistical methods for detecting clusters and/or clustering of health events that is of interest to final year undergraduate and graduate level statistics, biostatistics, epidemiology, and geography students but will also be of relevance to public health practitioners, statisticians, biostatisticians, epidemiologists, medical geographers, human geographers, environmental scientists, and ecologists. Prerequisites are introductory biostatistics and epidemiology courses. With increasing public health concerns about environmental risks, the need for sophisticated methods for analyzing spatial health events is immediate. Furthermore, the research area of statistical tests for disease clustering now attracts a wide audience due to the perceived need to implement wide-ranging monitoring systems to detect possible health-related bioterrorism activity. With this background and the development of the geographical information system (GIS), the analysis of disease clustering of health events has seen considerable development over the last decade. Therefore, several excellent books on spatial epidemiology and statistics have recently been published. However, it seems to me that there is no other book solely focusing on statistical methods for disease clustering. I hope that readers will find this book useful and interesting as an introduction to the subject.
This book contains the lectures given at the II Conference on Dynamics and Randomness held at the Centro de Modelamiento Matematico of the Universidad de Chile, from December 9th to 13th, 2002. This meeting brought together mathematicians, theoretical physicists, theoretical computer scientists, and graduate students interested in fields related to probability theory, ergodic theory, symbolic and topological dynamics. We would like to express our gratitude to all the participants of the conference and to the people who contributed to its organization. In particular, to Pierre Collet, Bernard Rost and Karl Petersen for their scientific advice. We want to thank warmly the authors of each chapter for their stimulating lectures and for their manuscripts devoted to a variety of appealing subjects in probability and dynamics: to Jean Bertoin for his course on Some aspects of random fragmentation in continuous time; to Anton Bovier for his course on Metastability and ageing in stochastic dynamics; to Steve Lalley for his course on Algebraic systems of generating functions and return probabilities for random walks; to Elon Lindenstrauss for his course on Recurrent measures and measure rigidity; to Sylvie Meleard for her course on Stochastic particle approximations for two-dimensional Navier-Stokes equations; and to Anatoly Vershik for his course on Random and universal metric spaces.
One of the main aims of this book is to exhibit some fruitful links between renewal theory and regular variation of functions. Applications of renewal processes play a key role in actuarial and financial mathematics as well as in engineering, operations research and other fields of applied mathematics. On the other hand, regular variation of functions is a property that features prominently in many fields of mathematics. The structure of the book reflects the historical development of the authors' research work and approach - first some applications are discussed, after which a basic theory is created, and finally further applications are provided. The authors present a generalized and unified approach to the asymptotic behavior of renewal processes, involving cases of dependent inter-arrival times. This method works for other important functionals as well, such as first and last exit times or sojourn times (also under dependencies), and it can be used to solve several other problems. For example, various applications in function analysis concerning Abelian and Tauberian theorems can be studied as well as those in studies of the asymptotic behavior of solutions of stochastic differential equations. The classes of functions that are investigated and used in a probabilistic context extend the well-known Karamata theory of regularly varying functions and thus are also of interest in the theory of functions. The book provides a rigorous treatment of the subject and may serve as an introduction to the field. It is aimed at researchers and students working in probability, the theory of stochastic processes, operations research, mathematical statistics, the theory of functions, analytic number theory and complex analysis, as well as economists with a mathematical background. Readers should have completed introductory courses in analysis and probability theory.