One of the main aims of this book is to exhibit some fruitful links between renewal theory and regular variation of functions. Applications of renewal processes play a key role in actuarial and financial mathematics as well as in engineering, operations research and other fields of applied mathematics. On the other hand, regular variation of functions is a property that features prominently in many fields of mathematics. The structure of the book reflects the historical development of the authors' research work and approach - first some applications are discussed, after which a basic theory is created, and finally further applications are provided. The authors present a generalized and unified approach to the asymptotic behavior of renewal processes, involving cases of dependent inter-arrival times. This method works for other important functionals as well, such as first and last exit times or sojourn times (also under dependencies), and it can be used to solve several other problems. For example, various applications in functional analysis concerning Abelian and Tauberian theorems can be studied, as well as those in studies of the asymptotic behavior of solutions of stochastic differential equations. The classes of functions that are investigated and used in a probabilistic context extend the well-known Karamata theory of regularly varying functions and thus are also of interest in the theory of functions. The book provides a rigorous treatment of the subject and may serve as an introduction to the field. It is aimed at researchers and students working in probability, the theory of stochastic processes, operations research, mathematical statistics, the theory of functions, analytic number theory and complex analysis, as well as economists with a mathematical background. Readers should have completed introductory courses in analysis and probability theory.
The Handbook of Statistics is a series of self-contained reference books, each volume devoted to a particular topic in statistics. Every chapter is written by prominent workers in the area to which the volume is devoted. The series is addressed to the entire community of statisticians and scientists in various disciplines who use statistical methodology in their work. At the same time, special emphasis is placed on applications-oriented techniques, with the applied statistician in mind as the primary audience. This volume presents a state-of-the-art exposition of topics in the field of industrial statistics. It serves as an invaluable reference for researchers in industrial statistics/industrial engineering and an up-to-date source of information for practicing statisticians/industrial engineers. A variety of topics in the areas of industrial process monitoring, industrial experimentation, industrial modelling and data analysis are covered, each authored by leading researchers or practitioners in the particular specialized topic. Targeting researchers in academia as well as practitioners and consultants in industry, the book provides comprehensive accounts of the relevant topics. In addition, whenever applicable, ample data-analytic illustrations are provided with the help of real-world data.
This textbook has been developed from the lecture notes for a one-semester course on stochastic modelling. It reviews the basics of probability theory and then covers the following topics: Markov chains, Markov decision processes, jump Markov processes, elements of queueing theory, basic renewal theory, elements of time series and simulation. Rigorous proofs are often replaced with sketches of arguments -- with indications as to why a particular result holds, and also how it is connected with other results -- and illustrated by examples. Wherever possible, the book includes references to more specialised texts containing both proofs and more advanced material related to the topics covered.
Highly praised for its exceptional clarity, conversational style and useful examples, Introductory Business Statistics, 7e, International Edition was written specifically for you. This proven, popular text cuts through the jargon to help you understand fundamental statistical concepts and why they are important to you, your world, and your career. The text's outstanding illustrations, friendly language, non-technical terminology, and current, real-world examples will capture your interest and prepare you for success right from the start.
The book is an introduction to modern probability theory written by one of the famous experts in this area. Readers will learn about the basic concepts of probability and its applications, preparing them for more advanced and specialized works.
This volume presents a collection of papers covering applications from a wide range of systems with infinitely many degrees of freedom studied using techniques from stochastic and infinite dimensional analysis, e.g. Feynman path integrals, the statistical mechanics of polymer chains, complex networks, and quantum field theory. Systems of infinitely many degrees of freedom create their particular mathematical challenges which have been addressed by different mathematical theories, namely in the theories of stochastic processes, Malliavin calculus, and especially white noise analysis. These proceedings are inspired by a conference held on the occasion of Prof. Ludwig Streit's 75th birthday and celebrate his pioneering and ongoing work in these fields.
The field of sensory science, the perception science of the food industry, increasingly requires a working knowledge of statistics for the evaluation of data. However, most sensory scientists are not also expert statisticians. This highly readable book presents complex statistical tools such as ANOVA in a way that is easily understood by the practising sensory scientist. In Analysis of Variance for Sensory Data, written jointly by statisticians and food scientists, the reader is taken by the hand and guided through tests such as ANOVA. Using real examples from the food industry, practical implications are stressed rather than the theoretical background. As a result, the reader will be able to apply advanced ANOVA techniques to a variety of problems and learn how to interpret the results. The book is intended as a workbook for all students of sensory analysis who would gain from a knowledge of statistical techniques.
Abstract In this chapter we provide an overview of probabilistic logic networks (PLN), including our motivations for developing PLN and the guiding principles underlying PLN. We discuss foundational choices we made, introduce PLN knowledge representation, and briefly introduce inference rules and truth-values. We also place PLN in context with other approaches to uncertain inference. 1.1 Motivations This book presents Probabilistic Logic Networks (PLN), a systematic and pragmatic framework for computationally carrying out uncertain reasoning - reasoning about uncertain data, and/or reasoning involving uncertain conclusions. We begin with a few comments about why we believe this is such an interesting and important domain of investigation. First of all, we hold to a philosophical perspective in which "reasoning" - properly understood - plays a central role in cognitive activity. We realize that other perspectives exist; in particular, logical reasoning is sometimes construed as a special kind of cognition that humans carry out only occasionally, as a deviation from their usual (intuitive, emotional, pragmatic, sensorimotor, etc.) modes of thought. However, we consider this alternative view to be valid only according to a very limited definition of "logic." Construed properly, we suggest, logical reasoning may be understood as the basic framework underlying all forms of cognition, including those conventionally thought of as illogical and irrational.
This sequel to volume 19 of the Handbook of Statistics, Stochastic Processes: Modelling and Simulation, is concerned mainly with the theme of reviewing and, in some cases, unifying with new ideas the different lines of research and developments in stochastic processes of applied flavour, in particular with modelling, simulation techniques and numerical methods concerned with stochastic processes. This volume consists of 23 chapters addressing various topics in stochastic processes. These include, among others, those on manufacturing systems, random graphs, reliability, epidemic modelling, self-similar processes, empirical processes, time series models, extreme value theory, applications of Markov chains, modelling with Monte Carlo techniques, and stochastic processes in subjects such as engineering, telecommunications, biology, astronomy and chemistry. The scope of the project involving this volume as well as volume 19 is already clarified in the preface of volume 19. The present volume completes the aim of the project and should serve as an aid to students, teachers, researchers and practitioners interested in applied stochastic processes.
This monograph is intended to be a complete treatment of the metrical theory of the (regular) continued fraction expansion and related representations of real numbers. We have attempted to give the best possible results known so far, with proofs which are the simplest and most direct. The book has had a long gestation period because we first decided to write it in March 1994. This gave us the possibility of essentially improving the initial versions of many parts of it. Even if the two authors are different in style and approach, every effort has been made to hide the differences. Let Ω denote the set of irrationals in I = [0, 1]. Define the (regular) continued fraction transformation T by T(ω) = fractional part of 1/ω, ω ∈ Ω. Write T^n for the nth iterate of T, n ∈ N = {0, 1, ...}, with T^0 = identity map. The positive integers a_n(ω) = a_1(T^{n-1}(ω)), n ∈ N+ = {1, 2, ...}, where a_1(ω) = integer part of 1/ω, ω ∈ Ω, are called the (regular continued fraction) digits of ω. Writing [x_1, ..., x_n] = 1/(x_1 + 1/(x_2 + ... + 1/x_n)) for arbitrary indeterminates x_i, 1 ≤ i ≤ n, we have ω = lim_{n→∞} [a_1(ω), ..., a_n(ω)], ω ∈ Ω, thus explaining the name of T. The above equation will also be written as ω = [a_1(ω), a_2(ω), ...], ω ∈ Ω.
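As an illustrative sketch (not part of the book's text, and with a hypothetical function name), the continued fraction digits a_n(ω) described in this blurb can be computed by iterating the map T, i.e. the Gauss map:

```python
import math

def gauss_map_digits(omega, n):
    """Return the first n regular continued fraction digits of omega in (0, 1),
    obtained by iterating the Gauss map T(w) = 1/w - floor(1/w)."""
    digits = []
    w = omega
    for _ in range(n):
        a = math.floor(1 / w)   # a_1(w) = integer part of 1/w
        digits.append(a)
        w = 1 / w - a           # T(w) = fractional part of 1/w
    return digits

# The golden-ratio conjugate (sqrt(5) - 1)/2 has all digits equal to 1:
print(gauss_map_digits((math.sqrt(5) - 1) / 2, 8))  # -> [1, 1, 1, 1, 1, 1, 1, 1]
```

In floating point the iteration loses accuracy quickly (the map expands errors), so for more than a handful of digits an exact-rational representation would be needed; the sketch only illustrates the definition.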
Econometric theory, as presented in textbooks and the econometric literature generally, is a somewhat disparate collection of findings. Its essential nature is to be a set of demonstrated results that increase over time, each logically based on a specific set of axioms or assumptions, yet at every moment, rather than a finished work, these inevitably form an incomplete body of knowledge. The practice of econometric theory consists of selecting from, applying, and evaluating this literature, so as to test its applicability and range. The creation, development, and use of computer software has led applied economic research into a new age. This book describes the history of econometric computation from 1950 to the present day, based upon an interactive survey involving the collaboration of the many econometricians who have designed and developed this software. It identifies each of the econometric software packages that are made available to and used by economists and econometricians worldwide.
The book will serve as a guide to many actuarial concepts and statistical techniques in multiple decrement models and their application in the calculation of premiums and reserves in life insurance products with riders and in pension and employee benefit plans, as in these schemes the benefit paid on termination of employment depends upon the several causes of termination. Multiple state models are discussed to accommodate insurance products in which the payment of benefits or premiums depends on being in a given state or moving between a given pair of states at a given time, for example, the disability income insurance model. The book also discusses stochastic models for interest rates and the calculation of premiums for some products in this setup. The highlight of the book is its use of the R software, freely available in the public domain, for the computation of the various monetary functions involved in the insurance business. R commands are given for all the computations.
During the past decade, geneticists have cloned scores of Mendelian disease genes and constructed a rough draft of the entire human genome. The unprecedented insights into human disease and evolution offered by mapping, cloning, and sequencing will transform medicine and agriculture. This revolution depends vitally on the contributions of applied mathematicians, statisticians, and computer scientists. Mathematical and Statistical Methods for Genetic Analysis is written to equip students in the mathematical sciences to understand and model the epidemiological and experimental data encountered in genetics research. Mathematical, statistical, and computational principles relevant to this task are developed hand in hand with applications to population genetics, gene mapping, risk prediction, testing of epidemiological hypotheses, molecular evolution, and DNA sequence analysis. Many specialized topics are covered that are currently accessible only in journal articles. This second edition expands the original edition by over 100 pages and includes new material on DNA sequence analysis, diffusion processes, binding domain identification, Bayesian estimation of haplotype frequencies, case-control association studies, the gamete competition model, QTL mapping and factor analysis, the Lander-Green-Kruglyak algorithm of pedigree analysis, and codon and rate variation models in molecular phylogeny. Sprinkled throughout the chapters are many new problems. Kenneth Lange is Professor of Biomathematics and Human Genetics at the UCLA School of Medicine. At various times during his career, he has held appointments at the University of New Hampshire, MIT, Harvard, and the University of Michigan. While at the University of Michigan, he was the Pharmacia & Upjohn Foundation Professor of Biostatistics. His research interests include human genetics, population modeling, biomedical imaging, computational statistics, and applied stochastic processes. 
Springer-Verlag published his book Numerical Analysis for Statisticians in 1999.
Queueing network models have been widely applied as a powerful tool for modelling, performance evaluation, and prediction of discrete flow systems, such as computer systems, communication networks, production lines, and manufacturing systems. Queueing network models with finite capacity queues and blocking have been introduced and applied as even more realistic models of systems with finite capacity resources and with population constraints. In recent years, research in this field has grown rapidly. Analysis of Queueing Networks with Blocking introduces queueing network models with finite capacity and various types of blocking mechanisms. It gives a comprehensive definition of the analytical model underlying these blocking queueing networks. It surveys exact and approximate analytical solution methods and algorithms and their relevant properties. It also presents various application examples of queueing networks to model computer systems and communication networks. This book is organized in three parts. Part I introduces queueing networks with blocking and various application examples. Part II deals with exact and approximate analysis of queueing networks with blocking and the condition under which the various techniques can be applied. Part III presents a review of various properties of networks with blocking, describing several equivalence properties both between networks with and without blocking and between different blocking types. Approximate solution methods for the buffer allocation problem are presented.
The main focus of this book is on different topics in probability theory, partial differential equations and kinetic theory, presenting some of the latest developments in these fields. It addresses mathematical problems concerning applications in physics, engineering, chemistry and biology that were presented at the Third International Conference on Particle Systems and Partial Differential Equations, held at the University of Minho, Braga, Portugal in December 2014. The purpose of the conference was to bring together prominent researchers working in the fields of particle systems and partial differential equations, providing a venue for them to present their latest findings and discuss their areas of expertise. Further, it was intended to introduce a vast and varied public, including young researchers, to the subject of interacting particle systems, its underlying motivation, and its relation to partial differential equations. This book will appeal to probabilists, analysts and those mathematicians whose work involves topics in mathematical physics, stochastic processes and differential equations in general, as well as those physicists whose work centers on statistical mechanics and kinetic theory.
This book presents the refereed proceedings of the Twelfth International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at Stanford University (California) in August 2016. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising in particular, in finance, statistics, computer graphics and the solution of PDEs.
The title High Dimensional Probability is an attempt to describe the many tributaries of research on Gaussian processes and probability in Banach spaces that started in the early 1970s. In each of these fields it is necessary to consider large classes of stochastic processes under minimal conditions. There are rewards in research of this sort. One can often gain deep insights, even about familiar processes, by stripping away details that in hindsight turn out to be extraneous. Many of the problems that motivated researchers in the 1970s were solved. But the powerful new tools created for their solution, such as randomization, isoperimetry, concentration of measure, moment and exponential inequalities, chaining, series representations and decoupling, turned out to be applicable to other important areas of probability. They led to significant advances in the study of empirical processes and other topics in theoretical statistics and to a new approach to the study of aspects of Levy processes and Markov processes in general. Papers on these topics as well as on the continuing study of Gaussian processes and probability in Banach spaces are included in this volume.
This book focuses on metaheuristic methods and their applications to real-world problems in engineering. The first part describes some key metaheuristic methods, such as Bat Algorithms, Particle Swarm Optimization, Differential Evolution, and Particle Collision Algorithms. Improved versions of these methods and strategies for parameter tuning are also presented, both of which are essential for the practical use of these important computational tools. The second part then applies metaheuristics to problems, mainly in Civil, Mechanical, Chemical, Electrical, and Nuclear Engineering. Other methods, such as the Flower Pollination Algorithm, Symbiotic Organisms Search, Cross-Entropy Algorithm, Artificial Bee Colonies, Population-Based Incremental Learning, Cuckoo Search, and Genetic Algorithms, are also presented. The book is rounded out by recently developed strategies, or hybrid improved versions of existing methods, such as the Lightning Optimization Algorithm, Differential Evolution with Particle Collisions, and Ant Colony Optimization with Dispersion - state-of-the-art approaches for the application of computational intelligence to engineering problems. The wide variety of methods and applications, as well as the original results to problems of practical engineering interest, represent the primary differentiation and distinctive quality of this book. Furthermore, it gathers contributions by authors from four countries - some of which are the original proponents of the methods presented - and 18 research centers around the globe.
This book investigates the mathematical analysis of biological invasions. Unlike purely qualitative treatments of ecology, it draws on mathematical theory and methods, equipping the reader with sharp tools and rigorous methodology. Subjects include invasion dynamics, species interactions, population spread, long-distance dispersal, stochastic effects, risk analysis, and optimal responses to invaders. While based on the theory of dynamical systems, including partial differential equations and integrodifference equations, the book also draws on information theory, machine learning, Monte Carlo methods, optimal control, statistics, and stochastic processes. Applications to real biological invasions are included throughout. Ultimately, the book imparts a powerful principle: that by bringing ecology and mathematics together, researchers can uncover new understanding of, and effective response strategies to, biological invasions. It is suitable for graduate students and established researchers in mathematical ecology.
This new edition offers a comprehensive introduction to the analysis of data using Bayes rule. It generalizes Gaussian error intervals to situations in which the data follow distributions other than Gaussian. This is particularly useful when the observed parameter is barely above the background or the histogram of multiparametric data contains many empty bins, so that the determination of the validity of a theory cannot be based on the chi-squared criterion. In addition to the solutions of practical problems, this approach provides an epistemic insight: the logic of quantum mechanics is obtained as the logic of unbiased inference from counting data. New sections feature factorizing parameters, commuting parameters, observables in quantum mechanics, the art of fitting with coherent and with incoherent alternatives, and fitting with the multinomial distribution. Additional problems and examples help deepen the knowledge. Requiring no knowledge of quantum mechanics, the book is written at an introductory level, with many examples and exercises, for advanced undergraduate and graduate students in the physical sciences, planning to, or working in, fields such as medical physics, nuclear physics, quantum mechanics, and chaos.
Expert testimony relying on scientific and other specialized evidence has come under increased scrutiny by the legal system. A trilogy of recent U.S. Supreme Court cases has assigned judges the task of assessing the relevance and reliability of proposed expert testimony. In conjunction with the Federal judiciary, the American Association for the Advancement of Science has initiated a project to provide judges indicating a need with their own expert. This concern with the proper interpretation of scientific evidence, especially that of a probabilistic nature, has also occurred in England, Australia and in several European countries. Statistical Science in the Courtroom is a collection of articles written by statisticians and legal scholars who have been concerned with problems arising in the use of statistical evidence. A number of articles describe DNA evidence and the difficulties of properly calculating the probability that a random individual's profile would "match" that of the evidence, as well as the proper way to interpret the result. In addition to the technical issues, several authors tell about their experiences in court. A few have become disenchanted with their involvement and describe the events that led them to devote less time to this application. Other articles describe the role of statistical evidence in cases concerning discrimination against minorities, product liability, environmental regulation, the appropriateness and fairness of sentences, and how being involved in legal statistics has raised interesting statistical problems requiring further research.
The theory of stochastic processes, for science and engineering, can be considered as an extension of probability theory allowing modeling of the evolution of systems over time. The modern theory of Markov processes has its origins in the studies of A.A. Markov (1856-1922) on sequences of experiments "connected in a chain" and in the attempts to describe mathematically the physical phenomenon of Brownian motion. The theory of stochastic processes entered a period of intensive development when the idea of the Markov property was brought in. This book is a modern overall view of semi-Markov processes and their applications in reliability. It is accessible to readers with a first course in probability theory (including the basic notions of Markov chains). The text contains many examples which aid in the understanding of the theoretical notions and show how to apply them to concrete physical situations, including algorithmic simulations. Many examples of concrete applications in reliability are given. Features: * Processes associated to semi-Markov kernels for general and discrete state spaces * Asymptotic theory of processes and of additive functionals * Statistical estimation of the semi-Markov kernel and of the reliability function * Monte Carlo simulation * Applications in reliability and maintenance The book is a valuable resource for understanding the latest developments in semi-Markov processes and reliability. Practitioners, researchers and professionals in applied mathematics, control and engineering who work in areas of reliability, lifetime data analysis, statistics, probability, and engineering will find this book an up-to-date overview of the field.
Info-metrics is the science of modeling, reasoning, and drawing inferences under conditions of noisy and insufficient information. It is at the intersection of information theory, statistical inference, and decision-making under uncertainty. It plays an important role in helping make informed decisions even when there is inadequate or incomplete information because it provides a framework to process available information with minimal reliance on assumptions that cannot be validated. In this pioneering book, Amos Golan, a leader in info-metrics, focuses on unifying information processing, modeling and inference within a single constrained optimization framework. Foundations of Info-Metrics provides an overview of modeling and inference, rather than a problem-specific model, and progresses from the simple premise that information is often insufficient to provide a unique answer for decisions we wish to make. Each decision, or solution, is derived from the available input information along with a choice of inferential procedure. The book contains numerous multidisciplinary applications and case studies, which demonstrate the simplicity and generality of the framework in real-world settings. Examples include initial diagnosis at an emergency room, optimal dose decisions, election forecasting, network and information aggregation, weather pattern analyses, portfolio allocation, strategy inference for interacting entities, incorporation of prior information, option pricing, and modeling an interacting social system. Graphical representations illustrate how results can be visualized, while exercises and problem sets facilitate extensions. The book is thus designed to be accessible for researchers, graduate students, and practitioners across the disciplines.
This book provides an example of a thorough statistical treatment of ocean wave data in space and time. It demonstrates how the flexible framework of Bayesian hierarchical space-time models can be applied to oceanographic processes such as significant wave height in order to describe dependence structures and uncertainties in the data. This monograph is a research book and it is partly cross-disciplinary. The methodology itself is firmly rooted in the statistical research tradition, based on probability theory and stochastic processes. However, that methodology has been applied to a problem in the field of physical oceanography, analyzing data for significant wave height, which is of crucial importance to ocean engineering disciplines. Indeed, the statistical properties of significant wave height are important for the design, construction and operation of ships and other marine and coastal structures. Furthermore, the book addresses the question of whether climate change has an effect on the ocean wave climate, and if so what that effect might be. Thus, this book is an important contribution to the ongoing debate on climate change, its implications and how to adapt to a changing climate, with a particular focus on the maritime industries and the marine environment. This book should be of value to anyone with an interest in the statistical modelling of environmental processes, and in particular to those with an interest in the ocean wave climate. It is written on a level that should be understandable to everyone with a basic background in statistics or elementary mathematics, and an introduction to some basic concepts is provided in the appendices for the uninitiated reader. The intended readership includes students and professionals involved in statistics, oceanography, ocean engineering, environmental research, climate sciences and risk assessment. Moreover, the book's findings are relevant for various stakeholders in the maritime industries such as design offices, classification societies, ship owners, yards and operators, flag states and intergovernmental agencies such as the IMO.
This book offers a collection of recent contributions and emerging ideas in the areas of robust statistics presented at the International Conference on Robust Statistics 2015 (ICORS 2015), held in Kolkata during 12-16 January 2015. The book explores the applicability of robust methods in non-traditional areas, including the use of new techniques such as skew and mixtures of skew distributions, scaled Bregman divergences, and multilevel functional data methods; application areas include circular data models and prediction of mortality and life expectancy. The contributions are both theoretical and applied in nature. Robust statistics is a relatively young branch of statistical sciences that is rapidly emerging as the bedrock of statistical analysis in the 21st century due to its flexible nature and wide scope. Robust statistics supports the application of parametric and other inference techniques over a broader domain than the strictly interpreted model scenarios employed in classical statistical methods. The aim of the ICORS conference, which has been organized annually since 2001, is to bring together researchers interested in robust statistics, data analysis and related areas. The conference is meant for theoretical and applied statisticians, data analysts from other fields, leading experts, junior researchers and graduate students. The ICORS meetings offer a forum for discussing recent advances and emerging ideas in statistics with a focus on robustness, and encourage informal contacts and discussions among all the participants. They also play an important role in maintaining a cohesive group of international researchers interested in robust statistics and related topics, whose interactions transcend the meetings and endure year round.