This book covers the statistical models and methods used to understand human genetics, following the historical and recent developments of the field. From Mendel's first experiments to genome-wide association studies, the book describes how genetic information can be incorporated into statistical models to discover disease genes. All commonly used approaches in statistical genetics (e.g. aggregation, segregation, and linkage analysis) are covered, but the focus of the book is modern approaches to association analysis. Numerous examples, of both Mendelian and complex genetic disorders, illustrate key points throughout the text. The intended audience is statisticians, biostatisticians, epidemiologists and quantitatively oriented geneticists and health scientists who want to learn about statistical methods for genetic analysis, whether to better analyze genetic data or to pursue research in methodology. A background in intermediate-level statistical methods is required. The authors include few mathematical derivations, and the exercises provide problems for students with a broad range of skill levels. No background in genetics is assumed.
Introduction to Convolutional Codes with Applications is an introduction to the basic concepts of convolutional codes, their structure and classification, various error correction and decoding techniques for convolutionally encoded data, and some of the most common applications. The definition and representations, distance properties, and important classes of convolutional codes are also discussed in detail. The book provides the first comprehensive description of table-driven correction and decoding of convolutionally encoded data. Complete examples of Viterbi, sequential, and majority-logic decoding techniques are also included, allowing a quick comparison among the different decoding approaches. Introduction to Convolutional Codes with Applications summarizes the research of the last two decades on applications of convolutional codes in hybrid ARQ protocols. A new classification allows a natural way of studying the underlying concepts of hybrid schemes and accommodates all of the new research. A novel application of fast decodable invertible convolutional codes for lost packet recovery in high speed networks is described. This opens the door for using convolutional coding for error recovery in high speed networks. Practicing communications, electronics, and networking engineers who want to get a better grasp of the underlying concepts of convolutional coding and its applications will greatly benefit from the simple and concise style of explanation. An up-to-date bibliography of over 300 papers is included. Also suitable for use as a textbook or a reference text in an advanced course on coding theory with emphasis on convolutional codes.
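The decoding techniques surveyed above all operate on convolutionally encoded streams. As a hedged illustration (not taken from the book), here is a minimal rate-1/2 convolutional encoder with constraint length 3 and the classic (7, 5) octal generator polynomials:

```python
def conv_encode(bits, g1=0b111, g2=0b101):
    """Rate-1/2 convolutional encoder: two output bits per input bit.

    g1 and g2 are the generator polynomials (7 and 5 in octal);
    the two-bit state holds the previous two input bits.
    """
    state = 0
    out = []
    for b in bits:
        reg = (b << 2) | state                     # new bit plus register
        out.append(bin(reg & g1).count("1") % 2)   # parity under generator 1
        out.append(bin(reg & g2).count("1") % 2)   # parity under generator 2
        state = reg >> 1                           # shift the register
    return out

print(conv_encode([1, 0, 1, 1]))  # -> [1, 1, 1, 0, 0, 0, 0, 1]
```

A Viterbi decoder would recover the input by tracking the most likely path through the four states of this shift register; the table-driven methods the book describes trade that search for precomputed lookups.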
This book presents a forecasting mechanism for the price intervals used to derive the SCR (solvency capital requirement). On one hand, it eradicates the risk during the exercise period; on the other, it measures the risk by computing the hedging exit time function, which associates with each smaller investment the date until which the value of the portfolio hedges the liabilities. This information, summarized under the term "tychastic viability measure of risk", is an evolutionary alternative to statistical measures when dealing with evolutions under uncertainty. The book is written by experts in the field, and the target audience primarily comprises research experts and practitioners.
Growth-curve models are generalized multivariate analysis-of-variance models. The basic idea of the models is to use different polynomials to fit different treatment groups involved in the longitudinal study. It is not uncommon, however, to find outliers and influential observations in growth data that heavily affect statistical inference in growth curve models. This book provides a comprehensive introduction to the theory of growth curve models with an emphasis on statistical diagnostics. A variety of issues on model fitting and model diagnostics are addressed, and many criteria for outlier detection and influential observation identification are created within likelihood and Bayesian frameworks. This book is intended for postgraduates and statisticians whose research involves longitudinal study, multivariate analysis and statistical diagnostics, and also for scientists who analyze longitudinal data and repeated measures. The authors provide theoretical details on the model fitting and also emphasize the application of growth curve models to practical data analysis, which is reflected in the analysis of practical examples given in each chapter. The book assumes a basic knowledge of matrix algebra and linear regression. Jian-Xin Pan is a lecturer in Medical Statistics at Keele University in the U.K. He has published more than twenty papers on growth curve models, statistical diagnostics and linear/non-linear mixed models. He has a long-standing research interest in longitudinal data analysis and repeated measures in medicine and agriculture. Kai-Tai Fang is a chair professor in Statistics at Hong Kong Baptist University and a fellow of the Institute of Mathematical Statistics. He has published several books with Springer-Verlag, Chapman & Hall, and Science Press and is an author or co-author of over one hundred papers. His research interests include generalized multivariate analysis, elliptically contoured distributions and uniform design.
The material of this book is based on several courses which have been delivered for a long time at the Moscow Institute for Physics and Technology. Some parts have formed the subject of lectures given at various universities throughout the world: the Freie Universität Berlin, Chalmers University of Technology and the University of Gothenburg, the University of California at Santa Barbara, and others. The subject of the book is the theory of queues. This theory, as a mathematical discipline, begins with the work of A. Erlang, who examined a model of a telephone station and obtained the famous formula, named after him, for the distribution of the number of busy lines. Queueing theory has been applied to the study of numerous models: emergency aid, road traffic, computer systems, etc. In addition, it has led to several related disciplines, such as reliability and inventory theories, which deal with similar models. Nevertheless, many parts of the theory of queues were developed as a "pure science" with no practical applications. The aim of this book is to give the reader an insight into the mathematical methods which can be used in queueing theory and to present examples of solving problems with the help of these methods. Of course, the choice of the methods is quite subjective. Thus, many prominent results have not even been mentioned.
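Erlang's busy-line distribution mentioned above has a compact closed form, the truncated Poisson distribution, whose last term is the well-known Erlang B blocking probability. A minimal sketch (standard formulas, not quoted from this book):

```python
from math import factorial

def busy_line_distribution(n, a):
    """Truncated Poisson distribution of the number of busy lines
    among n lines under offered load a = arrival rate / service rate."""
    weights = [a**k / factorial(k) for k in range(n + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def erlang_b(n, a):
    """Erlang's loss formula: probability that all n lines are busy."""
    return busy_line_distribution(n, a)[-1]

print(erlang_b(2, 1.0))  # -> 0.2
```

For a single line with offered load 1, the blocking probability is 1/2; adding a second line already drops it to 1/5, which is the kind of dimensioning question the formula was invented to answer.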
Collecting together twenty-three self-contained articles, this volume presents the current research of a number of renowned scientists in both probability theory and statistics as well as their various applications in economics, finance, the physics of wind-blown sand, queueing systems, risk assessment, turbulence and other areas. The contributions are dedicated to and inspired by the research of Ole E. Barndorff-Nielsen who, since the early 1960s, has been and continues to be a very active and influential researcher working on a wide range of important problems. The topics covered include, but are not limited to, econometrics, exponential families, Lévy processes and infinitely divisible distributions, limit theory, mathematical finance, random matrices, risk assessment, statistical inference for stochastic processes, stochastic analysis and optimal control, time series, and turbulence. The book will be of interest to researchers and graduate students in probability, statistics and their applications.
This volume is intended to stimulate a change in the practice of decision support, advocating an interdisciplinary approach centred on both social and natural sciences, both theory and practice. It addresses the issue of analysis and management of uncertainty and risk in decision support corresponding to the aims of Integrated Assessment. A pluralistic method is necessary to account for legitimate plural interpretations of uncertainty and multiple risk perceptions. A wide range of methods and tools is presented to contribute to adequate and effective pluralistic uncertainty management and risk analysis in decision support endeavours. Special attention is given to the development of one such approach, the Pluralistic fRamework for Integrated uncertainty Management and risk Analysis (PRIMA), of which the practical value is explored in the context of the Environmental Outlooks produced by the Dutch Institute for Public Health and Environment (RIVM). Audience: This book will be of interest to researchers and practitioners whose work involves decision support, uncertainty management, risk analysis, environmental planning, and Integrated Assessment.
Probability Theory, Theory of Random Processes and Mathematical Statistics are important areas of modern mathematics and its applications. They develop rigorous models for a proper treatment for various 'random' phenomena which we encounter in the real world. They provide us with numerous tools for an analysis, prediction and, ultimately, control of random phenomena. Statistics itself helps with choice of a proper mathematical model (e.g., by estimation of unknown parameters) on the basis of statistical data collected by observations. This volume is intended to be a concise textbook for a graduate level course, with carefully selected topics representing the most important areas of modern Probability, Random Processes and Statistics. The first part (Ch. 1-3) can serve as a self-contained, elementary introduction to Probability, Random Processes and Statistics. It contains a number of relatively simple and typical examples of random phenomena which allow a natural introduction of general structures and methods. Only knowledge of elements of real/complex analysis, linear algebra and ordinary differential equations is required here. The second part (Ch. 4-6) provides a foundation of Stochastic Analysis, gives information on basic models of random processes and tools to study them. Here a familiarity with elements of functional analysis is necessary. Our intention to make this course fast-moving made it necessary to present important material in a form of examples.
This book illustrates numerous statistical practices that are commonly used by medical researchers, but which have severe flaws that may not be obvious. For each example, it provides one or more alternative statistical methods that avoid misleading or incorrect inferences being made. The technical level is kept to a minimum to make the book accessible to non-statisticians. At the same time, since many of the examples describe methods used routinely by medical statisticians with formal statistical training, the book appeals to a broad readership in the medical research community.
This book presents contributions and review articles on the theory of copulas and their applications. The authoritative and refereed contributions review the latest findings in the area with emphasis on "classical" topics like distributions with fixed marginals, measures of association, construction of copulas with given additional information, etc. The book celebrates the 75th birthday of Professor Roger B. Nelsen and his outstanding contribution to the development of copula theory. Most of the book's contributions were presented at the conference "Copulas and Their Applications" held in his honor in Almeria, Spain, July 3-5, 2017. The chapter 'When Gumbel met Galambos' is published open access under a CC BY 4.0 license.
This book surveys some of the important research work carried out by Indian scientists in the field of pure and applied probability, quantum probability, quantum scattering theory, group representation theory and general relativity. It reviews the axiomatic foundations of probability theory by A.N. Kolmogorov and how the Indian school of probabilists and statisticians used this theory effectively to study a host of applied probability and statistics problems like parameter estimation, convergence of a sequence of probability distributions, and martingale characterization of diffusions. It will be an important resource to students and researchers of Physics and Engineering, especially those working with Advanced Probability and Statistics.
The linear mixed model has become the main parametric tool for the analysis of continuous longitudinal data, as the authors discussed in their 2000 book. Without putting too much emphasis on software, the book shows how the different approaches can be implemented within the SAS software package. The authors received the American Statistical Association's Excellence in Continuing Education Award based on short courses on longitudinal and incomplete data at the Joint Statistical Meetings of 2002 and 2004.
This low-priced write-in workbook offers extensive practice, mostly context-free, for students to gain confidence in statistics at Level 3.
Advances in Stochastic Modelling and Data Analysis presents the most recent developments in the field, together with their applications, mainly in the areas of insurance, finance, forecasting and marketing. In addition, the possible interactions between data analysis, artificial intelligence, decision support systems and multicriteria analysis are examined by top researchers. Audience: A wide readership drawn from theoretical and applied mathematicians, such as operations researchers, management scientists, statisticians, computer scientists, bankers, marketing managers, forecasters, and scientific societies such as EURO and TIMS.
Algorithmic Principles of Mathematical Programming investigates the mathematical structures and principles underlying the design of efficient algorithms for optimization problems. Recent advances in algorithmic theory have shown that the traditionally separate areas of discrete optimization, linear programming, and nonlinear optimization are closely linked. This book offers a comprehensive introduction to the whole subject and leads the reader to the frontiers of current research. The prerequisites to use the book are very elementary. All the tools from numerical linear algebra and calculus are fully reviewed and developed. Rather than attempting to be encyclopedic, the book illustrates the important basic techniques with typical problems. The focus is on efficient algorithms with respect to practical usefulness. Algorithmic complexity theory is presented with the goal of helping the reader understand the concepts without having to become a theoretical specialist. Further theory is outlined and supplemented with pointers to the relevant literature.
On various examples ranging from geosciences to environmental sciences, this book explains how to generate an adequate description of uncertainty, how to justify semiheuristic algorithms for processing uncertainty, and how to make these algorithms more computationally efficient. It explains in what sense the existing approach to uncertainty as a combination of random and systematic components is only an approximation, presents a more adequate three-component model with an additional periodic error component, and explains how uncertainty propagation techniques can be extended to this model. The book provides a justification for a practically efficient heuristic technique (based on fuzzy decision-making). It explains how the computational complexity of uncertainty processing can be reduced. The book also shows how to take into account that in real life, the information about uncertainty is often only partially known, and, on several practical examples, explains how to extract the missing information about uncertainty from the available data.
The focus of this book is on bilevel programming which combines elements of hierarchical optimization and game theory. The basic model addresses the problem where two decision-makers, each with their individual objectives, act and react in a noncooperative manner. The actions of one affect the choices and payoffs available to the other but neither player can completely dominate the other in the traditional sense. Over the last 20 years there has been a steady growth in research related to theory and solution methodologies for bilevel programming. This interest stems from the inherent complexity and consequent challenge of the underlying mathematics, as well as the applicability of the bilevel model to many real-world situations. The primary aim of this book is to provide a historical perspective on algorithmic development and to highlight those implementations that have proved to be the most efficient in their class. A corollary aim is to provide a sampling of applications in order to demonstrate the versatility of the basic model and the limitations of current technology. What is unique about this book is its comprehensive and integrated treatment of theory, algorithms and implementation issues. It is the first text that offers researchers and practitioners an elementary understanding of how to solve bilevel programs and a perspective on what success has been achieved in the field. Audience: Includes management scientists, operations researchers, industrial engineers, mathematicians and economists.
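The act-and-react structure described above can be made concrete with a toy example. The following is a hypothetical illustration (not from the book) of a tiny discrete bilevel program solved by brute force: the leader chooses x anticipating that the follower will then pick its own best response y, and neither player controls both decisions.

```python
def follower_best_response(x, ys):
    # The follower minimizes its own objective (y - x)^2, given the leader's x.
    return min(ys, key=lambda y: (y - x) ** 2)

def solve_bilevel(xs, ys):
    # The leader minimizes 2*(x - 2)^2 + y, anticipating the follower's reaction.
    def leader_cost(x):
        y = follower_best_response(x, ys)
        return 2 * (x - 2) ** 2 + y
    x_star = min(xs, key=leader_cost)
    return x_star, follower_best_response(x_star, ys)

print(solve_bilevel(range(4), range(4)))  # -> (2, 2)
```

Enumeration only works for tiny discrete choice sets; the algorithms the book surveys exist precisely because realistic bilevel programs cannot be solved this way.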
It appears that we live in an age of disasters: the mighty Mississippi and Missouri flood millions of acres, earthquakes hit Tokyo and California, airplanes crash due to mechanical failure and the seemingly ever increasing wind speeds make the storms more and more frightening. While all these may seem to be unexpected phenomena to the man on the street, they are actually happening according to well defined rules of science known as extreme value theory. We know that records must be broken in the future, so if a flood design is based on the worst case of the past then we are not really prepared against floods. Materials will fail due to fatigue, so if the body of an aircraft looks fine to the naked eye, it might still suddenly fail if the aircraft has been in operation over an extended period of time. Our theory has by now penetrated the social sciences, the medical profession, economics and even astronomy. We believe that our field has come of age. In order to fully utilize the great progress in the theory of extremes and its ever increasing acceptance in practice, an international conference was organized in which equal weight was given to theory and practice. This book is Volume I of the Proceedings of this conference. In selecting the papers for Volume I our guide was to have authoritative works with a large variety of coverage of both theory and practice.
The book is a collection of essays on various issues in philosophy of science, with special emphasis on the foundations of probability and statistics, and quantum mechanics. The main topics, addressed by some of the most outstanding researchers in the field, are subjective probability, Bayesian statistics, probability kinematics, causal decision making, probability and realism in quantum mechanics.
Written to be more rigorous than other books on the same topics. No other book on the topics explores programming and software in this manner. Requires only undergraduate prerequisites.
This book discusses conceptual and pragmatic issues in the assessment of statistical knowledge and reasoning skills and the use of assessments to improve instruction among students at college and pre-college levels. It is designed primarily for academic audiences involved in teaching statistics and mathematics, and in teacher education and training. The book is divided into four sections: (1) Assessment goals and frameworks, (2) Assessing conceptual understanding of statistical ideas, (3) Innovative models for classroom assessments, and (4) Assessing understanding of probability. Both editors are involved in assessment issues in statistics. The book is written by leading researchers, statistics and mathematics educators, and curriculum developers.
Least squares estimation, when used appropriately, is a powerful research tool. A deeper understanding of the regression concepts is essential for achieving optimal benefits from a least squares analysis. This book builds on the fundamentals of statistical methods and provides appropriate concepts that will allow a scientist to use least squares as an effective research tool. This book is aimed at the scientist who wishes to gain a working knowledge of regression analysis. The basic purpose of this book is to develop an understanding of least squares and related statistical methods without becoming excessively mathematical. It is the outgrowth of more than 30 years of consulting experience with scientists and many years of teaching an applied regression course to graduate students. This book serves as an excellent text for a service course on regression for non-statisticians and as a reference for researchers. It also provides a bridge between a two-semester introduction to statistical methods and a theoretical linear models course. This book emphasizes the concepts and the analysis of data sets. It provides a review of the key concepts in simple linear regression, matrix operations, and multiple regression. Methods and criteria for selecting regression variables and geometric interpretations are discussed. Polynomial, trigonometric, analysis of variance, nonlinear, time series, logistic, random effects, and mixed effects models are also discussed. Detailed case studies and exercises based on real data sets are used to reinforce the concepts. John O. Rawlings, Professor Emeritus in the Department of Statistics at North Carolina State University, retired after 34 years of teaching, consulting, and research in statistical methods. He was instrumental in developing, and for many years taught, the course on which this text is based. He is a Fellow of the American Statistical Association and the Crop Science Society of America. Sastry G. Pantula is Professor and Director of Graduate Programs in the Department of Statistics at North Carolina State University. He is a member of the Academy of Outstanding Teachers at North Carolina State University. David A. Dickey is Professor of Statistics at North Carolina State University. He is a member of the Academy of Outstanding Teachers at North Carolina State University.
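The simple linear regression reviewed in the book's opening chapters has a closed-form least squares solution. A minimal sketch of the standard normal-equations formulas (not code from the book):

```python
def least_squares_line(x, y):
    """Fit y = intercept + slope * x by ordinary least squares.

    slope = S_xy / S_xx, intercept = ybar - slope * xbar,
    the textbook closed-form solution for simple linear regression.
    """
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    sxx = sum((xi - xbar) ** 2 for xi in x)
    slope = sxy / sxx
    return slope, ybar - slope * xbar

print(least_squares_line([0, 1, 2, 3], [1, 3, 5, 7]))  # -> (2.0, 1.0)
```

The multiple-regression generalization replaces these scalar sums with the matrix normal equations, which is where the book's review of matrix operations comes in.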
The dynamics of population systems cannot be understood within the framework of ordinary differential equations, which assume that the number of interacting agents is infinite. With recent advances in ecology, biochemistry and genetics it is becoming increasingly clear that real systems are in fact subject to a great deal of noise. Relevant examples include social insects competing for resources, molecules undergoing chemical reactions in a cell and a pool of genomes subject to evolution. When the population size is small, novel macroscopic phenomena can arise, which can be analyzed using the theory of stochastic processes. This thesis is centered on two unsolved problems in population dynamics: the symmetry breaking observed in foraging populations and the robustness of spatial patterns. We argue that these problems can be resolved with the help of two novel concepts: noise-induced bistable states and stochastic patterns.