This book presents the foundations of key problems in computational molecular biology and bioinformatics. It focuses on computational and statistical principles applied to genomes, and introduces the mathematics and statistics that are crucial for understanding these applications. The book features a free download of the R statistical software package, and the text provides crossover material that is interesting and accessible to students in biology, mathematics, statistics and computer science. More than 100 illustrations and diagrams reinforce concepts and present key results from the primary literature. Exercises are given at the end of chapters.
Markov random field (MRF) theory provides a basis for modeling contextual constraints in visual processing and interpretation. It enables us to develop optimal vision algorithms systematically when used with optimization principles. This book presents a comprehensive study on the use of MRFs for solving computer vision problems. Various vision models are presented in a unified framework, including image restoration and reconstruction, edge and region segmentation, texture, stereo and motion, object matching and recognition, and pose estimation. This third edition includes the most recent advances and has new and expanded sections on topics such as: Bayesian Network; Discriminative Random Fields; Strong Random Fields; Spatial-Temporal Models; Learning MRF for Classification. This book is an excellent reference for researchers working in computer vision, image processing, statistical pattern recognition and applications of MRFs. It is also suitable as a text for advanced courses in these areas.
This book offers comprehensive information on the theory, models and algorithms involved in state-of-the-art multivariate time series analysis and highlights several of the latest research advances in climate and environmental science. The main topics addressed include Multivariate Time-Frequency Analysis, Artificial Neural Networks, Stochastic Modeling and Optimization, Spectral Analysis, Global Climate Change, Regional Climate Change, Ecosystem and Carbon Cycle, Paleoclimate, and Strategies for Climate Change Mitigation. The self-contained guide will be of great value to researchers and advanced students from a wide range of disciplines: those from Meteorology, Climatology, Oceanography, the Earth Sciences and Environmental Science will be introduced to various advanced tools for analyzing multivariate data, greatly facilitating their research, while those from Applied Mathematics, Statistics, Physics, and the Computer Sciences will learn how to use these multivariate time series analysis tools to approach climate and environmental topics.
Nonlinear models have been used extensively in the areas of economics and finance. Recent literature on the topic has shown that a large number of series exhibit nonlinear dynamics as opposed to linear dynamics. Incorporating these concepts involves deriving and estimating nonlinear time series models, which have typically taken the form of Threshold Autoregression (TAR) models, Exponential Smooth Transition (ESTAR) models, and Markov Switching (MS) models, among several others. This edited volume provides a timely overview of nonlinear estimation techniques, offering new methods and insights into nonlinear time series analysis. It features cutting-edge research from leading academics in economics, finance, and business management, and focuses on topics such as Zero-Information-Limit-Conditions, using Markov Switching models to analyze economic series, and how best to distinguish between competing nonlinear models. The principles and techniques in this book will appeal to econometricians, finance professors teaching quantitative finance, researchers, and graduate students interested in learning how to apply advances in nonlinear time series modeling to solve complex problems in economics and finance.
This book presents the state of the art in multilevel analysis, with an emphasis on more advanced topics. These topics are discussed conceptually, analyzed mathematically, and illustrated by empirical examples. Multilevel analysis is the statistical analysis of hierarchically and non-hierarchically nested data. The simplest example is clustered data, such as a sample of students clustered within schools. Multilevel data are especially prevalent in the social and behavioral sciences and in the biomedical sciences. The chapter authors are all leading experts in the field. Given the omnipresence of multilevel data in the social, behavioral, and biomedical sciences, this book is essential for empirical researchers in these fields.
For upper-level to graduate courses in Probability or Probability and Statistics, for majors in mathematics, statistics, engineering, and the sciences. Explores both the mathematics and the many potential applications of probability theory. A First Course in Probability offers an elementary introduction to the theory of probability for students in mathematics, statistics, engineering, and the sciences. Through clear and intuitive explanations, it attempts to present not only the mathematics of probability theory, but also the many diverse possible applications of this subject through numerous examples. The 10th Edition includes many new and updated problems, exercises, and text material chosen both for inherent interest and for use in building student intuition about probability.
This book provides a self-contained review of all the relevant topics in probability theory. A software package called MAXIM, which runs on MATLAB, is made available for downloading. Vidyadhar G. Kulkarni is Professor of Operations Research at the University of North Carolina at Chapel Hill.
Elements of Large Sample Theory provides a unified treatment of first-order large-sample theory. It discusses a broad range of applications including introductions to density estimation, the bootstrap, and the asymptotics of survey methodology written at an elementary level. The book is suitable for students at the Master's level in statistics and in applied fields who have a background of two years of calculus. E.L. Lehmann is Professor of Statistics Emeritus at the University of California, Berkeley. He is a member of the National Academy of Sciences and the American Academy of Arts and Sciences, and the recipient of honorary degrees from the University of Leiden, The Netherlands, and the University of Chicago. Also available: E.L. Lehmann and George Casella, Theory of Point Estimation, Second Edition. Springer-Verlag New York, Inc., 1998, 640 pp., Cloth, ISBN 0-387-98502-6. E.L. Lehmann, Testing Statistical Hypotheses, Second Edition. Springer-Verlag New York, Inc., 1997, 624 pp., Cloth, ISBN 0-387-94919-4.
When no samples are available to estimate a probability distribution, we have to invite domain experts to evaluate the degree of belief that each event will occur. One might think that belief degrees should be modeled by subjective probability or fuzzy set theory; however, both may lead to counterintuitive results in this case. In order to deal rationally with belief degrees, uncertainty theory was founded in 2007 and has since been studied by many researchers. Nowadays, uncertainty theory has become a branch of axiomatic mathematics for modeling belief degrees. This is an introductory textbook on uncertainty theory, uncertain programming, uncertain statistics, uncertain risk analysis, uncertain reliability analysis, uncertain sets, uncertain logic, uncertain inference, uncertain processes, uncertain calculus, and uncertain differential equations. The textbook also shows applications of uncertainty theory to scheduling, logistics, networks, data mining, control, and finance.
This book presents the refereed proceedings of the Twelfth International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at Stanford University (California) in August 2016. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising in particular, in finance, statistics, computer graphics and the solution of PDEs.
This volume gives consideration to more advanced theoretical approaches and novel applications of reliability, ensuring that topics with a futuristic impact are specifically included. Topics such as finance, forensics, information, and orthopedics, alongside the more traditional reliability topics, were purposefully undertaken to make this collection different from existing books on reliability. The entries are categorized into seven parts, each emphasizing a theme that seems poised for the future development of reliability as an academic discipline with relevance. The seven parts are: networks and systems; recurrent events; information and design; failure rate function and burn-in; software reliability and random environments; reliability in composites and orthopedics; and reliability in finance and forensics. Embedded within the above are other currently active topics such as causality, cascading, exchangeability, expert testimony, hierarchical modeling, optimization and survival analysis. These topics, when linked with utility theory, constitute the science base of risk analysis.
The Mathieu series is a functional series introduced by Émile Léonard Mathieu for the purposes of his research on the elasticity of solid bodies. Bounds for this series are needed for solving biharmonic equations in a rectangular domain. In addition to Tomovski and his coauthors, Pogány, Cerone, H. M. Srivastava and J. Choi are among the authors who have published results concerning the Mathieu series, its generalizations and their alternating variants. Applications of these results are given in classical, harmonic and numerical analysis, analytic number theory, special functions, mathematical physics, probability, quantum field theory, quantum physics, etc. Integral representations, analytical inequalities, asymptotic expansions and behaviors of some classes of Mathieu series are presented in this book. A systematic study of probability density functions and probability distributions associated with the Mathieu series, its generalizations and Planck's distribution is also presented. The book is addressed to graduate and PhD students and researchers in mathematics and physics who are interested in special functions, inequalities and probability distributions.
Selected papers submitted by participants of the International Conference "Stochastic Analysis and Applied Probability 2010" ( www.saap2010.org ) make up the basis of this volume. SAAP 2010 was held in Tunisia from 7-9 October 2010, and was organized by the "Applied Mathematics & Mathematical Physics" research unit of the preparatory institute to the military academies of Sousse (Tunisia), chaired by Mounir Zili. The papers cover theoretical, numerical and applied aspects of stochastic processes and stochastic differential equations. The study of such topics is motivated in part by the need to model, understand, forecast and control the behavior of many natural phenomena that evolve in time in a random way. Such phenomena appear in the fields of finance, telecommunications, economics, biology, geology, demography, physics, chemistry, signal processing and modern control theory, to mention just a few. As this book emphasizes the importance of numerical and theoretical studies of stochastic differential equations and stochastic processes, it will be useful for a wide spectrum of researchers in applied probability, stochastic numerical and theoretical analysis and statistics, as well as for graduate students. To make it more complete and accessible for graduate students, practitioners and researchers, the editors Mounir Zili and Daria Filatova have included a survey dedicated to the basic concepts of numerical analysis of stochastic differential equations, written by Henri Schurz.
This text is an accessible, student-friendly introduction to the wide range of mathematical and statistical tools needed by the forensic scientist in the analysis, interpretation and presentation of experimental measurements. From a basis of high school mathematics, the book develops essential quantitative analysis techniques within the context of a broad range of forensic applications. This clearly structured text focuses on developing core mathematical skills together with an understanding of the calculations associated with the analysis of experimental work, including an emphasis on the use of graphs and the evaluation of uncertainties. Through a broad study of probability and statistics, the reader is led ultimately to the use of Bayesian approaches to the evaluation of evidence within the court. In every section, forensic applications such as ballistics trajectories, post-mortem cooling, aspects of forensic pharmacokinetics, the matching of glass evidence, the formation of bloodstains and the interpretation of DNA profiles are discussed and examples of calculations are worked through. In every chapter there are numerous self-assessment problems to aid student learning. Its broad scope and forensically focused coverage make this book an essential text for students embarking on any degree course in forensic science or forensic analysis, as well as an invaluable reference for postgraduate students and forensic professionals. Key features: Offers a unique mix of mathematics and statistics topics, specifically tailored to a forensic science undergraduate degree. All topics are illustrated with examples from the forensic science discipline. Written in an accessible, student-friendly way to engage interest and enhance learning and confidence. Assumes only a basic high-school level of prior mathematical knowledge.
Features: accessible to readers with a basic background in probability and statistics; covers fundamental concepts of experimental design and cause-effect relationships; introduces classical ANOVA models, including contrasts and multiple testing; provides an example-based introduction to mixed models; features basic concepts of split-plot and incomplete block designs; R code available for all steps; supplementary website with additional resources and updates.
This book brings together contributions on statistical models and methods applied to both data science and the Sustainable Development Goals (SDGs). Measuring and monitoring the SDGs requires that data-driven measurements of progress be distributed to stakeholders, and in this situation the techniques used in data science, especially big data analytics, play a more important role than traditional data gathering and manipulation techniques. This book fills this space through its twenty contributions. The contributions have been selected from those presented during the 7th International Conference on Data Science and Sustainable Development Goals, organized by the Department of Statistics, University of Rajshahi, Bangladesh, and cover topics mainly on SDGs, bioinformatics, public health, medical informatics, environmental statistics, data science and machine learning. The contents of the volume will be useful to policymakers, researchers, government entities, civil society, and nonprofit organizations for monitoring and accelerating the progress of the SDGs.
This graduate textbook covers topics in statistical theory essential for graduate students preparing for work on a Ph.D. degree in statistics. The first chapter provides a quick overview of concepts and results in measure-theoretic probability theory that are useful in statistics. The second chapter introduces some fundamental concepts in statistical decision theory and inference. Chapters 3-7 contain detailed studies on some important topics: unbiased estimation, parametric estimation, nonparametric estimation, hypothesis testing, and confidence sets. A large number of exercises in each chapter provide not only practice problems for students, but also many additional results. In addition to the classical results that are typically covered in a textbook of a similar level, this book introduces some topics in modern statistical theory that have been developed in recent years, such as Markov chain Monte Carlo, quasi-likelihoods, empirical likelihoods, statistical functionals, generalized estimating equations, the jackknife, and the bootstrap. Jun Shao is Professor of Statistics at the University of Wisconsin, Madison. Also available: Jun Shao and Dongsheng Tu, The Jackknife and Bootstrap, Springer-Verlag New York, Inc., 1995, Cloth, 536 pp., 0-387-94515-6.
Most data sets collected by researchers are multivariate, and in the majority of cases the variables need to be examined simultaneously to get the most informative results. This requires the use of one or other of the many methods of multivariate analysis, and the use of a suitable software package such as S-PLUS or R. In this book the core multivariate methodology is covered along with some basic theory for each method described. The necessary R and S-PLUS code is given for each analysis in the book, with any differences between the two highlighted. A website with all the datasets and code used in the book can be found at http://biostatistics.iop.kcl.ac.uk/publications/everitt/. Graduate students, and advanced undergraduates on applied statistics courses, especially those in the social sciences, will find this book invaluable in their work, and it will also be useful to researchers outside of statistics who need to deal with the complexities of multivariate data in their work. Brian Everitt is Emeritus Professor of Statistics, King's College London.
This volume presents a collection of papers covering applications from a wide range of systems with infinitely many degrees of freedom studied using techniques from stochastic and infinite dimensional analysis, e.g. Feynman path integrals, the statistical mechanics of polymer chains, complex networks, and quantum field theory. Systems of infinitely many degrees of freedom create their particular mathematical challenges which have been addressed by different mathematical theories, namely in the theories of stochastic processes, Malliavin calculus, and especially white noise analysis. These proceedings are inspired by a conference held on the occasion of Prof. Ludwig Streit's 75th birthday and celebrate his pioneering and ongoing work in these fields.
Most of the time series analysis methods applied today rely heavily on the key assumptions of linearity, Gaussianity and stationarity. Natural time series (including hydrologic, climatic and environmental time series) that satisfy these assumptions seem to be the exception rather than the rule. Nevertheless, most time series analysis is performed using standard methods after relaxing the required conditions one way or another, in the hope that the departure from these assumptions is not large enough to affect the result of the analysis. A large amount of data is available today after almost a century of intensive data collection of various natural time series. In addition to a few older data series such as sunspot numbers, sea surface temperatures, etc., data obtained through dating techniques (tree-ring data, ice core data, geological and marine deposits, etc.) are available. With the advent of powerful computers, the use of simplified methods can no longer be justified, especially given the success of newer methods in explaining the inherent variability in natural time series. This book presents a number of new techniques that have been discussed in the literature during the last two decades concerning the investigation of stationarity, linearity and Gaussianity of hydrologic and environmental time series. These techniques cover different approaches for assessing nonstationarity, ranging from time domain analysis, to frequency domain analysis, to the combined time-frequency and time-scale analyses, to segmentation analysis, in addition to formal statistical tests of linearity and Gaussianity. It is hoped that this endeavor will facilitate further research into this important area.
Considering Poisson random measures as the driving sources for stochastic (partial) differential equations allows us to incorporate jumps and to model sudden, unexpected phenomena. By using such equations the present book introduces a new method for modeling the states of complex systems perturbed by random sources over time, such as interest rates in financial markets or temperature distributions in a specific region. It studies properties of the solutions of the stochastic equations, observing the long-term behavior and the sensitivity of the solutions to changes in the initial data. The authors consider an integration theory of measurable and adapted processes in appropriate Banach spaces as well as the non-Gaussian case, whereas most of the literature only focuses on predictable settings in Hilbert spaces. The book is intended for graduate students and researchers in stochastic (partial) differential equations, mathematical finance and non-linear filtering and assumes a knowledge of the required integration theory, existence and uniqueness results and stability theory. The results will be of particular interest to natural scientists and the finance community. Readers should ideally be familiar with stochastic processes and probability theory in general, as well as functional analysis and in particular the theory of operator semigroups.
Computationally intensive methods have become widely used both for statistical inference and for exploratory analyses of data. The methods of computational statistics involve resampling, partitioning, and multiple transformations of a dataset. They may also make use of randomly generated artificial data. Implementation of these methods often requires advanced techniques in numerical analysis, so there is a close connection between computational statistics and statistical computing. This book describes techniques used in computational statistics, and addresses some areas of application of computationally intensive methods, such as density estimation, identification of structure in data, and model building. Although methods of statistical computing are not emphasized in this book, numerical techniques for transformations, for function approximation, and for optimization are explained in the context of the statistical methods. The book includes exercises, some with solutions. The book can be used as a text or supplementary text for various courses in modern statistics at the advanced undergraduate or graduate level, and it can also be used as a reference for statisticians who use computationally-intensive methods of analysis. Although some familiarity with probability and statistics is assumed, the book reviews basic methods of inference, and so is largely self-contained. James Gentle is University Professor of Computational Statistics at George Mason University. He is a Fellow of the American Statistical Association and a member of the International Statistical Institute. He has held several national offices in the American Statistical Association and has served as associate editor for journals of the ASA as well as for other journals in statistics and computing. He is the author of Random Number Generation and Monte Carlo Methods and Numerical Linear Algebra for Statistical Applications.
The book is meant to serve two purposes. The first and more obvious one is to present state-of-the-art results in algebraic research into residuated structures related to substructural logics. The second, less obvious but equally important, is to provide a reasonably gentle introduction to algebraic logic. At the beginning, the second objective is predominant. Thus, in the first few chapters the reader will find a primer of universal algebra for logicians, a crash course in nonclassical logics for algebraists, an introduction to residuated structures, an outline of Gentzen-style calculi as well as some titbits of proof theory - the celebrated Hauptsatz, or cut elimination theorem, among them. These lead naturally to a discussion of interconnections between logic and algebra, where we try to demonstrate how they form two sides of the same coin. We envisage that the initial chapters could be used as a textbook for a graduate course, perhaps entitled Algebra and Substructural Logics.
Mathematically, natural disasters of all types are characterized by heavy-tailed distributions. The analysis of such distributions with common methods, such as averages and dispersions, can therefore lead to erroneous conclusions. The statistical methods described in this book avoid such pitfalls. Seismic disasters are studied, primarily thanks to the availability of an ample statistical database. New approaches are presented to seismic risk estimation and forecasting the damage caused by earthquakes, ranging from typical, moderate events to very rare, extreme disasters. Analysis of these latter events is based on the limit theorems of probability and the duality of the generalized Pareto distribution and generalized extreme value distribution. It is shown that the parameter most widely used to estimate seismic risk - Mmax, the maximum possible earthquake value - is potentially non-robust. Robust analogues of this parameter are suggested and calculated for some seismic catalogues. Trends in the costs incurred by damage from natural disasters, as related to changing social and economic situations, are examined for different regions. The results obtained argue for sustainable development, whereas entirely different, incorrect conclusions can be drawn if the specific properties of the heavy-tailed distribution and the change in completeness of data on natural hazards are neglected. This pioneering work is directed at risk assessment specialists in general, seismologists, administrators and all those interested in natural disasters and their impact on society.