This is a companion book to Asymptotic Analysis of Random Walks: Heavy-Tailed Distributions by A.A. Borovkov and K.A. Borovkov. Its self-contained, systematic exposition provides a highly useful resource for academic researchers and professionals interested in applications of probability in statistics, ruin theory, and queueing theory. The large deviation principle for random walks was first established by the author in 1967, under the restrictive condition that the distribution tails decay faster than exponentially. (A closely related assertion was proved by S.R.S. Varadhan in 1966, but only in a rather special case.) Since then, the principle has been treated in the literature only under this condition. Recently, the author, jointly with A.A. Mogul'skii, removed this restriction, finding a natural metric for which the large deviation principle for random walks holds without any conditions. This new version is presented in the book, as well as a new approach to studying large deviations in boundary crossing problems. Many results presented in the book, obtained by the author himself or jointly with co-authors, appear in a monograph for the first time.
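For orientation, the classical (Cramér) form of the principle for the sample mean of a random walk can be stated informally in one line; this is the standard textbook statement, not the book's more general formulation without moment conditions:

```latex
% Large deviation principle for S_n = X_1 + ... + X_n (classical Cramer form):
\frac{1}{n}\log P\!\left(\frac{S_{n}}{n} \in A\right) \;\longrightarrow\; -\inf_{x \in A} I(x),
\qquad I(x) = \sup_{t}\bigl(tx - \log \mathbb{E}\,e^{tX_{1}}\bigr).
```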
In many statistical applications, scientists must analyze the occurrence of observed clusters of events in time or space. They are especially interested in determining whether an observed cluster of events has occurred by chance, under the assumption that the events are distributed independently and uniformly over time or space. Applications of scan statistics have been recorded in many areas of science and technology, including geology, geography, medicine, minefield detection, molecular biology, photography, quality control, reliability theory, and radio-optics.
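As a concrete illustration of this basic question (a minimal sketch, not material from the book; the function names and example numbers are illustrative assumptions), one can estimate by Monte Carlo how surprising a cluster is under the uniform null:

```python
import numpy as np

rng = np.random.default_rng(0)

def scan_statistic(points, w):
    """Largest number of events falling in any window of width w."""
    pts = np.sort(points)
    # The maximum is attained by a window whose left end is at an event.
    return max(np.searchsorted(pts, x + w, side="right") - i
               for i, x in enumerate(pts))

def p_value(observed, n, w, trials=10_000):
    """Monte Carlo estimate of P(scan statistic >= observed) when n events
    are dropped independently and uniformly on the unit interval."""
    hits = sum(scan_statistic(rng.random(n), w) >= observed
               for _ in range(trials))
    return hits / trials

# Illustrative numbers: with 20 uniform events, how often does some window
# covering a tenth of the interval contain 7 or more of them?
print(p_value(observed=7, n=20, w=0.1))
```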
This volume, which is completely dedicated to continuous bivariate distributions, describes in detail their forms, properties, dependence structures, computation, and applications. It is a comprehensive and thorough revision of an earlier edition of "Continuous Bivariate Distributions, Emphasizing Applications" by T.P. Hutchinson and C.D. Lai, published in 1990 by Rumsby Scientific Publishing, Adelaide, Australia. It has been nearly two decades since the publication of that book, and much has changed in this area of research during this period. Generalizations have been considered for many known standard bivariate distributions. Skewed versions of different bivariate distributions have been proposed and applied to model data with skewness departures. By specifying the two conditional distributions, rather than the simple specification of one marginal and one conditional distribution, several general families of conditionally specified bivariate distributions have been derived and studied at great length. Finally, bivariate distributions generated by a variety of copulas and their flexibility (in terms of accommodating association/correlation) and structural properties have received considerable attention. All these developments and advances necessitated the present volume and have thus resulted in a substantially different version than the last edition, both in terms of coverage and topics of discussion.
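To make the copula idea concrete, here is a minimal Python sketch (not taken from the book) of the standard Gaussian-copula construction; the correlation value and the marginals chosen below are illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def gaussian_copula_sample(n, rho, marginal_x, marginal_y):
    """Draw (X, Y) with the given marginals coupled by a Gaussian copula:
    correlated normals -> uniforms via the normal CDF -> target marginals
    via their inverse CDFs."""
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    u = stats.norm.cdf(z)                      # uniforms carrying the dependence
    return marginal_x.ppf(u[:, 0]), marginal_y.ppf(u[:, 1])

# Example: exponential and gamma marginals with moderate positive association.
x, y = gaussian_copula_sample(10_000, rho=0.6,
                              marginal_x=stats.expon(),
                              marginal_y=stats.gamma(a=2.0))
print(np.corrcoef(x, y)[0, 1])                 # induced correlation
```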
This proceedings volume gathers selected, peer-reviewed papers presented at the 41st International Conference on Infinite Dimensional Analysis, Quantum Probability and Related Topics (QP41), held virtually at the United Arab Emirates University (UAEU) in Al Ain, Abu Dhabi, from March 28th to April 1st, 2021. The works cover recent developments in quantum probability and infinite dimensional analysis, with a special focus on applications to mathematical physics and quantum information theory. Covered topics include white noise theory, quantum field theory, quantum Markov processes, free probability, interacting Fock spaces, and more. By emphasizing the interconnection and interdependence of these research topics and their real-life applications, this reputed conference has established itself as a distinguished forum for communicating and discussing new findings in truly relevant aspects of theoretical and applied mathematics, notably in the field of mathematical physics, as well as an event of choice for the promotion of mathematical applications that address the most relevant problems found in industry. That makes this volume suitable reading not only for researchers and graduate students with an interest in the field but for practitioners as well.
Providing an easy explanation of the fundamentals, methods, and applications of chemometrics, this book:
- Acts as a practical guide to multivariate data analysis techniques
- Explains the methods used in chemometrics and teaches the reader to perform all relevant calculations
- Presents the basic chemometric methods as worksheet functions in Excel
- Includes a Chemometrics Add-In for download, which uses Microsoft Excel(R) for chemometrics training
- Provides online downloads including workbooks with examples
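As a taste of the kind of multivariate method involved, here is a plain-NumPy sketch of principal component analysis, a workhorse of chemometrics (the book itself works in Excel; the synthetic "spectra" below are an illustrative assumption):

```python
import numpy as np

def pca(X, k):
    """Principal component analysis via the SVD of the mean-centered data."""
    Xc = X - X.mean(axis=0)                    # center each variable
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :k] * s[:k]                  # sample coordinates on the PCs
    loadings = Vt[:k].T                        # variable contributions
    explained = s[:k]**2 / np.sum(s**2)        # variance fraction per PC
    return scores, loadings, explained

# Example: 50 synthetic "spectra" at 10 wavelengths driven by 2 latent factors.
rng = np.random.default_rng(2)
X = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 10)) \
    + 0.1 * rng.normal(size=(50, 10))
print(pca(X, k=2)[2])    # the first two PCs should carry most of the variance
```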
This book primarily aims to discuss emerging topics in statistical methods and to boost research, education, and training in order to advance statistical modeling of interval-censored survival data. Commonly collected in public health and biomedical research, among other sources, interval-censored survival data can easily be mistaken for typical right-censored survival data, which can result in erroneous statistical inference due to the complexity of this type of data. The book invites a group of internationally leading researchers to systematically discuss and explore the historical development of the associated methods and their computational implementations, as well as emerging topics related to interval-censored data. It covers a variety of topics, including univariate interval-censored data, multivariate interval-censored data, clustered interval-censored data, competing risk interval-censored data, data with interval-censored covariates, interval-censored data from electronic medical records, and misclassified interval-censored data. Researchers, students, and practitioners can directly apply the state-of-the-art methods covered in the book to tackle problems in research, education, training, and consultation.
The first derivative of a particle coordinate is its velocity and the second is its acceleration, but what does a derivative of fractional order mean? Where does it come from, how does it work, and where does it lead? This two-volume book, written at a high didactic level, answers these questions. The first volume of Fractional Derivatives for Physicists and Engineers contains a clear introduction to the fractional calculus, a modern branch of analysis; the second develops a wide panorama of applications of the fractional calculus to various physical problems. The book opens new perspectives for readers working on turbulence and semiconductors, plasma and thermodynamics, mechanics and quantum optics, nanophysics and astrophysics. It is addressed to students, engineers, and physicists, to specialists in probability theory and statistics, in mathematical modeling, and in numerical simulations, and to everybody who does not wish to stand apart from the new mathematical methods that are becoming more and more popular. Prof. Vladimir V. UCHAIKIN is a well-known Russian scientist and pedagogue, an Honored Worker of the Russian Higher School, and a member of the Russian Academy of Natural Sciences. He is the author of about three hundred articles and more than a dozen books (mostly in Russian) on cosmic ray physics, mathematical physics, Lévy stable statistics, and Monte Carlo methods, with applications to anomalous processes in complex systems at various levels: from quantum dots to the Milky Way galaxy.
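For readers wondering what such a derivative looks like on paper, one standard definition from the fractional-calculus literature (the Riemann-Liouville form, stated here for orientation; the book discusses this and related definitions in depth) is:

```latex
% Riemann-Liouville fractional derivative of order \alpha, with n-1 < \alpha < n:
D^{\alpha}_{a} f(t) \;=\; \frac{1}{\Gamma(n-\alpha)}\,
  \frac{d^{n}}{dt^{n}} \int_{a}^{t} \frac{f(\tau)}{(t-\tau)^{\,\alpha-n+1}}\, d\tau .
```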
This book focuses on multi-omics big-data integration, data-mining techniques, and cutting-edge omics research, in both principles and applications, for a deep understanding of Traditional Chinese Medicine (TCM) and diseases, covering the following aspects: (1) basics of multi-omics data and analytical methods for TCM and diseases; (2) the need for omics studies in TCM research, and the basic background of omics research in TCM and disease; (3) a better understanding of multi-omics big-data integration techniques; (4) a better understanding of multi-omics big-data mining techniques and their different applications for extracting insights from omics data in TCM and disease research; (5) TCM preparation quality control, checking for both prescribed and unexpected ingredients, including biological and chemical ingredients; (6) TCM preparation source tracking; (7) TCM preparation network pharmacology analysis; (8) TCM analysis data resources, web services, and visualizations; and (9) TCM geoherbalism examination and authentic TCM identification. Traditional Chinese Medicine has existed for several thousand years, yet only in recent decades have we realized that research on TCM can be profoundly boosted by omics technologies. Devised as a book on TCM and disease research in the omics age, it focuses on data integration and data-mining methods for multi-omics research, explaining in detail and with supporting examples the "what", "why", and "how" of omics in TCM-related research. It is an attempt to bridge the gap between TCM-related multi-omics big data and data-mining techniques, for best practice of contemporary bioinformatics and in-depth insights into TCM-related questions.
More data has been produced in the 21st century than in all of prior human history combined. Yet are we making better decisions today than in the past? How many poor decisions result from the absence of data? The existence of an overwhelming amount of data has affected how we make decisions, but it has not necessarily improved them. To make better decisions, people need good judgment based on data literacy: the ability to extract meaning from data. Including data in the decision-making process can bring considerable clarity in answering our questions. Nevertheless, human beings can become distracted, overwhelmed, and even confused in the presence of too much data. The book presents cautionary tales of what can happen when too much attention is spent on acquiring more data instead of on understanding how to best use the data we already have. Data is not produced in a vacuum, and individuals who possess data literacy will understand the environment and incentives in the data-generating process. Readers of this book will learn what questions to ask, what data to pay attention to, and what pitfalls to avoid in order to make better decisions. They will also be less vulnerable to those who manipulate data for misleading purposes.
Now in its second edition, this textbook provides an applied and unified introduction to parametric, nonparametric and semiparametric regression that closes the gap between theory and application. The most important models and methods in regression are presented on a solid formal basis, and their appropriate application is shown through numerous examples and case studies. The most important definitions and statements are concisely summarized in boxes, and the underlying data sets and code are available online on the book's dedicated website. Availability of (user-friendly) software has been a major criterion for the methods selected and presented. The chapters address the classical linear model and its extensions, generalized linear models, categorical regression models, mixed models, nonparametric regression, structured additive regression, quantile regression and distributional regression models. Two appendices describe the required matrix algebra, as well as elements of probability calculus and statistical inference. In this substantially revised and updated new edition, the overview of regression models has been extended and now includes the relation between regression models and machine learning; additional details on statistical inference in structured additive regression models have been added; and a completely reworked chapter augments the presentation of quantile regression with a comprehensive introduction to distributional regression models. Regularization approaches are now discussed more extensively in most chapters of the book. The book primarily targets an audience that includes students, teachers and practitioners in social, economic, and life sciences, as well as students and teachers in statistics programs, and mathematicians and computer scientists with interests in statistical modeling and data analysis. It is written at an intermediate mathematical level and assumes only knowledge of basic probability, calculus, matrix algebra and statistics.
Intangible, invisible, and worth trillions, risk is everywhere. Its quantification and management are key to the success and failure of individuals, businesses, and governments. Whether you're an interested observer or pursuing a career in risk, this book delves into the complex and multi-faceted work that actuaries undertake to quantify, manage, and commodify risk, supporting our society and servicing a range of multi-billion-dollar industries. Starting at the most basic level, the book introduces key concepts in actuarial science, insurance, and pensions. Through case studies, explanations, and mathematical examples, it fosters an understanding of current industry practice. It celebrates the long history of actuarial science and poses the problems facing actuaries in the future, exploring complex global risks, including climate change, aging populations, healthcare models, and pandemic epidemiology, from an actuarial perspective. It gives practical advice for new and potential actuaries on how to identify an area of work to go into, how best to navigate (and pass!) actuarial exams, and how to develop your skills post-qualification. A Risky Business illuminates how actuaries are central to society as we know it, revealing what they do and how they do it. It is the essential primer on actuarial science.
Based on a two-semester course aimed at illustrating various interactions of "pure mathematics" with other sciences, such as hydrodynamics, thermodynamics, statistical physics, and information theory, this text unifies three general topics of analysis and physics: the dimensional analysis of physical quantities, including various applications such as Kolmogorov's model for turbulence; functions of a very large number of variables and the principle of concentration, along with the non-linear law of large numbers, the geometric meaning of the Gauss and Maxwell distributions, and the Kotelnikov-Shannon theorem; and, finally, classical thermodynamics and contact geometry, which covers the two main principles of thermodynamics in the language of differential forms, contact distributions, the Frobenius theorem, and the Carnot-Caratheodory metric. It includes problems, historical remarks, and Zorich's popular article, "Mathematics as language and method."
Features content that has been used extensively in a university setting, allowing the reader to benefit from tried and tested methods, practices, and knowledge. In contrast to existing books on the market, it details the specialized packages that have been developed over the past decade and focuses on pulling real-time data directly from free data sources on the internet. It achieves this goal by providing a large number of examples on hot topics such as machine learning. The book assumes no prior knowledge of R, making it useful to a range of readers from undergraduates to professionals. Comprehensive explanations make the reader proficient in a multitude of advanced methods, and the book provides overviews of many different resources that readers will find useful.
Recent books in the Wiley Series in Probability and Mathematical Statistics. Series editors: Vic Barnett, J. Stuart Hunter, Adrian F.M. Smith, Geoffrey S. Watson, Ralph A. Bradley, Joseph B. Kadane, Stephen M. Stigler, Nicholas I. Fisher, David G. Kendall, Jozef L. Teugels.
Optimal Design of Experiments, by Friedrich Pukelsheim, Universität Augsburg, Augsburg, Germany (1993). Optimal Design of Experiments presents the first complete theoretical development of optimal design for the linear model, a unified exposition that embraces a wide variety of design problems. It describes the statistical theory involved in designing experiments and applies it to typical special cases. The design problems originating from statistics are solved using tools from linear algebra and convex analysis. The material is presented in a very clear, careful, and organized way. Rather than assaulting traditional ways of thinking about optimal design, this book pulls together formerly separate entities to create a common framework for diverse design problems that share a common goal. Statisticians, mathematicians, engineers, and operations research specialists will find this book stimulating, challenging, and an asset to their work.
Statistics for Spatial Data, Revised Edition, by Noel Cressie, Iowa State University, USA (1993). Designed for the scientific and engineering professional eager to exploit its enormous potential, Statistics for Spatial Data is a primer to the theory as well as the nuts-and-bolts of this influential technique. Focusing on the three areas of geostatistical data, lattice data, and point patterns, the book sheds light on the link between data and model, and reveals how spatial statistical models can be used to solve a host of problems in science and engineering. The previous edition was hailed by Mathematical Reviews as "an excellent book which…will become a basic reference". Revised to reflect state-of-the-art developments, this edition also features many detailed examples, numerous illustrations, and over 1000 references. The first fully comprehensive introduction, Statistics for Spatial Data is an essential guide for professionals in biology, earth sciences, civil, electrical and agricultural engineering, geography, epidemiology, and ecology.
This book presents a general method for deriving higher-order statistics of multivariate distributions with simple algorithms that allow for actual calculations. Multivariate nonlinear statistical models require the study of higher-order moments and cumulants. The main tool used for the definitions is the tensor derivative, leading to several useful expressions concerning Hermite polynomials, moments, cumulants, skewness, and kurtosis. A general test of multivariate skewness and kurtosis is obtained from this treatment. Exercises are provided for each chapter to help the readers understand the methods. Lastly, the book includes a comprehensive list of references, equipping readers to explore further on their own.
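For orientation, the univariate special case of these objects can be written compactly (the book develops the multivariate, tensor-derivative version):

```latex
% Cumulants as derivatives of the cumulant generating function at t = 0:
K(t) = \log \mathbb{E}\,e^{tX}, \qquad
\kappa_{r} = \frac{d^{r}K}{dt^{r}}\Big|_{t=0}, \qquad
\text{skewness} = \frac{\kappa_{3}}{\kappa_{2}^{3/2}}, \qquad
\text{excess kurtosis} = \frac{\kappa_{4}}{\kappa_{2}^{2}}.
```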
This graduate-level textbook covers modelling, programming and analysis of stochastic computer simulation experiments, including the mathematical and statistical foundations of simulation and why it works. The book is rigorous and complete, but concise and accessible, providing all necessary background material. Object-oriented programming of simulations is illustrated in Python, while the majority of the book is programming language independent. In addition to covering the foundations of simulation and simulation programming for applications, the text prepares readers to use simulation in their research. A solutions manual for end-of-chapter exercises is available for instructors.
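As a flavor of the kind of model such a course treats, here is a minimal Python sketch (not the book's code) of an M/M/1 queue simulated through the Lindley recursion; the arrival and service rates are illustrative assumptions:

```python
import random

def mm1_mean_wait(lam, mu, n, seed=0):
    """Average waiting time in an M/M/1 queue via the Lindley recursion
    W_{k+1} = max(0, W_k + S_k - A_{k+1})."""
    rng = random.Random(seed)
    w, total = 0.0, 0.0
    for _ in range(n):
        total += w
        s = rng.expovariate(mu)    # service time of the current customer
        a = rng.expovariate(lam)   # time until the next arrival
        w = max(0.0, w + s - a)
    return total / n

# Theory gives E[W] = rho / (mu - lam) with rho = lam/mu; here 0.5/0.5 = 1.0.
print(mm1_mean_wait(lam=0.5, mu=1.0, n=200_000))
```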
FCA is an important formalism associated with a variety of research areas such as lattice theory, knowledge representation, data mining, machine learning, and the Semantic Web. It is successfully exploited in an increasing number of application domains such as software engineering, information retrieval, social network analysis, and bioinformatics. Its mathematical power comes from its concept lattice formalization, in which each element in the lattice captures a formal concept while the whole structure represents a conceptual hierarchy that supports browsing, clustering, and association rule mining. Complex data analytics refers to advanced methods and tools for mining and analyzing data with complex structures, such as XML/JSON data, text and image data, multidimensional data, graphs, sequences, and streaming data. It also covers visualization mechanisms used to highlight the discovered knowledge. This edited book examines a set of important and relevant research directions in complex data management and updates the contribution of the FCA community in analyzing complex and large data such as knowledge graphs and interlinked contexts. For example, Formal Concept Analysis and some of its extensions are exploited, revisited, and coupled with recent parallel and distributed processing paradigms to maximize the benefits of analyzing large data.
This is the first book to provide a systematic description of the statistical properties of large-scale financial data. Specifically, the power-law and log-normal distributions observed at a given time, and their changes, are derived here using time-reversal symmetry, quasi-time-reversal symmetry, Gibrat's law, and the non-Gibrat property observed over short-term periods. The statistical properties observed over long-term periods, such as power-law and exponential growth, are also derived. These subjects have not been thoroughly discussed in the field of economics in the past, and this book is a compilation of the author's series of studies, reconstructing the data analyses published in 15 academic journals with new data. The book provides readers with a theoretical and empirical understanding of how the statistical properties observed in firms' large-scale data are related along the time axis. It is possible to expand this discussion to understand, theoretically and empirically, how the statistical properties observed among differing large-scale financial data are related. This possibility provides readers with an approach to microfoundations, an important issue that has been studied in economics for many years.
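The distributional regimes and Gibrat's law referred to above can be summarized as follows (standard forms, stated for orientation; the book's notation may differ):

```latex
% Power-law (Pareto) tail at large x, with exponent \mu:
P(x > X) \propto X^{-\mu},
% log-normal body at mid-range x:
p(x) \propto \frac{1}{x}\exp\!\left(-\frac{(\ln x - m)^{2}}{2\sigma^{2}}\right),
% and Gibrat's law: the distribution Q of the growth rate R = x_{t+1}/x_{t}
% does not depend on the initial size x_{t}:
Q(R \mid x_{t}) = Q(R).
```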
This concise yet thorough book is enhanced with simulations and graphs to build the intuition of readers. Models for Probability and Statistical Inference was written over a five-year period and serves as a comprehensive treatment of the fundamentals of probability and statistical inference. With detailed theoretical coverage found throughout, readers acquire the fundamentals needed to advance to more specialized topics, such as sampling, linear models, design of experiments, statistical computing, survival analysis, and bootstrapping. Ideal as a textbook for a two-semester sequence on probability and statistical inference, the early chapters cover probability, including discussions of: discrete models and random variables; discrete distributions including the binomial, hypergeometric, geometric, and Poisson; continuous, normal, gamma, and conditional distributions; and limit theory. Since limit theory is usually the most difficult topic for readers to master, the author thoroughly discusses modes of convergence of sequences of random variables, with special attention to convergence in distribution. The second half of the book addresses statistical inference, beginning with a discussion of point estimation, followed by coverage of consistency and confidence intervals. Further areas of exploration include: distributions defined in terms of the multivariate normal, chi-square, t, and F (central and non-central); the one- and two-sample Wilcoxon tests, together with methods of estimation based on both; linear models with a linear space-projection approach; and logistic regression. Each section contains a set of problems ranging in difficulty from simple to more complex, and selected answers as well as proofs of almost all statements are provided. An abundance of figures, in addition to helpful simulations and graphs produced by the statistical package S-Plus(R), is included to help build the intuition of readers.
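Since the description singles out convergence in distribution, the definition in question (standard, not specific to this book) is:

```latex
% X_n converges in distribution to X when the CDFs converge pointwise
% at every continuity point of the limit F:
X_{n} \xrightarrow{\;d\;} X
\iff
F_{n}(x) \to F(x) \quad \text{for every } x \text{ at which } F \text{ is continuous.}
```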
Ordinal Data Modeling is a comprehensive treatment of ordinal data models from both likelihood and Bayesian perspectives. Written for graduate students and researchers in the statistical and social sciences, this book describes a coherent framework for understanding binary and ordinal regression models, item response models, graded response models, and ROC analyses, and for exposing the close connection between these models. A unique feature of this text is its emphasis on applications. All models developed in the book are motivated by real datasets, and considerable attention is devoted to the description of diagnostic plots and residual analyses. Software and datasets used for all analyses described in the text are available on websites listed in the preface.
The analysis and design of engineering and industrial systems have come to rely heavily on the use of optimization techniques. The theory developed over the last 40 years, coupled with an increasing number of powerful computational procedures, has made it possible to routinely solve problems arising in such diverse fields as aircraft design, material flow, curve fitting, capital expansion, and oil refining, to name just a few. Mathematical programming plays a central role in each of these areas and can be considered the primary tool for systems optimization. Limits have been placed on the types of problems that can be solved, though, by the difficulty of handling functions that are not everywhere differentiable. To deal with real applications, it is often necessary to be able to optimize functions that, while continuous, are not differentiable in the classical sense. As the title of the book indicates, our chief concern is with (i) nondifferentiable mathematical programs, and (ii) two-level optimization problems. In the first half of the book, we study basic theory for general smooth and nonsmooth functions of many variables. After providing some background, we extend traditional (differentiable) nonlinear programming to the nondifferentiable case. The term used for the resultant problem is nondifferentiable mathematical programming. The major focus is on the derivation of optimality conditions for general nondifferentiable nonlinear programs. We introduce the concept of the generalized gradient and derive Kuhn-Tucker-type optimality conditions for the corresponding formulations.
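The generalized gradient introduced here is, in the usual Clarke formulation for a locally Lipschitz function, the following set (a standard definition, stated for orientation; the book's notation may differ):

```latex
% Clarke generalized gradient of a locally Lipschitz f at x:
\partial f(x) \;=\; \operatorname{conv}\Bigl\{\, \lim_{k\to\infty} \nabla f(x_{k}) \;:\;
  x_{k} \to x,\ \nabla f(x_{k}) \text{ exists} \,\Bigr\}.
% Example: for f(x) = |x| on \mathbb{R}, \partial f(0) = [-1, 1], and an
% unconstrained optimality condition of Kuhn-Tucker type reads 0 \in \partial f(x^{*}).
```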
This book is about random objects (sequences, processes, arrays, measures, functionals) with interesting symmetry properties. Here symmetry should be understood in the broad sense of invariance under a family (not necessarily a group) of measurable transformations. To be precise, it is not the random objects themselves but rather their distributions that are assumed to be symmetric. Though many probabilistic symmetries are conceivable and have been considered in various contexts, four of them (stationarity, contractability, exchangeability, and rotatability) stand out as especially interesting and important in several ways: their study leads to some deep structural theorems of great beauty and significance, they are intimately related to some basic areas of modern probability theory, and they are mutually connected through a variety of basic relationships. The mentioned symmetries may be defined as invariance in distribution under shifts, contractions, permutations, and rotations. Stationarity being a familiar classical topic, treated extensively in many standard textbooks and monographs, most of our attention will be focused on the remaining three basic symmetries. The study of general probabilistic symmetries essentially originated with the work of de Finetti (1929-30), who proved by elementary means (no advanced tools being yet available) the celebrated theorem named after him: the fact that every infinite sequence of exchangeable events is mixed i.i.d.
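The celebrated theorem can be stated in one line (a standard formulation of the result mentioned above):

```latex
% de Finetti: for an infinite exchangeable sequence of events, with indicators
% X_i and s = x_1 + ... + x_n, there is a mixing measure \nu on [0,1] such that
P(X_{1} = x_{1}, \dots, X_{n} = x_{n})
  \;=\; \int_{0}^{1} \theta^{\,s} (1-\theta)^{\,n-s} \,\nu(d\theta).
```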
Methods of reasoning lying at the heart of rational scientific inference are explored and applied in some 55 papers by contributors from industry, defense establishments, and academia, brought together under the sponsorship of the US Navy and several European and American chemical corporations.
The food market is changing from a producer-controlled to a consumer-directed market. A main driving force is consumer concern about agricultural production methods and food safety. More than before, the consumer demands transparency of the production and processing chain.
The volume presents work from the 86th annual meeting of the Psychometric Society, held virtually on July 19-23, 2021. About 500 individuals contributed paper presentations, symposia, poster presentations, pre-conference workshops, keynote presentations, and invited presentations. Since the 77th meeting, Springer has published the proceedings volume from this annual meeting, allowing presenters to share their work and ideas with the wider research community while still undergoing a thorough review process. This proceedings volume covers a diverse set of psychometric topics, including item response theory, Bayesian models, reliability, longitudinal measures, and cognitive diagnostic models.
You may like...
Advances in Quantum Monte Carlo, by Shigenori Tanaka, Stuart M. Rothstein, … (Hardcover, R5,469)
Statistics For Business And Economics, by David Anderson, James Cochran, … (Paperback)
Numbers, Hypotheses & Conclusions - A…, by Colin Tredoux, Kevin Durrheim (Paperback)
Order Statistics: Applications, Volume…, by Narayanaswamy Balakrishnan, C.R. Rao (Hardcover, R3,377)
Fundamentals of Social Research Methods, by Claire Bless, Craig Higson-Smith, … (Paperback)
A Compendium of the Census of…, by Massachusetts Bureau of Statistics O (Hardcover, R889)
Statistics for Management and Economics, by Gerald Keller, Nicoleta Gaciu (Paperback)