'Approach your problems from the right end and begin with the answers. Then one day, perhaps you will find the final question.' ('The Hermit Clad in Crane Feathers' in R. van Gulik's The Chinese Maze Murders)

'It isn't that they can't see the solution. It is that they can't see the problem.' (G.K. Chesterton, 'The Point of a Pin' in The Scandal of Father Brown)

Growing specialisation and diversification have brought a host of monographs and textbooks on increasingly specialized topics. However, the "tree" of knowledge of mathematics and related fields does not grow only by putting forth new branches. It also happens, quite often in fact, that branches which were thought to be completely disparate are suddenly seen to be related. Further, the kind and level of sophistication of mathematics applied in various sciences has changed drastically in recent years: measure theory is used (non-trivially) in regional and theoretical economics; algebraic geometry interacts with physics; the Minkowski lemma, coding theory and the structure of water meet one another in packing and covering theory; quantum fields, crystal defects and mathematical programming profit from homotopy theory; Lie algebras are relevant to filtering and prediction; and electrical engineering can use Stein spaces. And in addition to this there are such new emerging subdisciplines as "experimental mathematics", "CFD", "completely integrable systems", "chaos, synergetics and large-scale order", which are almost impossible to fit into the existing classification schemes. They draw upon widely different sections of mathematics.
In general terms, the shape of an object, data set, or image can be defined as the total of all information that is invariant under translations, rotations, and isotropic rescalings. Thus two objects can be said to have the same shape if they are similar in the sense of Euclidean geometry. For example, all equilateral triangles have the same shape, and so do all cubes. In applications, bodies rarely have exactly the same shape within measurement error. In such cases the variation in shape can often be the subject of statistical analysis. The last decade has seen a considerable growth in interest in the statistical theory of shape. This has been the result of a synthesis of a number of different areas and a recognition that there is considerable common ground among these areas in their study of shape variation. Despite this synthesis of disciplines, there are several different schools of statistical shape analysis. One of these, the Kendall school of shape analysis, uses a variety of mathematical tools from differential geometry and probability, and is the subject of this book. The book does not assume a particularly strong background by the reader in these subjects, and so a brief introduction is provided to each of these topics. Anyone who is unfamiliar with this material is advised to consult a more complete reference. As the literature on these subjects is vast, the introductory sections can be used as a brief guide to the literature.
The last two decades have seen enormous developments in statistical methods for incomplete data. The EM algorithm and its extensions, multiple imputation, and Markov chain Monte Carlo provide a set of flexible and reliable tools for inference in large classes of missing-data problems. Yet, in practical terms, those developments have had surprisingly little impact on the way most data analysts handle missing values on a routine basis.
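The pooling step of multiple imputation is simple enough to sketch. The following is a minimal illustration (not taken from any book described here): missing values are imputed by drawing from a normal fitted to the observed values, a deliberately crude imputation model, and the resulting m point estimates are combined with Rubin's rules.

```python
import random
import statistics

random.seed(0)

# Toy data: 50 draws from N(10, 2^2), with every 5th value made missing.
complete = [random.gauss(10.0, 2.0) for _ in range(50)]
data = [x if i % 5 else None for i, x in enumerate(complete)]

observed = [x for x in data if x is not None]
mu_obs = statistics.mean(observed)
sd_obs = statistics.stdev(observed)

# Multiple imputation: build m completed data sets by drawing each missing
# value from a normal fitted to the observed data, then pool the estimates.
m = 20
estimates, variances = [], []
for _ in range(m):
    filled = [x if x is not None else random.gauss(mu_obs, sd_obs)
              for x in data]
    estimates.append(statistics.mean(filled))
    variances.append(statistics.variance(filled) / len(filled))

q_bar = statistics.mean(estimates)        # pooled point estimate
w = statistics.mean(variances)            # within-imputation variance
b = statistics.variance(estimates)        # between-imputation variance
total_var = w + (1 + 1 / m) * b           # Rubin's total variance

print(q_bar, total_var)
```

The key point of Rubin's rules is visible in the last line: the between-imputation variance b, inflated by the factor (1 + 1/m), adds the uncertainty due to the missing data on top of the ordinary sampling variance w.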
Introducing a novel approach to setting environmental pollution standards that allows for proper treatment of uncertainty and variation, this book surveys the forms of standards and proposes a new kind of "statistically verifiable ideal standard." Setting Environmental Standards includes: a current analysis of the treatment of uncertainty and variation in environmental standard setting; a review of basic principles in standard setting, including costs, actions and effects, and benefits; examples where uncertainty and variation have been well treated in current practice, as well as examples where clear deficiencies are apparent; and specific proposals for a future approach to setting environmental pollution standards, encompassing the anticipated elements of uncertainty and variability. The issues discussed serve statisticians as well as those involved with environmental standards. Scientists in agencies responsible for setting standards, in organizations advising such agencies, or working in industries subject to these standards will find Setting Environmental Standards an invaluable reference.
This open access proceedings volume brings selected, peer-reviewed contributions presented at the Stochastic Transport in Upper Ocean Dynamics (STUOD) 2021 Workshop, held virtually and in person at Imperial College London, UK, September 20-23, 2021. The STUOD project is supported by an ERC Synergy Grant, and led by Imperial College London, the National Institute for Research in Computer Science and Automatic Control (INRIA) and the French Research Institute for Exploitation of the Sea (IFREMER). The project aims to deliver new capabilities for assessing variability and uncertainty in upper ocean dynamics. It will provide decision makers with a means of quantifying the effects of local patterns of sea level rise, heat uptake, carbon storage and change of oxygen content and pH in the ocean. Its multimodal monitoring will enhance the scientific understanding of marine debris transport, tracking of oil spills and accumulation of plastic in the sea. All topics of these proceedings are essential to the scientific foundations of oceanography, which has a vital role in climate science. Studies convened in this volume focus on a range of fundamental areas, including: observations at a high resolution of upper ocean properties such as temperature, salinity, topography, wind, waves and velocity; large-scale numerical simulations; data-based stochastic equations for upper ocean dynamics that quantify simulation error; and stochastic data assimilation to reduce uncertainty. These fundamental subjects in modern science and technology are urgently required in order to meet the challenges of climate change faced today by human society. This proceedings volume represents a lasting legacy of crucial scientific expertise to help meet this ongoing challenge, for the benefit of academics and professionals in pure and applied mathematics, computational science, data analysis, data assimilation and oceanography.
The series is aimed specifically at publishing peer-reviewed reviews and contributions presented at workshops and conferences. Each volume is associated with a particular conference, symposium or workshop. These events cover various topics within pure and applied mathematics and provide up-to-date coverage of new developments, methods and applications.
The present monograph is primarily an outgrowth of our own research on certain aspects of Bayesian inference in finite population sampling. Finite population sampling has been an integral part of statistics since its beginning. The topic continues its impact in the theory and practice of statistics, especially for researchers in survey sampling. Inference for finite population sampling utilizes prior information either explicitly or implicitly. Bayesian inference makes explicit use of this information as part of the model. This is in striking contrast to design-based inference in survey sampling, where prior knowledge is incorporated only as auxiliary information. On the other hand, there is a close relationship between the Bayesian approach and the superpopulation approach, although they differ in their foundational interpretations. Operationally, however, the difference is much less pronounced, as many estimators obtained via superpopulation models are also obtainable as Bayes estimators, and vice versa. This monograph does not aim to provide a complete up-to-date account of the Bayesian literature in finite population sampling. Rather, it treats topics reflecting the authors' personal interests. Its main aim is to demonstrate that a variety of levels of prior information can be used in survey sampling in a Bayesian manner. Situations considered range from a noninformative Bayesian justification of standard frequentist methods, when the only prior information available is the belief in the exchangeability of the units, to a full-fledged Bayesian model.
Placing data in the context of the scientific discovery of knowledge through experimentation, Practical Data Analysis for Designed Experiments examines issues of comparing groups and sorting out factor effects and the consequences of imbalance and nesting, then works through more practical applications of the theory. Written in a modern and accessible manner, this book is a useful blend of theory and methods. Exercises included in the text are based on real experiments and real data.
'Et moi, ..., si j'avais su comment en revenir, je n'y serais point allé.' (Jules Verne)

'One service mathematics has rendered the human race. It has put common sense back where it belongs, on the topmost shelf next to the dusty canister labelled "discarded nonsense".' (Eric T. Bell)

'The series is divergent; therefore we may be able to do something with it.' (O. Heaviside)

Mathematics is a tool for thought. A highly necessary tool in a world where both feedback and non-linearities abound. Similarly, all kinds of parts of mathematics serve as tools for other parts and for other sciences. Applying a simple rewriting rule to the Eric T. Bell quote above, one finds such statements as: 'One service topology has rendered mathematical physics ...'; 'One service logic has rendered computer science ...'; 'One service category theory has rendered mathematics ...'. All arguably true. And all statements obtainable this way form part of the raison d'être of this series.
In the 50 years that have passed since Alfred Lotka's death in 1949, his position as the father of mathematical demography has been secure. With his first demographic papers in 1907 and 1911 (the latter co-authored with F. R. Sharpe) he laid the foundations for stable population theory, and over the next decades both largely completed it and found convenient mathematical approximations that gave it practical applications. Since his time, the field has moved in several directions he did not foresee, but in the main it is still his. Despite Lotka's stature, however, the reader still needs to hunt through the old journals to locate his principal works. As yet no extensive collections of his papers are in print, and for his part he never assembled his contributions into a single volume in English. He did so in French, in the two-part Theorie Analytique des Associations Biologiques (1934, 1939). Drawing on his Elements of Physical Biology (1925) and most of his mathematical papers, Lotka offered French readers insights into his biological thought and a concise and mathematically accessible summary of what he called recent contributions in demographic analysis. We would be accurate in also calling it Lotka's contributions in demographic analysis.
This book is concerned with the processing of signals that have been sampled and digitized. The authors present algorithms for the optimization, random simulation, and numerical integration of probability densities for applications of Bayesian inference to signal processing. In particular, methods are developed for the computation of marginal densities and evidence, and are applied to previously intractable problems either involving large numbers of parameters or where the signal model is of a complex form. The emphasis is on the applications of these methods notably to the restoration of digital audio recordings and biomedical data. After a chapter which sets out the main principles of Bayesian inference applied to signal processing, subsequent chapters cover numerical approaches to these techniques, the use of Markov chain Monte Carlo methods, the identification of abrupt changes in data using the Bayesian piecewise linear model, and identifying missing samples in digital audio signals.
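The MCMC machinery the blurb mentions can be shown in miniature. The sketch below is a generic illustration, not the book's algorithms: a random-walk Metropolis sampler for the single level parameter of a toy constant-plus-noise signal, with a flat prior and known noise standard deviation.

```python
import math
import random

random.seed(1)

# Toy "signal": noisy observations of an unknown constant level mu
# (a stand-in for the far richer signal models treated in the book).
true_mu, sigma = 3.0, 1.0
data = [random.gauss(true_mu, sigma) for _ in range(100)]

def log_post(mu):
    """Log posterior for mu (flat prior, known Gaussian noise)."""
    return -sum((y - mu) ** 2 for y in data) / (2 * sigma ** 2)

# Random-walk Metropolis: propose a nearby mu and accept with
# probability min(1, posterior ratio). The chain then samples
# from the posterior distribution of mu.
mu, chain = 0.0, []
for _ in range(5000):
    prop = mu + random.gauss(0.0, 0.3)
    if random.random() < math.exp(min(0.0, log_post(prop) - log_post(mu))):
        mu = prop
    chain.append(mu)

# Discard a burn-in period, then summarize the posterior by its mean.
posterior_mean = sum(chain[1000:]) / len(chain[1000:])
print(posterior_mean)
```

Only the posterior ratio is needed, so the (often intractable) normalizing constant of the posterior never has to be computed; this is the property that makes MCMC attractive for the high-dimensional restoration problems the book addresses.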
Comprising the major theorems of probability theory and the measure-theoretical foundations of the subject, the main topics treated here are independence, interchangeability, and martingales. Particular emphasis is placed upon stopping times, both as tools in proving theorems and as objects of interest in themselves. No prior knowledge of measure theory is assumed, and a unique feature of the book is the combined presentation of measure and probability. It is easily adapted for graduate students familiar with measure theory using the guidelines given. Special features include: a comprehensive treatment of the law of the iterated logarithm; the Marcinkiewicz-Zygmund inequality, its extension to martingales and applications thereof; development and applications of the second-moment analogue of Wald's equation; limit theorems for martingale arrays; the central limit theorem for the interchangeable and martingale cases; moment convergence in the central limit theorem; a complete discussion, including the central limit theorem, of the random casting of r balls into n cells; recent martingale inequalities; and the Cramér-Lévy theorem and factor-closed families of distributions.
Learn the basics of white noise theory with White Noise Distribution Theory. This book covers the mathematical foundation and key applications of white noise theory without requiring advanced knowledge in this area. This instructive text specifically focuses on relevant application topics such as integral kernel operators, Fourier transforms, Laplacian operators, white noise integration, Feynman integrals, and positive generalized functions. Extremely well-written by one of the field's leading researchers, White Noise Distribution Theory is destined to become the definitive introductory resource on this challenging topic.
This work provides descriptions, explanations and examples of the Bayesian approach to statistics, demonstrating the utility of Bayesian methods for analyzing real-world problems in the health sciences. The work considers the individual components of Bayesian analysis.
Multiple Comparisons covers all-pairwise comparisons, multiple comparisons with the best, and multiple comparisons with a control. Confidence interval methods and stepwise methods are described. Abuses and misconceptions are exposed, and the reader is guided to the correct method for each problem. Connections with bioequivalence, drug stability, and toxicity studies are discussed. Applications are illustrated with real data, analyzed by computer packages. Extension to the general linear model is provided.
This text describes regression-based approaches to analyzing longitudinal and repeated measures data. It emphasizes statistical models, discusses the relationships between different approaches, and uses real data to illustrate practical applications. It uses commercially available software when it exists and illustrates the program code and output. The data appendix provides many real data sets, beyond those used for the examples, which can serve as the basis for exercises.
The authoritative contributions gathered in this volume reflect the state of the art in compositional data analysis (CoDa). The respective chapters cover all aspects of CoDa, ranging from mathematical theory, statistical methods and techniques to its broad range of applications in geochemistry, the life sciences and other disciplines. The selected and peer-reviewed papers were originally presented at the 6th International Workshop on Compositional Data Analysis, CoDaWork 2015, held in L'Escala (Girona), Spain. Compositional data is defined as vectors of positive components and constant sum, and, more generally, all those vectors representing parts of a whole which only carry relative information. Examples of compositional data can be found in many different fields such as geology, chemistry, economics, medicine, ecology and sociology. As most of the classical statistical techniques are incoherent on compositions, in the 1980s John Aitchison proposed the log-ratio approach to CoDa. This became the foundation of modern CoDa, which is now based on a specific geometric structure for the simplex, an appropriate representation of the sample space of compositional data. The International Workshops on Compositional Data Analysis offer a vital discussion forum for researchers and practitioners concerned with the statistical treatment and modelling of compositional data or other constrained data sets and the interpretation of models and their applications. The goal of the workshops is to summarize and share recent developments, and to identify important lines of future research.
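Aitchison's log-ratio idea mentioned above is easy to show concretely. The sketch below is a generic illustration, not tied to any CoDaWork paper: it computes the centred log-ratio (clr) transform of a composition, which maps simplex-valued data to ordinary real coordinates where classical statistical techniques apply.

```python
import math

def closure(x):
    """Rescale a vector of positive parts so it sums to 1 (the simplex)."""
    total = sum(x)
    return [xi / total for xi in x]

def clr(x):
    """Centred log-ratio transform: log of each part over the
    geometric mean of all parts."""
    comp = closure(x)
    g = math.exp(sum(math.log(xi) for xi in comp) / len(comp))
    return [math.log(xi / g) for xi in comp]

# A hypothetical 3-part composition, e.g. sand/silt/clay percentages.
parts = [70.0, 20.0, 10.0]
z = clr(parts)
print(z)
```

Two properties make this coherent for compositions: the clr coordinates always sum to zero, and they are invariant to the overall scale of the input, so only the relative information that defines compositional data survives the transform.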
Counting: The Art of Enumerative Combinatorics provides an introduction to discrete mathematics that addresses questions that begin, "How many ways are there to ...?" For example, "How many ways are there to order a collection of 12 ice cream cones if 8 flavors are available?" At the end of the book the reader should be able to answer such nontrivial counting questions as, "How many ways are there to color the faces of a cube if k colors are available, with each face having exactly one color?" or "How many ways are there to stack n poker chips, each of which can be red, white, blue, or green, such that each red chip is adjacent to at least 1 green chip?" Since there are no prerequisites, this book can be used for college courses in combinatorics at the sophomore level for either computer science or mathematics students. The first five chapters have served as the basis for a graduate course for in-service teachers. Chapter 8 introduces graph theory.
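The cone question quoted above can be answered with the stars-and-bars formula, assuming "order a collection" means that flavors may repeat and arrangement does not matter (a reading of the blurb, not necessarily the book's own phrasing):

```python
from math import comb

def multiset_count(flavors, cones):
    """Number of multisets of size `cones` drawn from `flavors` kinds:
    the stars-and-bars count C(flavors + cones - 1, cones)."""
    return comb(flavors + cones - 1, cones)

# 12 cones from 8 available flavors, repeats allowed, order irrelevant.
print(multiset_count(8, 12))  # -> 50388
```

The formula counts arrangements of 12 "stars" (cones) separated by 7 "bars" (boundaries between the 8 flavors), i.e. C(19, 12) = 50388.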
This book is for students taking either a first-year graduate statistics course or an advanced undergraduate statistics course in Psychology. Enough introductory statistics is briefly reviewed to bring everyone up to speed. The book is highly user-friendly without sacrificing rigor, not only in anticipating students' questions, but also in paying attention to the introduction of new methods and notation. In addition, many topics given only casual or superficial treatment are elaborated here, such as: the nature of interaction and its interpretation, in terms of theory and response scale transformations; generalized forms of analysis of covariance; extensive coverage of multiple comparison methods; coverage of nonorthogonal designs; and discussion of functional measurement. The text is structured for reading in multiple passes of increasing depth; for the student who desires deeper understanding, there are optional sections; for the student who is or becomes proficient in matrix algebra, there are still deeper optional sections. The book is also equipped with an excellent set of class-tested exercises and answers.
This book presents the statistical and mathematical principles of smoothing, with a focus on applicable techniques. It splits naturally into two parts: the first is intended for undergraduate students majoring in mathematics, statistics, econometrics or biometrics, while the second is intended for master's and PhD students and researchers. The material is easy to work through, since the e-book character of the text allows maximum flexibility in the intensity of learning (and teaching).
In a family study of breast cancer, epidemiologists in Southern California increase the power for detecting a gene-environment interaction. In Gambia, a study helps a vaccination program reduce the incidence of Hepatitis B carriage. Archaeologists in Austria place a Bronze Age site in its true temporal location on the calendar scale. And in France, researchers map a rare disease with relatively little variation.
This second edition offers a comprehensive presentation of scientific sampling principles and shows how to design a sample survey and analyze the resulting data. It demonstrates the validity of theorems and statements without resorting to detailed proofs.