A concise yet authoritative presentation of key techniques for basic mixture experiments. Inspired by the author's bestselling advanced book on the topic, A Primer on Experiments with Mixtures provides an introductory presentation of the key principles behind experimenting with mixtures. Outlining useful techniques through an applied approach with examples from real research situations, the book supplies a comprehensive discussion of how to design and set up basic mixture experiments, then analyze the data and draw inferences from the results. Drawing on his extensive experience teaching the topic at various levels, the author presents mixture experiments in an easy-to-follow manner that is free of unnecessary formulas and theory. Succinct presentations explore key methods and techniques for carrying out basic mixture experiments, including:
* Designs and models for exploring the entire simplex factor space, with coverage of simplex-lattice and simplex-centroid designs, canonical polynomials, the plotting of individual residuals, and axial designs
* Multiple constraints on the component proportions in the form of lower and/or upper bounds, introducing L-pseudocomponents, multicomponent constraints, and multiple lattice designs for major and minor component classifications
* Techniques for analyzing mixture data, such as model reduction and screening components, as well as additional topics such as measuring the leverage of certain design points
* Models containing ratios of the components, Cox's mixture polynomials, and the fitting of a slack variable model
* A review of least squares and the analysis of variance for fitting data
Each chapter concludes with a summary and appendices with details on the technical aspects of the material. Throughout the book, exercise sets with selected answers allow readers to test their comprehension of the material, and References and Recommended Reading sections outline further resources for study of the presented topics. A Primer on Experiments with Mixtures is an excellent book for one-semester courses on mixture designs and can also serve as a supplement for design of experiments courses at the upper-undergraduate and graduate levels. It is also a suitable reference for practitioners and researchers who have an interest in experiments with mixtures and would like to learn more about the related mixture designs and models.
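To make the design ideas concrete, here is a minimal Python sketch (not taken from the book) that generates a {3, 2} simplex-lattice design and fits a second-degree canonical (Scheffe) polynomial by ordinary least squares; the response values are invented purely for illustration.

```python
from itertools import product

import numpy as np

def simplex_lattice(q, m):
    """All mixture points whose q proportions are multiples of 1/m and sum to 1."""
    pts = [p for p in product(range(m + 1), repeat=q) if sum(p) == m]
    return np.array(pts, dtype=float) / m

# {3, 2} simplex-lattice: 6 runs for a three-component mixture
X = simplex_lattice(3, 2)

def canonical_terms(X):
    """Second-degree canonical (Scheffe) model terms for q = 3:
    eta = b1*x1 + b2*x2 + b3*x3 + b12*x1*x2 + b13*x1*x3 + b23*x2*x3."""
    x1, x2, x3 = X.T
    return np.column_stack([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])

y = np.array([8.0, 9.4, 6.2, 11.0, 7.8, 9.0])  # hypothetical responses
b, *_ = np.linalg.lstsq(canonical_terms(X), y, rcond=None)
print(dict(zip(["b1", "b2", "b3", "b12", "b13", "b23"], b.round(2))))
```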
Mathematical epidemiology of infectious diseases usually involves describing the flow of individuals between mutually exclusive infection states. One of the key parameters describing the transition from the susceptible to the infected class is the hazard of infection, often referred to as the force of infection. The force of infection reflects the degree of contact with potential for transmission between infected and susceptible individuals. The mathematical relation between the force of infection and effective contact patterns is generally assumed to follow the mass action principle, which yields the information necessary to estimate the basic reproduction number, another key parameter in infectious disease epidemiology. It is within this context that the Center for Statistics (CenStat, I-Biostat, Hasselt University) and the Centre for the Evaluation of Vaccination and the Centre for Health Economic Research and Modelling Infectious Diseases (CEV, CHERMID, Vaccine and Infectious Disease Institute, University of Antwerp) have collaborated over the past 15 years. This book demonstrates the past and current research activities of these institutes and can be considered a milestone in that collaboration. It focuses on the application of modern statistical methods and models to estimate infectious disease parameters, and aims to provide readers with software guidance, such as R packages, and with data, as far as these can be made publicly available.
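As an illustration of the force-of-infection concept (a minimal sketch, not code from the book or its R packages; the serosurvey counts are invented): under a constant force of infection lambda, the probability of being seropositive by age a is 1 - exp(-lambda * a), so lambda can be estimated from current-status serological data by maximum likelihood.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical current-status serosurvey: age midpoints, numbers tested and seropositive
age = np.array([2.0, 5.0, 10.0, 15.0, 20.0, 30.0, 40.0])
n = np.array([120, 150, 140, 130, 110, 100, 90])
pos = np.array([15, 40, 70, 85, 85, 88, 84])

def neg_log_lik(lam):
    """Binomial negative log-likelihood under a constant force of infection:
    P(seropositive by age a) = 1 - exp(-lam * a)."""
    p = 1.0 - np.exp(-lam * age)
    return -np.sum(pos * np.log(p) + (n - pos) * np.log1p(-p))

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 1.0), method="bounded")
print(f"estimated force of infection: {res.x:.3f} per year of age")
```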
This book presents some recent developments in correlated data analysis. It utilizes the class of dispersion models as marginal components in the formulation of joint models for correlated data, which enables it to handle a broader range of data types, such as correlated angular data, than those analyzed by traditional generalized linear models. The book provides a systematic treatment of the topic of estimating functions, under which both generalized estimating equations (GEE) and quadratic inference functions (QIF) are studied as special cases. In addition to marginal models and mixed-effects models, it covers joint regression analysis based on Gaussian copulas and generalized state space models for longitudinal data from long time series. Various real-world data examples, numerical illustrations and software usage tips are presented throughout. The book has evolved from lecture notes on longitudinal data analysis and is suitable as a textbook for a graduate course on correlated data analysis. It leans toward the technical details of the theory and methodology underlying software-based applications, and will therefore serve as a useful reference for those who want theoretical explanations for puzzles arising from data analyses or a deeper understanding of the theory behind them.
Nature didn't design human beings to be statisticians, and in fact our minds are more naturally attuned to spotting the saber-toothed tiger than seeing the jungle he springs from. Yet scientific discovery in practice is often more jungle than tiger. Those of us who devote our scientific lives to the deep and satisfying subject of statistical inference usually do so in the face of a certain under-appreciation from the public, and also (though less so these days) from the wider scientific world. With this in mind, it feels very nice to be over-appreciated for a while, even at the expense of weathering a 70th birthday. (Are we certain that some terrible chronological error hasn't been made?) Carl Morris and Rob Tibshirani, the two colleagues I've worked most closely with, both fit my ideal profile of the statistician as a mathematical scientist working seamlessly across wide areas of theory and application. They seem to have chosen the papers here in the same catholic spirit, and then cajoled an all-star cast of statistical savants to comment on them.
Combines recent developments in resampling technology (including the bootstrap) with new methods for multiple testing that are easy to use, convenient to report and widely applicable. Software from SAS Institute is available to execute many of the methods, and programming is straightforward for other applications. Explains how to summarize results using adjusted p-values, which do not necessitate cumbersome table look-ups. Demonstrates how to incorporate logical constraints among hypotheses, further improving power.
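As a sketch of the resampling idea, the single-step maxT adjustment reads adjusted p-values directly off the resampling distribution of the maximum test statistic. This is a permutation variant written for illustration (it is not the SAS implementation, and the two-group data are simulated):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: m variables measured on two groups of n observations
m, n = 10, 25
grp_a = rng.normal(size=(n, m))
grp_b = rng.normal(size=(n, m))
grp_b[:, 0] += 1.0  # one genuine group difference

def t_stats(a, b):
    """Two-sample t statistics for each of the m variables."""
    diff = a.mean(axis=0) - b.mean(axis=0)
    se = np.sqrt(a.var(axis=0, ddof=1) / len(a) + b.var(axis=0, ddof=1) / len(b))
    return diff / se

t_obs = np.abs(t_stats(grp_a, grp_b))

# Permute group labels, recording the maximum |t| each time; the adjusted
# p-value for variable j is the fraction of permutations whose max |t|
# reaches the observed |t_j|, which controls the familywise error rate.
pooled = np.vstack([grp_a, grp_b])
B = 2000
max_t = np.empty(B)
for i in range(B):
    idx = rng.permutation(2 * n)
    max_t[i] = np.abs(t_stats(pooled[idx[:n]], pooled[idx[n:]])).max()

p_adj = (1 + (max_t[:, None] >= t_obs).sum(axis=0)) / (B + 1)
print(np.round(p_adj, 3))
```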
This compilation focuses on the theory and conceptualisation of statistics and probability in the early years and the development of young children's (ages 3-10) understanding of data and chance. It provides a comprehensive overview of cutting-edge international research on the development of young learners' reasoning about data and chance in formal, informal, and non-formal educational contexts. The authors share insights into young children's statistical and probabilistic reasoning and provide early childhood educators and researchers with a wealth of illustrative examples, suggestions, and practical strategies on how to address the challenges arising from the introduction of statistical and probabilistic concepts in pre-school and school curricula. This collection will inform practices in research and teaching by providing a detailed account of current best practices, challenges, and issues, and of future trends and directions in early statistical and probabilistic learning worldwide. Further, it will contribute to future research and theory building by addressing theoretical, epistemological, and methodological considerations regarding the design of probability and statistics learning environments for young children.
This volume contains the proceedings of the XII Symposium of Probability and Stochastic Processes, which took place at the Universidad Autonoma de Yucatan in Merida, Mexico, on November 16-20, 2015. It was the twelfth in a series of ongoing biannual meetings aimed at showcasing the research of Mexican probabilists as well as promoting new collaborations among the participants. The book features articles drawn from different research areas in probability and stochastic processes, such as: risk theory, limit theorems, stochastic partial differential equations, random trees, stochastic differential games, stochastic control, and coalescence. Two of the main manuscripts survey recent developments on stochastic control and scaling limits of Markov-branching trees, written by Kazutoshi Yamasaki and Bénédicte Haas, respectively. The research-oriented manuscripts provide new advances in active research fields in Mexico. The wide selection of topics makes the book accessible to advanced graduate students and researchers in probability and stochastic processes.
This monograph focuses on the construction of regression models with linear and non-linear inequality constraints from the theoretical point of view. Unlike previous publications, this volume analyses the properties of regression with inequality constraints, investigating the flexibility of inequality constraints and their ability to adapt in the presence of additional a priori information. The implementation of inequality constraints improves the accuracy of models and decreases the likelihood of errors. Based on the obtained theoretical results, a computational technique for estimation and prognostication problems is suggested. This approach lends itself to numerous applications in various practical problems, several of which are discussed in detail. The book is a useful resource for graduate students, PhD students, as well as for researchers who specialize in applied statistics and optimization. It may also be useful to specialists in other branches of applied mathematics, technology, econometrics and finance.
Probabilistic Methods for Financial and Marketing Informatics aims to provide students with insights and a guide explaining how to apply probabilistic reasoning to business problems. Rather than dwelling on rigor, algorithms, and proofs of theorems, the authors concentrate on showing examples and using the software package Netica to represent and solve problems. The book contains unique coverage of probabilistic reasoning topics applied to business problems, including marketing, banking, operations management, and finance. It shares insights about when and why probabilistic methods can and cannot be used effectively. This book is recommended for all R&D professionals and students who are involved with industrial informatics, that is, applying the methodologies of computer science and engineering to business or industry information. This includes computer science and other professionals in the data management and data mining field whose interests are business and marketing information in general, and who want to apply AI and probabilistic methods to their problems in order to better predict how well a product or service will do in a particular market, for instance. Typical fields where this technology is used are in advertising, venture capital decision making, operational risk measurement in any industry, credit scoring, and investment science.
This book lays the foundations for a theory of almost periodic stochastic processes and their applications to various stochastic differential equations, functional differential equations with delay, partial differential equations, and difference equations. It is in part a sequel to the authors' recent work on almost periodic stochastic difference and differential equations, and it is the first book entirely devoted to almost periodic random processes and their applications. The topics treated include existence, uniqueness, and stability of solutions for abstract stochastic difference and differential equations.
Analyzing observed or measured data is an important step in applied sciences. The recent increase in computer capacity has resulted in a revolution both in data collection and data analysis. An increasing number of scientists, researchers and students are venturing into statistical data analysis; hence the need for more guidance in this field, which was previously dominated mainly by statisticians. This handbook fills the gap in the range of textbooks on data analysis. Written in a dictionary format, it will serve as a comprehensive reference book in a rapidly growing field. However, this book is more structured than an ordinary dictionary, where each entry is a separate, self-contained entity. The authors provide not only definitions and short descriptions, but also offer an overview of the different topics. Therefore, the handbook can also be used as a companion to textbooks for undergraduate or graduate courses. 1700 entries are given in alphabetical order grouped into 20 topics and each topic is organized in a hierarchical fashion. Additional specific entries on a topic can be easily found by following the cross-references in a top-down manner. Several figures and tables are provided to enhance the comprehension of the topics and a list of acronyms helps to locate the full terminologies. The bibliography offers suggestions for further reading.
These proceedings from the 37th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (MaxEnt 2017), held in Sao Carlos, Brazil, aim to expand the available research on Bayesian methods and promote their application in the scientific community. They gather research from scholars in many different fields who use inductive statistics methods and focus on the foundations of the Bayesian paradigm, their comparison to objectivistic or frequentist statistics counterparts, and their appropriate applications. Interest in the foundations of inductive statistics has been growing with the increasing availability of Bayesian methodological alternatives, and scientists now face much more difficult choices in finding the optimal methods to apply to their problems. By carefully examining and discussing the relevant foundations, the scientific community can avoid applying Bayesian methods on a merely ad hoc basis. For over 35 years, the MaxEnt workshops have explored the use of Bayesian and Maximum Entropy methods in scientific and engineering application contexts. The workshops welcome contributions on all aspects of probabilistic inference, including novel techniques and applications, and work that sheds new light on the foundations of inference. Areas of application in these workshops include astronomy and astrophysics, chemistry, communications theory, cosmology, climate studies, earth science, fluid mechanics, genetics, geophysics, machine learning, materials science, medical imaging, nanoscience, source separation, thermodynamics (equilibrium and non-equilibrium), particle physics, plasma physics, quantum mechanics, robotics, and the social sciences. Bayesian computational techniques such as Markov chain Monte Carlo sampling are also regular topics, as are approximate inferential methods. Foundational issues involving probability theory and information theory, as well as novel applications of inference to illuminate the foundations of physical theories, are also of keen interest.
In April 2007, the Deutsche Forschungsgemeinschaft (DFG) approved the Priority Program 1324 "Mathematical Methods for Extracting Quantifiable Information from Complex Systems." This volume presents a comprehensive overview of the most important results obtained over the course of the program. Mathematical models of complex systems provide the foundation for further technological developments in science, engineering and computational finance. Motivated by the trend toward steadily increasing computer power, ever more realistic models have been developed in recent years. These models have also become increasingly complex, and their numerical treatment poses serious challenges. Recent developments in mathematics suggest that, in the long run, much more powerful numerical solution strategies could be derived if the interconnections between the different fields of research were systematically exploited at a conceptual level. Accordingly, a deeper understanding of the mathematical foundations as well as the development of new and efficient numerical algorithms were among the main goals of this Priority Program. The treatment of high-dimensional systems is clearly one of the most challenging tasks in applied mathematics today. Since the problem of high-dimensionality appears in many fields of application, the above-mentioned synergy and cross-fertilization effects were expected to make a great impact. To be truly successful, the following issues had to be kept in mind: theoretical research and practical applications had to be developed hand in hand; moreover, it has proven necessary to combine different fields of mathematics, such as numerical analysis and computational stochastics. To keep the whole program sufficiently focused, we concentrated on specific but related fields of application that share common characteristics and as such, they allowed us to use closely related approaches.
This Handbook covers latent variable models, which are a flexible class of models for modeling multivariate data to explore relationships among observed and latent variables.
This book covers the method of metric distances and its application in probability theory and other fields. The method is fundamental in the study of limit theorems and generally in assessing the quality of approximations to a given probabilistic model. The method of metric distances is developed to study stability problems and reduces to the selection of an ideal or the most appropriate metric for the problem under consideration and a comparison of probability metrics. After describing the basic structure of probability metrics and providing an analysis of the topologies in the space of probability measures generated by different types of probability metrics, the authors study stability problems by providing a characterization of the ideal metrics for a given problem and investigating the main relationships between different types of probability metrics. The presentation is provided in a general form, although specific cases are considered as they arise in the process of finding supplementary bounds or in applications to important special cases. Svetlozar T. Rachev is the Frey Family Foundation Chair of Quantitative Finance, Department of Applied Mathematics and Statistics, SUNY-Stony Brook, and Chief Scientist of FinAnalytica, USA. Lev B. Klebanov is a Professor in the Department of Probability and Mathematical Statistics, Charles University, Prague, Czech Republic. Stoyan V. Stoyanov is a Professor at EDHEC Business School and Head of Research, EDHEC-Risk Institute-Asia (Singapore). Frank J. Fabozzi is a Professor at EDHEC Business School.
Limit theorems and asymptotic results form a central topic in probability theory and mathematical statistics. New and non-classical limit theorems have been discovered for processes in random environments, especially in connection with random matrix theory and free probability. These questions and the techniques for answering them combine asymptotic enumerative combinatorics, particle systems and approximation theory, and are important for new approaches in geometric and metric number theory as well. Thus, the contributions in this book include a wide range of applications with surprising connections ranging from longest common subsequences for words, permutation groups, random matrices and free probability to entropy problems and metric number theory. The book is the product of a conference that took place in August 2011 in Bielefeld, Germany to celebrate the 60th birthday of Friedrich Götze, a noted expert in this field.
Mean field approximation has been adopted to describe macroscopic phenomena from microscopic viewpoints, and work is still in progress in fluid mechanics, gauge theory, plasma physics, quantum chemistry, mathematical oncology, and non-equilibrium thermodynamics. In spite of the wide range of scientific areas concerned with mean field theory, a unified study of its mathematical structure has not been discussed explicitly in the open literature. The benefit of this point of view on nonlinear problems should have a significant impact on future research, as will be seen from the underlying features of self-assembly or bottom-up self-organization, which are illustrated in a unified way. The aim of this book is to formulate the variational and hierarchical aspects of the equations that arise in mean field theory, from macroscopic profiles to microscopic principles, from dynamics to equilibrium, and from biological models to models that arise from chemistry and physics.
This book brings together important topics of current research in probabilistic graphical modeling, learning from data and probabilistic inference. Coverage includes such topics as the characterization of conditional independence, the learning of graphical models with latent variables, and extensions to the influence diagram formalism as well as important application fields, such as the control of vehicles, bioinformatics and medicine.
This volume celebrates the tenth edition of the Brazilian School of Probability (EBP), held at IMPA, Rio de Janeiro, from July 30 to August 4, 2006, jointly with the 69th Annual Meeting of the Institute of Mathematical Statistics. It was indeed an exceptional occasion for the local community working in this field. The EBP, first envisioned and organized in 1997, has since developed into an annual meeting with two or three advanced mini-courses and a high level conference. This volume grew out of invited or contributed articles by researchers who during the last ten years have been participating in the Brazilian School of Probability. As a consequence, its content partially reflects the topics that have predominated in the activities during the various editions of the School, with a strong appeal that comes from statistical mechanics and areas of concentration that include interacting particle systems, percolation, random media and disordered systems. All articles of this volume were peer-refereed.
This book is dedicated to Prof. Peter Young on his 70th birthday. Professor Young has been a pioneer in systems and control, and over the past 45 years he has influenced many developments in this field. This volume comprises a collection of contributions by leading experts in system identification, time-series analysis, environmetric modelling and control system design - modern research in topics that reflect important areas of interest in Professor Young's research career. Recent theoretical developments in and relevant applications of these areas are explored, treating the various subjects both broadly and in depth. The authoritative and up-to-date research presented here will be of interest to academic researchers in control and in disciplines related to environmental research, particularly those dealing with water systems. The tutorial style in which many of the contributions are composed also makes the book suitable as a source of study material for graduate students in those areas.
Long-range dependent, or long-memory, time series are stationary time series displaying a statistically significant dependence between very distant observations. We formalize this dependence by assuming that the autocorrelation function of these stationary series decays very slowly, hyperbolically, as a function of the time lag. Many economic series display these empirical features: volatility of asset price returns, future interest rates, etc. There is a huge statistical literature on long-memory processes; some of this research is highly technical, so that it is cited, but often misused, in the applied econometrics and empirical economics literature. The first purpose of this book is to present in a formal and pedagogical way some statistical methods for studying long-range dependent processes. Furthermore, the occurrence of long memory in economic time series might be a statistical artefact, as the hyperbolic decay of the sample autocorrelation function does not necessarily derive from long-range dependent processes. Indeed, the realizations of non-homogeneous processes, e.g., switching-regime and change-point processes, display the same empirical features. We thus also present in this book recent statistical methods able to discriminate between the long-memory and change-point alternatives. Going beyond the purely statistical analysis of economic series, it is of interest to determine which economic mechanisms are generating the strong dependence properties of economic series, whether they are genuine or spurious. The regularities of the long-memory and change-point properties across economic time series, e.g., common degree of long-range dependence and/or common change-points, suggest the existence of a common economic cause.
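To make the hyperbolic-decay property concrete, here is a small Python illustration (a sketch under stated assumptions, not material from the book) using the ARFIMA(0, d, 0) process, whose theoretical autocorrelations rho(k) = Gamma(1 - d) * Gamma(k + d) / (Gamma(d) * Gamma(k + 1 - d)) behave like the power law k**(2d - 1) at long lags:

```python
import numpy as np
from scipy.special import gammaln

def arfima_acf(d, max_lag):
    """Theoretical autocorrelations of an ARFIMA(0, d, 0) process,
    computed on the log scale for numerical stability."""
    k = np.arange(1, max_lag + 1)
    log_rho = gammaln(1 - d) + gammaln(k + d) - gammaln(d) - gammaln(k + 1 - d)
    return np.exp(log_rho)

d = 0.3
rho = arfima_acf(d, 1000)

# Compare with the asymptotic power law C * k**(2d - 1), C = Gamma(1-d)/Gamma(d)
k = np.arange(1, 1001)
power_law = np.exp(gammaln(1 - d) - gammaln(d)) * k ** (2 * d - 1)
print(rho[[9, 99, 999]])        # slow, hyperbolic decay of the ACF
print(power_law[[9, 99, 999]])  # nearly identical at long lags
```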
Geostatistics is a common tool in reservoir characterization. Written from the basics of statistics, this book covers only those topics that are needed for the two goals of the text: to exhibit the diagnostic potential of statistics and to introduce the important features of statistical modelling. This revised edition contains expanded discussions of some materials, in particular conditional probabilities, Bayes' theorem, correlation, and kriging. The coverage of estimation, variability, and modelling applications has been updated. Seventy examples illustrate concepts and show the role of geology in providing important information for data analysis and model building. Four reservoir case studies conclude the presentation, illustrating the application and importance of the earlier material. This book aims to help petroleum professionals develop more accurate models, leading to lower sampling costs. It is an ideal book for petroleum engineers, geoscientists, hydrologists, and faculty and students in these and related fields.
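As a minimal illustration of kriging (a generic ordinary-kriging sketch, not one of the book's case studies; the covariance model and porosity readings are invented), the predictor solves a small linear system in which a Lagrange multiplier forces the weights to sum to one:

```python
import numpy as np

def ordinary_kriging(x, z, x0, cov):
    """Ordinary kriging prediction at location x0 from observations z at sites x.
    The (n+1)-dimensional system appends the unbiasedness constraint sum(w) = 1."""
    n = len(x)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(np.abs(x[:, None] - x[None, :]))  # site-to-site covariances
    A[n, n] = 0.0
    b = np.append(cov(np.abs(x - x0)), 1.0)           # site-to-target covariances
    w = np.linalg.solve(A, b)
    weights, mu = w[:n], w[n]
    estimate = weights @ z
    variance = cov(0.0) - weights @ b[:n] - mu        # kriging variance
    return estimate, variance

# Hypothetical 1-D porosity readings along a well, exponential covariance model
exp_cov = lambda h: 1.0 * np.exp(-np.asarray(h) / 50.0)
x = np.array([10.0, 35.0, 60.0, 100.0])
z = np.array([0.18, 0.21, 0.17, 0.23])
est, var = ordinary_kriging(x, z, 45.0, exp_cov)
print(f"estimate {est:.3f}, kriging variance {var:.4f}")
```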
The first edition of this classic book has become the authoritative reference for physicists desiring to master the finer points of statistical data analysis. This second edition contains all the important material of the first, much of it unavailable from any other source. In addition, many chapters have been updated with considerable new material, especially in areas concerning the theory and practice of confidence intervals, including the important Feldman-Cousins method. Both frequentist and Bayesian methodologies are presented, with a strong emphasis on techniques useful to physicists and other scientists in the interpretation of experimental data and comparison with scientific theories. This is a valuable textbook for advanced graduate students in the physical sciences as well as a reference for active researchers.
This volume collects a selection of refereed papers from the more than one hundred presented at the International Conference MAF 2008 - Mathematical and Statistical Methods for Actuarial Sciences and Finance. The conference was organised by the Department of Applied Mathematics and the Department of Statistics of the University Ca' Foscari Venice (Italy), with the collaboration of the Department of Economics and Statistical Sciences of the University of Salerno (Italy). It was held in Venice, from March 26 to 28, 2008, at the prestigious Cavalli Franchetti palace, along the Grand Canal, of the Istituto Veneto di Scienze, Lettere ed Arti. This conference was the first international edition of a biennial national series begun in 2004, which was born of the brilliant belief of the colleagues - and friends - of the Department of Economics and Statistical Sciences of the University of Salerno: the idea following which the cooperation between mathematicians and statisticians working in actuarial sciences, in insurance and in finance can improve research on these topics. The proof of this consists in the wide participation in these events. In particular, with reference to the 2008 international edition:
- More than 150 attendants, both academicians and practitioners;
- More than 100 accepted communications, organised in 26 parallel sessions, from authors coming from about twenty countries (namely: Canada, Colombia, Czech Republic, France, Germany, Great Britain, Greece, Hungary, Ireland, Israel, Italy, Japan, Poland, Spain, Sweden, Switzerland, Taiwan, USA);
- two plenary guest-organised sessions; and
- a prestigious keynote lecture delivered by Professor Wolfgang Härdle of the Humboldt University of Berlin (Germany).
This volume, dedicated to Carl Pearcy on the occasion of his 60th birthday, presents recent results in operator theory, nonselfadjoint operator algebras, measure theory and the theory of moments. The articles on these subjects have been contributed by leading area experts, many of whom were associated with Carl Pearcy as students or collaborators. The book testifies to his multifaceted interests and includes a biographical sketch and a list of publications.