This book explores different statistical quality technologies, including recent advances and applications. Statistical process control, acceptance sampling plans and reliability assessment are among the essential statistical techniques in quality technologies for ensuring high-quality products and reducing consumer and producer risks. Numerous statistical techniques and methodologies for quality control and improvement have been developed in recent years to help resolve current product quality issues in today's fast-changing environment. Featuring contributions from top experts in the field, this book covers three major topics: statistical process control, acceptance sampling plans, and reliability testing and designs. The topics covered are timely and have high potential impact for academics, scholars, students and professionals in statistics, engineering, manufacturing and health.
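The control-charting idea at the heart of statistical process control is easy to sketch. The following minimal Python example (ours, not the book's) computes Shewhart X-bar chart limits from rational subgroups and flags out-of-control points; the function name and the simplified sigma estimate (no c4 bias correction) are illustrative assumptions.

```python
import math
import statistics

def xbar_chart(subgroups, width=3.0):
    """Shewhart X-bar chart: center line and +/- 3 sigma control limits.

    'subgroups' is a list of equal-size samples taken over time. The
    within-subgroup sigma estimate skips the usual c4 bias correction
    to keep the sketch short.
    """
    n = len(subgroups[0])
    means = [statistics.mean(g) for g in subgroups]
    center = statistics.mean(means)
    sigma = statistics.mean([statistics.stdev(g) for g in subgroups])
    half_width = width * sigma / math.sqrt(n)
    lcl, ucl = center - half_width, center + half_width
    out_of_control = [i for i, m in enumerate(means) if not lcl <= m <= ucl]
    return center, (lcl, ucl), out_of_control

# Example: 5 subgroups of 4 measurements; the last subgroup has drifted.
data = [[10.1, 9.9, 10.0, 10.2], [9.8, 10.0, 10.1, 9.9],
        [10.0, 10.3, 10.1, 10.2], [9.7, 9.9, 9.8, 10.0],
        [11.0, 11.2, 10.9, 11.1]]
print(xbar_chart(data))
```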
In recent years, the study of the theory of Brownian motion has become a powerful tool in the solution of problems in mathematical physics. This self-contained and readable exposition by leading authors provides a rigorous account of the subject, emphasizing the "explicit" rather than the "concise" where necessary. It is addressed to readers interested in probability theory as applied to analysis and mathematical physics.
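For orientation, the standard definition the subject builds on can be stated compactly (generic notation, not taken from the book):

```latex
% Standard Brownian motion (B_t), t >= 0, is characterized by:
\begin{align*}
& B_0 = 0, \qquad t \mapsto B_t \ \text{continuous almost surely},\\
& B_{t_1}-B_{t_0},\ \dots,\ B_{t_n}-B_{t_{n-1}} \ \text{independent for } 0 \le t_0 < \dots < t_n,\\
& B_t - B_s \sim \mathcal{N}(0,\, t - s) \quad \text{for } 0 \le s < t.
\end{align*}
```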
This proceedings volume presents new methods and applications in Operational Research and Management Science with a special focus on Business Analytics. Featuring selected contributions from the XIV Balkan Conference on Operational Research held in Thessaloniki, Greece in 2020 (BALCOR 2020), it addresses applications and methodological tools or techniques in various areas of Operational Research, such as agent-based modelling, big data and business analytics, data envelopment analysis, data mining, decision support systems, fuzzy systems, game theory, heuristics, metaheuristics and nature inspired optimization algorithms, linear and nonlinear programming, machine learning, multiple criteria decision analysis, network design and optimization, queuing theory, simulation and statistics.
This book presents up-to-date mathematical results in the asymptotic theory of nonlinear regression, based on various asymptotic expansions of the Least Squares Estimator, its characteristics, and the distribution functions of its functionals. It is divided into four chapters. Chapter 1 establishes assertions on the probability of large deviations of the normal Least Squares Estimator of the regression-function parameters. Chapter 2 gives conditions for the asymptotic normality of the Least Moduli Estimator; an asymptotic expansion of the Least Squares Estimator, as well as of its distribution function, is obtained, and the first two terms of these expansions are calculated. Separately, the Berry-Esseen inequality for the distribution of the Least Squares Estimator is deduced. The third chapter deals with asymptotic expansions related to functionals of the Least Squares Estimator. Lastly, Chapter 4 offers a comparison of the powers of statistical tests based on Least Squares Estimators. The Appendix gives an overview of subsidiary facts and a list of principal notations. Additional background information, grouped per chapter, is presented in the Commentary section, and the volume concludes with an extensive Bibliography. Audience: this book will be of interest to mathematicians and statisticians whose work involves stochastic analysis, probability theory, mathematics of engineering, mathematical modelling, systems theory or cybernetics.
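The setting can be summarized in generic notation (ours, not necessarily the book's): a nonlinear regression model and its Least Squares Estimator are

```latex
\[
y_j = g(x_j, \theta) + \varepsilon_j, \quad j = 1, \dots, n,
\qquad
\hat{\theta}_n = \arg\min_{\theta} \sum_{j=1}^{n} \bigl( y_j - g(x_j, \theta) \bigr)^2 ,
\]
```

and the asymptotic expansions studied in the book refine the first-order normal approximation to the distribution of the estimator.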
I became interested in Random Vibration during the preparation of my PhD dissertation, which was concerned with the seismic response of nuclear reactor cores. I was initiated into this field through the classical books by Y. K. Lin, S. H. Crandall and a few others. After the completion of my PhD, in 1981, my supervisor M. Geradin encouraged me to prepare a course in Random Vibration for fourth and fifth year students in Aeronautics at the University of Liege. There was at the time very little material available in French on that subject. A first draft was produced during 1983 and 1984 and revised in 1986. These notes were published by the Presses Polytechniques et Universitaires Romandes (Lausanne, Switzerland) in 1990. When Kluwer decided to publish an English translation of the book in 1992, I had to choose between letting Kluwer translate the French text in extenso or doing it myself, which would allow me to carry out a substantial revision of the book. I took the second option and decided to rewrite or delete some of the original text and include new material, based on my personal experience or reflecting recent technical advances. Chapter 6, devoted to the response of multi-degree-of-freedom structures, has been completely rewritten, and Chapter 11 on random fatigue is entirely new. The computer programs which have been developed in parallel with these chapters have been incorporated in the general-purpose finite element software SAMCEF, developed at the University of Liege.
This book aims to present the impact of Artificial Intelligence (AI) and Big Data in healthcare for medical decision making and data analysis across myriad fields, including Radiology, Radiomics, Radiogenomics, Oncology, Pharmacology, COVID-19 prognosis, Cardiac imaging, Neuroradiology, Psychiatry and others. Coverage includes topics such as the Artificial Intelligence of Things (AIoT), Explainable Artificial Intelligence (XAI), distributed learning, the Blockchain of Internet of Things (BIoT), cybersecurity, and the Internet of Medical Things (IoMT). Healthcare providers will learn how to leverage Big Data analytics and AI as a methodology for accurate analysis based on their clinical data repositories and for clinical decision support. The capacity to recognize patterns and transform large amounts of data into usable information for precision medicine assists healthcare professionals in achieving these objectives. Intelligent Health has the potential to monitor patients at risk with underlying conditions and to track their progress during therapy. Some of the greatest challenges in using these technologies stem from legal and ethical concerns about using medical data and from adequately representing and serving disparate patient populations. One major potential benefit of this technology is to make health systems more sustainable and standardized. Privacy and data security, establishing protocols, appropriate governance, and improving technologies will be among the crucial priorities for Digital Transformation in Healthcare.
Aggregation plays a central role in many of the technological tasks we are faced with. The importance of this process will become even greater as we move more and more toward becoming an information-centered society, as is happening with the rapid growth of the Internet and the World Wide Web. Here we shall be faced with many issues related to the fusion of information. One very pressing issue is the development of mechanisms to help search for information, a problem that clearly has a strong aggregation-related component. More generally, in order to model the sophisticated ways in which human beings process information, as well as to go beyond human capabilities, we need to provide a basket of aggregation tools. The centrality of aggregation in human thought can be very clearly seen by looking at neural networks, a technology motivated by modeling the human brain: the basic operations involved in these networks are learning and aggregation. The Ordered Weighted Averaging (OWA) operators provide a parameterized family of aggregation operators which includes many of the well-known operators such as the maximum, minimum and the simple average.
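The parameterization is simple to state: an OWA operator applies its weights to the arguments after they have been sorted in descending order, so the weights attach to positions rather than to particular inputs. A minimal sketch (ours, not drawn from the book):

```python
def owa(values, weights):
    """Ordered Weighted Averaging: the weights apply to the values
    sorted in descending order, not to particular arguments."""
    if len(values) != len(weights) or abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must match values and sum to 1")
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

vals = [0.2, 0.9, 0.5]
print(owa(vals, [1, 0, 0]))        # weight on the largest -> maximum, 0.9
print(owa(vals, [0, 0, 1]))        # weight on the smallest -> minimum, 0.2
print(owa(vals, [1/3, 1/3, 1/3]))  # uniform weights -> simple average
```

The three calls recover exactly the special cases the blurb names: maximum, minimum, and the simple average.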
"Configural Frequency Analysis" (CFA) provides an up-to-the-minute
comprehensive introduction to its techniques, models, and
applications. Written in a formal yet accessible style, actual
empirical data examples are used to illustrate key concepts.
Step-by-step program sequences are used to show readers how to
employ CFA methods using commercial software packages, such as SAS,
SPSS, SYSTAT, S-Plus, or those written specifically to perform CFA.
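The core CFA step is a cell-wise comparison of observed and expected frequencies, flagging "types" (observed significantly above expectation) and "antitypes" (below). A schematic sketch, assuming a z-test per cell and omitting the alpha adjustment (e.g., Bonferroni) that real CFA applications apply across cells:

```python
import math

def cfa_cells(observed, expected, n_total, z_crit=1.96):
    """Approximate z-test per configuration: large positive z marks a
    'type', large negative z an 'antitype'. Real CFA work protects the
    familywise alpha across all cells (e.g., Bonferroni)."""
    results = {}
    for cell, o in observed.items():
        e = expected[cell]
        p = e / n_total
        z = (o - e) / math.sqrt(e * (1 - p))
        label = "type" if z > z_crit else "antitype" if z < -z_crit else "-"
        results[cell] = (round(z, 2), label)
    return results

# Two binary variables; expected counts from an independence base model.
obs = {("A1", "B1"): 40, ("A1", "B2"): 10, ("A2", "B1"): 10, ("A2", "B2"): 40}
exp = {("A1", "B1"): 25, ("A1", "B2"): 25, ("A2", "B1"): 25, ("A2", "B2"): 25}
print(cfa_cells(obs, exp, n_total=100))
```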
"Configural Frequency Analysis" (CFA) provides an up-to-the-minute
comprehensive introduction to its techniques, models, and
applications. Written in a formal yet accessible style, actual
empirical data examples are used to illustrate key concepts.
Step-by-step program sequences are used to show readers how to
employ CFA methods using commercial software packages, such as SAS,
SPSS, SYSTAT, S-Plus, or those written specifically to perform CFA.
Algebraic statistics is a rapidly developing field where ideas from statistics and algebra meet and stimulate new research directions. One of the origins of algebraic statistics is the work by Diaconis and Sturmfels in 1998 on the use of Gröbner bases for constructing a connected Markov chain for performing conditional tests of a discrete exponential family. In this book we take up this topic and present a detailed summary of developments following the seminal work of Diaconis and Sturmfels. This book is intended for statisticians with minimal backgrounds in algebra. As we ourselves learned algebraic notions through working on statistical problems and collaborating with notable algebraists, we hope that this book, with its many practical statistical problems, will be useful for statisticians starting to work in the field.
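In the simplest case of two-way contingency tables with fixed row and column sums, the Markov basis consists of +1/-1 moves on 2x2 subtables, and the Diaconis-Sturmfels walk can be sketched in a few lines (an illustrative sketch, not code from the book; a Metropolis correction would additionally be needed to target the conditional hypergeometric distribution):

```python
import random

def markov_step(table):
    """One step of the Diaconis-Sturmfels walk for an IxJ contingency
    table with fixed row and column sums: propose a random +1/-1
    'basic move' on a 2x2 subtable, rejecting moves that would create
    negative entries. Every move preserves all margins."""
    rows, cols = len(table), len(table[0])
    i1, i2 = random.sample(range(rows), 2)
    j1, j2 = random.sample(range(cols), 2)
    eps = random.choice([1, -1])
    move = [(i1, j1, eps), (i1, j2, -eps), (i2, j1, -eps), (i2, j2, eps)]
    if all(table[i][j] + d >= 0 for i, j, d in move):
        for i, j, d in move:
            table[i][j] += d
    return table

t = [[3, 1, 4], [2, 5, 0]]
for _ in range(5):
    print(markov_step(t))
```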
Providing researchers in economics, finance, and statistics with an up-to-date introduction to applying Bayesian techniques to empirical studies, this book covers the full range of new numerical techniques developed over the last thirty years, notably Monte Carlo sampling, antithetic replication, importance sampling, and Gibbs sampling. The author covers both advances in theory and modern approaches to numerical and applied problems, and includes applications drawn from a variety of different fields within economics, while also providing a quick overview of the underlying statistical ideas of Bayesian thought. The result is a book which presents a roadmap of applied economic questions that can now be addressed empirically with Bayesian methods. Many researchers will find it a readable survey of this growing topic.
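Of the techniques listed, Gibbs sampling is perhaps the easiest to illustrate: each coordinate is redrawn in turn from its full conditional distribution. A minimal sketch (ours, not from the book) for a standard bivariate normal with correlation rho, where both full conditionals are themselves normal:

```python
import random

def gibbs_bivariate_normal(rho, n_iter=10_000, burn_in=1_000):
    """Gibbs sampler for a standard bivariate normal with correlation
    rho. Each full conditional is normal: X | Y=y ~ N(rho*y, 1-rho^2),
    and symmetrically for Y | X=x."""
    sd = (1 - rho ** 2) ** 0.5
    x = y = 0.0
    draws = []
    for t in range(n_iter):
        x = random.gauss(rho * y, sd)
        y = random.gauss(rho * x, sd)
        if t >= burn_in:
            draws.append((x, y))
    return draws

draws = gibbs_bivariate_normal(rho=0.8)
mean_x = sum(x for x, _ in draws) / len(draws)
print(round(mean_x, 2))  # should be near the true marginal mean, 0
```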
Identifying the sources and measuring the impact of haphazard variations are important in any number of research applications, from clinical trials and genetics to industrial design and psychometric testing. Only in very simple situations can such variations be represented effectively by independent, identically distributed random variables or by random sampling from a hypothetical infinite population.
'Et moi, ..~ si lavait su CO.llUlJalt en revc:nir, One acMcc matbcmatica bu JaIdcred the human rac:c. It bu put COIDIDOD _ beet je n'y serais point aBe.' Jules Verne wbac it bdoup, 0Jl !be~ IbcII _t to !be dusty cauialcr Iabc&d 'diMardod__ The series is divergent; thc:reforc we may be -'. I!.ticT. Bc:I1 able to do something with it. O. Hcavisidc Mathematics is a tool for thought. A highly necessary tool in a world when: both feedback and non- linearities abound. Similarly. all kinds of parts of mathematics serve as tools for other parts and for other sciences. Applying a simple rewriting rule to the quote on the right above one finds such statcmalts as: 'One service topology has rendered mathematical physics ...*; 'One service logic has rendered c0m- puter science ...'; 'One service category theory has rendered mathematics ...'. All arguably true. And all statements obtainable this way form part of the raison d'etre of this series. This series, Mathematics and Its Applications. started in 19n. Now that over one hundred volumes have appeared it seems opportune to reexamine its scope. At the time I wrote "Growing specialization and diversification have brought a host of monographs and textbooks on increasingly specialized topics. However. the 'tree' of knowledge of mathematics and related fields does not grow only by putting forth new branc:hes. It also happens, quite often in fact, that branches which were thought to be completely.
For almost fifty years, Richard M. Dudley has been extremely influential in the development of several areas of Probability. His work on Gaussian processes led to the understanding of the basic fact that their sample boundedness and continuity should be characterized in terms of proper measures of complexity of their parameter spaces equipped with the intrinsic covariance metric. His sufficient condition for sample continuity in terms of metric entropy is widely used and was proved by X. Fernique to be necessary for stationary Gaussian processes, whereas its more subtle versions (majorizing measures) were proved by M. Talagrand to be necessary in general. Together with V. N. Vapnik and A. Y. Cervonenkis, R. M. Dudley is a founder of the modern theory of empirical processes in general spaces. His work on uniform central limit theorems (under bracketing entropy conditions and for Vapnik-Cervonenkis classes) greatly extends classical results that go back to A. N. Kolmogorov and M. D. Donsker, and it became the starting point of a new line of research, continued in the work of Dudley and others, that developed empirical processes into one of the major tools in mathematical statistics and statistical learning theory. As a consequence of Dudley's early work on weak convergence of probability measures on non-separable metric spaces, the Skorohod topology on the space of regulated right-continuous functions can be replaced, in the study of weak convergence of the empirical distribution function, by the supremum norm. In a further recent step Dudley replaces this norm by the stronger p-variation norms, which then allows replacing compact differentiability of many statistical functionals by Fréchet differentiability in the delta method. Richard M. Dudley has also made important contributions to mathematical statistics, the theory of weak convergence, relativistic Markov processes, differentiability of nonlinear operators and several other areas of mathematics. Professor Dudley has been the adviser of thirty PhD students and is a Professor of Mathematics at the Massachusetts Institute of Technology.
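The metric-entropy sufficient condition mentioned above is usually stated via Dudley's entropy integral: for a centered Gaussian process $(X_t)_{t \in T}$ with intrinsic metric $d$ and covering numbers $N(T, d, \varepsilon)$,

```latex
\[
\mathbb{E}\,\sup_{t \in T} X_t \;\le\; K \int_0^{\infty} \sqrt{\log N(T, d, \varepsilon)}\; d\varepsilon,
\qquad
d(s,t) = \bigl( \mathbb{E}(X_s - X_t)^2 \bigr)^{1/2},
\]
```

and finiteness of the integral implies sample boundedness and continuity.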
For upper-level to graduate courses in Probability or Probability and Statistics, for majors in mathematics, statistics, engineering, and the sciences. Explores both the mathematics and the many potential applications of probability theory A First Course in Probability offers an elementary introduction to the theory of probability for students in mathematics, statistics, engineering, and the sciences. Through clear and intuitive explanations, it attempts to present not only the mathematics of probability theory, but also the many diverse possible applications of this subject through numerous examples. The 10th Edition includes many new and updated problems, exercises, and text material chosen both for inherent interest and for use in building student intuition about probability.
This second edition of "A Beginner's Guide to Finite Mathematics" takes a distinctly applied approach to finite mathematics at the freshman and sophomore level. Topics are presented sequentially: the book opens with a brief review of sets and numbers, followed by an introduction to data sets, histograms, means and medians. Counting techniques and the Binomial Theorem are covered, which provides the foundation for elementary probability theory; this, in turn, leads to basic statistics. This new edition includes chapters on game theory and financial mathematics. Requiring little mathematical background beyond high school algebra, the text will be especially useful for business and liberal arts majors.
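The path from the Binomial Theorem to elementary probability that the book follows can be seen in one line of code: the theorem's coefficients are exactly the counting factors in the binomial distribution. A small illustrative sketch (ours, not from the book):

```python
from math import comb

def binomial_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p); the coefficient C(n, k) is the
    same one that appears in the Binomial Theorem's expansion of
    (p + (1 - p))^n."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Probability of exactly 3 heads in 5 fair coin flips: C(5,3)/2^5 = 0.3125
print(binomial_pmf(3, 5, 0.5))
```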
Sample Sizes for Clinical Trials, Second Edition is a practical book that assists researchers in their estimation of the sample size for clinical trials. Throughout the book there are detailed worked examples to illustrate both how to do the calculations and how to present them to colleagues or in protocols. The book also highlights some of the pitfalls in calculations as well as the key steps that lead to the final sample size calculation.

Features:
- Comprehensive coverage of sample size calculations, including Normal, binary, ordinal, and survival outcome data
- Covers superiority, equivalence, non-inferiority, bioequivalence and precision objectives for both parallel group and crossover designs
- Highlights how trial objectives impact the study design with respect to both the derivation of sample formulae and the size of the study
- Motivated with examples of real-life clinical trials showing how the calculations can be applied
- New edition is extended with all chapters revised, some substantially, and four completely new chapters on multiplicity, cluster trials, pilot studies, and single arm trials

The book is primarily aimed at researchers and practitioners of clinical trials and biostatistics, and could be used to teach a course on sample size calculations. The importance of a sample size calculation when designing a clinical trial is highlighted in the book. It enables readers to quickly find an appropriate sample size formula, with an associated worked example, complemented by tables to assist in the calculations.
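As a flavor of the calculations the book covers, here is the standard formula for the per-arm sample size when comparing two Normal means, n = 2 * sigma^2 * (z_{1-alpha/2} + z_{1-beta})^2 / delta^2, in a short sketch (ours; the book's worked examples and tables go well beyond this):

```python
import math
from statistics import NormalDist

def n_per_arm(delta, sigma, alpha=0.05, power=0.9):
    """Per-arm sample size for a two-sample comparison of Normal means
    with a two-sided test at level alpha, using the standard formula
    n = 2 * sigma^2 * (z_{1-alpha/2} + z_{power})^2 / delta^2."""
    z = NormalDist().inv_cdf
    n = 2 * sigma**2 * (z(1 - alpha / 2) + z(power)) ** 2 / delta**2
    return math.ceil(n)

# Detect a difference of 5 units with SD 10 at 90% power: 85 per arm.
print(n_per_arm(delta=5, sigma=10))
```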
Written by Louis Narens, one of the masters of the foundations of measurement, this new book thoroughly examines the basis for the measurement-theoretic concept of meaningfulness and presents a new theory about the role of numbers and invariance in science. The book associates with each portion of mathematical science a subject matter that the portion of science is intended to investigate or describe. It considers those quantitative or empirical assertions and relationships that belong to the subject matter to be meaningful (for that portion of science), and those that do not belong to be meaningless.
The main purpose of this handbook is to summarize and put in order the ideas, methods, results and literature on the theory of random evolutions and their applications to evolutionary stochastic systems in random media, and also to present some new trends in the theory of random evolutions and their applications. In physical language, a random evolution (RE) is a model for a dynamical system whose state of evolution is subject to random variations. Such systems arise in all branches of science: for example, random Hamiltonian and Schrödinger equations with random potential in quantum mechanics, Maxwell's equation with a random refractive index in electrodynamics, transport equations associated with the trajectory of a particle whose speed and direction change at random, and so on. These are examples of a single abstract situation in which an evolving system changes its "mode of evolution" or "law of motion" because of random changes of the "environment" or "medium". So, in mathematical language, a RE is a solution of stochastic operator integral equations in a Banach space. The operator coefficients of such equations depend on random parameters. Of course, in such generality, our equation includes any homogeneous linear evolving system. Particular examples of such equations were studied in physical applications many years ago. A general mathematical theory of such equations has been developed since 1969: the Theory of Random Evolutions.
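In one common formulation (generic notation, not necessarily the handbook's), a random evolution $V(t)$ is driven by a Markov process $x(t)$ describing the environment:

```latex
\[
\frac{dV(t)}{dt} = \Gamma\bigl( x(t) \bigr)\, V(t), \qquad V(0) = I,
\]
```

where each $\Gamma(x)$ generates a semigroup on a Banach space, so the "law of motion" switches whenever the environment $x(t)$ jumps.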
This volume collects authoritative contributions on analytical methods and mathematical statistics. The methods presented include resampling techniques; the minimization of divergence; estimation theory and regression, possibly under shape or other constraints or long memory; and iterative approximations when the optimal solution is difficult to achieve. It also investigates probability distributions with respect to their stability, heavy-tailedness, Fisher information and other aspects, both asymptotically and non-asymptotically. The book not only presents the latest mathematical and statistical methods and their extensions, but also offers solutions to real-world problems, including option pricing. The selected, peer-reviewed contributions were originally presented at the workshop on Analytical Methods in Statistics, AMISTAT 2015, held in Prague, Czech Republic, November 10-13, 2015.
Probabilistic modeling is a subject arising in many branches of mathematics, economics, and computer science, connecting pure mathematics with the applied sciences. Data analysis and statistics likewise sit on the border between pure mathematics and applied sciences, so the meeting of probabilistic modeling and statistics has attracted much research recently. With the growing presence of these technologies in life and work, planning, timetabling, scheduling, decision making, optimization, simulation, data analysis, risk analysis and process modeling have become essential in the workplace. However, many difficulties and challenges still arise in these sectors during planning or decision making, and there continues to be a need for more research on the interplay of probabilistic modeling with other approaches. Analyzing Data Through Probabilistic Modeling in Statistics is an essential reference source that builds on the available literature in the fields of probabilistic modeling, statistics, operational research, planning and scheduling, data extrapolation in decision making, probabilistic interpolation and extrapolation in simulation, stochastic processes, and decision analysis. It provides resources for economics and management sciences as well as for mathematics and computer sciences. This book is ideal for technology developers, decision makers, mathematicians, statisticians, practitioners, stakeholders, researchers, academicians, and students looking to further their research exposure to pertinent topics in operations research and probabilistic modeling.
Analysis of Failure and Survival Data is an essential textbook for graduate-level students of survival analysis and reliability and a valuable reference for practitioners. It focuses on the many techniques that appear in popular software packages, including plotting product-limit survival curves, hazard plots, and probability plots in the context of censored data. The author integrates S-Plus and Minitab output throughout the text, along with a variety of real data sets so readers can see how the theory and methods are applied. He also incorporates exercises in each chapter that provide valuable problem-solving experience.
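The product-limit (Kaplan-Meier) curve mentioned above is simple enough to compute by hand. A minimal sketch (ours, independent of the S-Plus and Minitab output in the text):

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate from right-censored
    data: 'times' are follow-up times, events[i] is 1 for a failure and
    0 for a censored observation. Returns [(t, S(t))] at failure times,
    with S(t) = product over t_i <= t of (1 - d_i / n_i)."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        removed = sum(1 for tt, e in data if tt == t)
        if deaths:
            s *= 1 - deaths / n_at_risk
            curve.append((t, round(s, 3)))
        n_at_risk -= removed
        # skip past all observations tied at time t
        while i < len(data) and data[i][0] == t:
            i += 1
    return curve

# Failures at 3, 5, 8; censored at 5 and 10.
print(kaplan_meier([3, 5, 5, 8, 10], [1, 1, 0, 1, 0]))
# -> [(3, 0.8), (5, 0.6), (8, 0.3)]
```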
This book deals with estimating and testing the probability of an event. It aims at providing practitioners with refined and easy-to-use techniques, as well as at initiating a new field of research in theoretical statistics. Practical, comprehensive tables for the data analysis of experimental investigations are included, as well as an accompanying CD-ROM with extensive tables for measurement intervals and prediction regions for testing. Statisticians and practitioners will find this book an essential reference.
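As a baseline for the kind of problem the book treats, here is the Wilson score interval for an event probability estimated from n Bernoulli trials, a standard elementary technique (the sketch is ours, not one of the book's refined methods):

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion; a common choice
    when estimating the probability of an event from n Bernoulli trials."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return center - half, center + half

print(wilson_interval(8, 40))  # approximately (0.105, 0.348)
```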
'Stats Means Business' is an introductory textbook aimed at Business Studies students who require guidance in the area of statistics. It minimizes technical language, provides clear definitions of key terms, and gives emphasis to interpretation rather than technique.