Welcome to Loot.co.za!
Supervision, condition monitoring, fault detection, fault diagnosis and fault management play an increasing role for technical processes and vehicles, in order to improve reliability, availability, maintenance and lifetime. For safety-related processes, fault-tolerant systems with redundancy are required in order to reach comprehensive system integrity. This book is a sequel to Fault-Diagnosis Systems, published in 2006, where the basic methods were described. After a short introduction to fault-detection and fault-diagnosis methods, the book shows how these methods can be applied to a selection of 20 real technical components and processes, such as: electrical drives (DC, AC), electrical actuators, fluidic actuators (hydraulic, pneumatic), centrifugal and reciprocating pumps, pipelines (leak detection), industrial robots, machine tools (main and feed drive, drilling, milling, grinding), and heat exchangers. Realized fault-tolerant systems for electrical drives, actuators and sensors are also presented. The book describes why and how the various signal-model-based and process-model-based methods were applied, and which experimental results could be achieved. In several cases a combination of different methods proved most successful. The book is intended for graduate students of electrical, mechanical and chemical engineering and computer science, and for practicing engineers.
This book is intended to provide a text on statistical methods for detecting clusters and/or clustering of health events that is of interest to final year undergraduate and graduate level statistics, biostatistics, epidemiology, and geography students, but will also be of relevance to public health practitioners, statisticians, biostatisticians, epidemiologists, medical geographers, human geographers, environmental scientists, and ecologists. Prerequisites are introductory biostatistics and epidemiology courses. With increasing public health concerns about environmental risks, the need for sophisticated methods for analyzing spatial health events is immediate. Furthermore, the research area of statistical tests for disease clustering now attracts a wide audience due to the perceived need to implement wide ranging monitoring systems to detect possible health related bioterrorism activity. With this background and the development of the geographical information system (GIS), the analysis of disease clustering of health events has seen considerable development over the last decade. Therefore, several excellent books on spatial epidemiology and statistics have recently been published. However, it seems to me that there is no other book solely focusing on statistical methods for disease clustering. I hope that readers will find this book useful and interesting as an introduction to the subject.
This is a graduate-level textbook on Bayesian analysis blending modern Bayesian theory, methods, and applications. Starting from basic statistics, undergraduate calculus and linear algebra, ideas of both subjective and objective Bayesian analysis are developed to a level where real-life data can be analyzed using the current techniques of statistical computing. Advances in both low-dimensional and high-dimensional problems are covered, as well as important topics such as empirical Bayes and hierarchical Bayes methods and Markov chain Monte Carlo (MCMC) techniques. Many topics are at the cutting edge of statistical research. Solutions to common inference problems appear throughout the text along with discussion of what prior to choose. There is a discussion of elicitation of a subjective prior as well as the motivation, applicability, and limitations of objective priors. By way of important applications the book presents microarrays, nonparametric regression via wavelets as well as DMA mixtures of normals, and spatial analysis with illustrations using simulated and real data. Theoretical topics at the cutting edge include high-dimensional model selection and Intrinsic Bayes Factors, which the authors have successfully applied to geological mapping. The style is informal but clear. Asymptotics is used to supplement simulation or understand some aspects of the posterior.
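The MCMC techniques mentioned in the blurb above can be illustrated with a minimal random-walk Metropolis sampler. This is a generic sketch of the method, not code from the book; the standard normal target and all parameter values are illustrative assumptions:

```python
import math
import random

def metropolis(log_post, x0, n_steps, step=1.0, seed=0):
    """Random-walk Metropolis sampler for a one-dimensional posterior.

    log_post: unnormalized log posterior density.
    Returns the list of sampled states (including burn-in).
    """
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)        # symmetric proposal
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:  # accept with prob min(1, ratio)
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Illustrative target: a standard normal posterior, log density -x^2/2.
draws = metropolis(lambda x: -0.5 * x * x, x0=5.0, n_steps=20000)
tail = draws[5000:]                  # discard burn-in
mean = sum(tail) / len(tail)         # should be near 0
```

Despite starting far from the mode (x0 = 5), the chain forgets its starting point after burn-in and the post-burn-in mean approximates the posterior mean.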
Highly praised for its exceptional clarity, conversational style and useful examples, Introductory Business Statistics, 7e, International Edition was written specifically for you. This proven, popular text cuts through the jargon to help you understand fundamental statistical concepts and why they are important to you, your world, and your career. The text's outstanding illustrations, friendly language, non-technical terminology, and current, real-world examples will capture your interest and prepare you for success right from the start.
The Model-Free Prediction Principle expounded upon in this monograph is based on the simple notion of transforming a complex dataset to one that is easier to work with, e.g., i.i.d. or Gaussian. As such, it restores the emphasis on observable quantities, i.e., current and future data, as opposed to unobservable model parameters and estimates thereof, and yields optimal predictors in diverse settings such as regression and time series. Furthermore, the Model-Free Bootstrap takes us beyond point prediction in order to construct frequentist prediction intervals without resort to unrealistic assumptions such as normality. Prediction has been traditionally approached via a model-based paradigm, i.e., (a) fit a model to the data at hand, and (b) use the fitted model to extrapolate/predict future data. Due to both mathematical and computational constraints, 20th century statistical practice focused mostly on parametric models. Fortunately, with the advent of widely accessible powerful computing in the late 1970s, computer-intensive methods such as the bootstrap and cross-validation freed practitioners from the limitations of parametric models, and paved the way towards the `big data' era of the 21st century. Nonetheless, there is a further step one may take, i.e., going beyond even nonparametric models; this is where the Model-Free Prediction Principle is useful. Interestingly, being able to predict a response variable Y associated with a regressor variable X taking on any possible value seems to inadvertently also achieve the main goal of modeling, i.e., trying to describe how Y depends on X. Hence, as prediction can be treated as a by-product of model-fitting, key estimation problems can be addressed as a by-product of being able to perform prediction. 
In other words, a practitioner can use Model-Free Prediction ideas in order to additionally obtain point estimates and confidence intervals for relevant parameters leading to an alternative, transformation-based approach to statistical inference.
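As a much-simplified illustration of prediction without a parametric model, here is a residual-resampling sketch in the spirit of the Model-Free Bootstrap described above. It is not the book's own algorithm: the i.i.d. setting, the sample-mean predictor, and the data values are illustrative assumptions:

```python
import random

def bootstrap_prediction_interval(y, n_boot=5000, alpha=0.1, seed=0):
    """(1 - alpha) prediction interval for the next observation of an
    i.i.d. sample: simulate future values as mean + a resampled residual."""
    rng = random.Random(seed)
    mean = sum(y) / len(y)
    residuals = [v - mean for v in y]
    future = sorted(mean + rng.choice(residuals) for _ in range(n_boot))
    lo = future[int(n_boot * alpha / 2)]
    hi = future[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

# Illustrative measurements.
data = [9.8, 10.2, 10.1, 9.9, 10.4, 9.6, 10.0, 10.3]
lo, hi = bootstrap_prediction_interval(data)
```

No normality assumption enters anywhere: the interval's shape comes entirely from the empirical residuals, which is the frequentist, assumption-light flavor of interval the blurb contrasts with model-based alternatives.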
This book presents a treatise on the theory and modeling of second-order stationary processes, including an exposition on selected application areas that are important in the engineering and applied sciences. The foundational issues regarding stationary processes dealt with in the beginning of the book have a long history, starting in the 1940s with the work of Kolmogorov, Wiener, Cramer and his students, in particular Wold, and have since been refined and complemented by many others. Problems concerning the filtering and modeling of stationary random signals and systems have also been addressed and studied, fostered by the advent of modern digital computers, since the fundamental work of R.E. Kalman in the early 1960s. The book offers a unified and logically consistent view of the subject based on simple ideas from Hilbert space geometry and coordinate-free thinking. In this framework, the concepts of stochastic state space and state space modeling, based on the notion of the conditional independence of past and future flows of the relevant signals, are revealed to be fundamentally unifying ideas. The book, based on over 30 years of original research, represents a valuable contribution that will inform the fields of stochastic modeling, estimation, system identification, and time series analysis for decades to come. It also provides the mathematical tools needed to grasp and analyze the structures of algorithms in stochastic systems theory.
An incomparably useful examination of statistical methods for comparison The nature of doing science, be it natural or social, inevitably calls for comparison. Statistical methods are at the heart of such comparison, for they not only help us gain understanding of the world around us but often define how our research is to be carried out. The need to compare between groups is best exemplified by experiments, which have clearly defined statistical methods. However, true experiments are not always possible. What complicates the matter more is a great deal of diversity in factors that are not independent of the outcome. Statistical Group Comparison brings together a broad range of statistical methods for comparison developed over recent years. The book covers a wide spectrum of topics from the simplest comparison of two means or rates to more recently developed statistics including double generalized linear models and Bayesian as well as hierarchical methods. Coverage includes:
Examples are drawn from the social, political, economic, and biomedical sciences; many can be implemented using widely available software. Because of the range and generality of the statistical methods covered, researchers across many disciplines, well beyond the social, political, economic, and biomedical sciences, will find the book a convenient reference for many a research situation where comparisons come naturally.
"[T]he author has packaged an excellent and modern set of topics around the development and use of quantitative models.... If you need to learn about resampling, this book would be a good place to start." (Technometrics, review of the Second Edition) This thoroughly revised and expanded third edition is a practical guide to data analysis using the bootstrap, cross-validation, and permutation tests. Requiring only minimal mathematics beyond algebra, the book provides a table-free introduction to data analysis utilizing numerous exercises, practical data sets, and freely available statistical shareware. Topics and Features: * Practical presentation covers both the bootstrap and permutation tests, along with the program code necessary to put them to work. * Includes a systematic guide to selecting the correct procedure for a particular application. * Detailed coverage of classification, estimation, experimental design, hypothesis testing, and modeling. * Suitable for both classroom use and individual self-study. New to the Third Edition: * Procedures are grouped by application; a prefatory chapter guides readers to the appropriate reading matter. * Program listings and screen shots now accompany each resampling procedure: whether one programs in C++, CART, Blossom, Box Sampler (an Excel add-in), EViews, MATLAB, R, Resampling Stats, SAS macros, S-PLUS, Stata, or StatXact, readers will find the program listings and screen shots needed to put each resampling procedure into practice. * To simplify programming, code for readers to download and apply is posted at http://www.springeronline.com/0-8176-4386-9. * Notation has been simplified and, where possible, eliminated. * A glossary and answers to selected exercises are included. With its accessible style and intuitive topic development, the book is an excellent basic resource for the power, simplicity, and versatility of resampling methods.
It is an essential resource for statisticians, biostatisticians, statistical consultants, students, and research professionals in the biological, physical, and social sciences, engineering, and technology.
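A permutation test, one of the resampling procedures the book covers, can be sketched in a few lines. The two groups below are made-up illustrative data, and the add-one correction in the p-value is one common convention, not necessarily the book's:

```python
import random

def permutation_test(a, b, n_perm=10000, seed=0):
    """Two-sided permutation test for a difference in group means.

    Repeatedly reshuffles the pooled data into two groups of the original
    sizes and counts how often the shuffled mean difference is at least as
    extreme as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)  # add-one to avoid a zero p-value

# Illustrative, well-separated groups: the test should reject at the 5% level.
group_a = [12.1, 11.8, 12.4, 12.0, 11.9]
group_b = [10.2, 10.5, 9.9, 10.1, 10.4]
p = permutation_test(group_a, group_b)
```

Because the groups do not overlap at all, only the shuffles reproducing the original split (or its mirror image) match the observed difference, so the p-value lands near 2/C(10,5), well below 0.05.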
This new edition offers a comprehensive introduction to the analysis of data using Bayes' rule. It generalizes Gaussian error intervals to situations in which the data follow distributions other than Gaussian. This is particularly useful when the observed parameter is barely above the background, or when the histogram of multiparametric data contains many empty bins, so that the determination of the validity of a theory cannot be based on the chi-squared criterion. In addition to the solutions of practical problems, this approach provides an epistemic insight: the logic of quantum mechanics is obtained as the logic of unbiased inference from counting data. New sections feature factorizing parameters, commuting parameters, observables in quantum mechanics, the art of fitting with coherent and with incoherent alternatives, and fitting with the multinomial distribution. Additional problems and examples help deepen the knowledge. Requiring no knowledge of quantum mechanics, the book is written at an introductory level, with many examples and exercises, for advanced undergraduate and graduate students in the physical sciences planning to work, or already working, in fields such as medical physics, nuclear physics, quantum mechanics, and chaos.
This book offers a collection of recent contributions and emerging ideas in robust statistics presented at the International Conference on Robust Statistics 2015 (ICORS 2015), held in Kolkata during 12-16 January 2015. The book explores the applicability of robust methods in non-traditional areas, including the use of new techniques such as skew and mixture-of-skew distributions, scaled Bregman divergences, and multilevel functional data methods; application areas include circular data models and the prediction of mortality and life expectancy. The contributions are both theoretical and applied in nature. Robust statistics is a relatively young branch of the statistical sciences that is rapidly emerging as the bedrock of statistical analysis in the 21st century due to its flexible nature and wide scope. Robust statistics supports the application of parametric and other inference techniques over a broader domain than the strictly interpreted model scenarios employed in classical statistical methods. The aim of the ICORS conference, which has been organized annually since 2001, is to bring together researchers interested in robust statistics, data analysis and related areas. The conference is meant for theoretical and applied statisticians, data analysts from other fields, leading experts, junior researchers and graduate students. The ICORS meetings offer a forum for discussing recent advances and emerging ideas in statistics with a focus on robustness, and encourage informal contacts and discussions among all the participants. They also play an important role in maintaining a cohesive group of international researchers interested in robust statistics and related topics, whose interactions transcend the meetings and endure year round.
This book prepares students to execute the quantitative and computational needs of the finance industry. The quantitative methods are explained in detail, with examples drawn from real financial problems such as option pricing, risk management, and portfolio selection. Code is provided in the R programming language to execute the methods, and tables and figures, often with real data, illustrate it. References to related work help the reader pursue areas of specific interest in further detail. A comprehensive background in economic, statistical, mathematical, and computational theory strengthens the understanding. The coverage is broad, and linkages between different sections are explained. The primary audience is graduate students, though the book should also be accessible to advanced undergraduates; practitioners working in the finance industry will also benefit.
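The book's own code is in R; as a language-neutral illustration of the kind of computation option pricing involves, here is a minimal Monte Carlo pricer for a European call under geometric Brownian motion. This is the standard textbook model, not necessarily the book's exact treatment, and the parameter values are illustrative:

```python
import math
import random

def mc_european_call(s0, strike, rate, sigma, t, n_paths=100000, seed=0):
    """Monte Carlo price of a European call option.

    Assumes the terminal price follows geometric Brownian motion:
    S_T = S_0 * exp((r - sigma^2/2) t + sigma sqrt(t) Z), Z ~ N(0, 1).
    Discounts the average payoff max(S_T - K, 0) back to today.
    """
    rng = random.Random(seed)
    disc = math.exp(-rate * t)
    drift = (rate - 0.5 * sigma ** 2) * t
    vol = sigma * math.sqrt(t)
    total_payoff = 0.0
    for _ in range(n_paths):
        st = s0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        total_payoff += max(st - strike, 0.0)
    return disc * total_payoff / n_paths

# At-the-money call: S0 = K = 100, r = 5%, sigma = 20%, one year to expiry.
price = mc_european_call(s0=100, strike=100, rate=0.05, sigma=0.2, t=1.0)
```

With 100,000 paths the estimate lands close to the Black-Scholes closed-form value for these parameters (about 10.45), with a Monte Carlo standard error of a few cents.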
This book contains the lectures given at the II Conference on Dynamics and Randomness, held at the Centro de Modelamiento Matematico of the Universidad de Chile from December 9th to 13th, 2002. This meeting brought together mathematicians, theoretical physicists, theoretical computer scientists, and graduate students interested in fields related to probability theory, ergodic theory, and symbolic and topological dynamics. We would like to express our gratitude to all the participants of the conference and to the people who contributed to its organization. In particular, we thank Pierre Collet, Bernhard Rost and Karl Petersen for their scientific advice. We want to thank warmly the authors of each chapter for their stimulating lectures and for their manuscripts devoted to a variety of appealing subjects in probability and dynamics: Jean Bertoin for his course on Some aspects of random fragmentation in continuous time; Anton Bovier for his course on Metastability and ageing in stochastic dynamics; Steve Lalley for his course on Algebraic systems of generating functions and return probabilities for random walks; Elon Lindenstrauss for his course on Recurrent measures and measure rigidity; Sylvie Meleard for her course on Stochastic particle approximations for two-dimensional Navier-Stokes equations; and Anatoly Vershik for his course on Random and universal metric spaces.
This edited book first consolidates the results of the EU-funded EDISON project (Education for Data Intensive Science to Open New science frontiers), which developed training material and information to assist educators, trainers, employers, and research infrastructure managers in identifying, recruiting and inspiring the data science professionals of the future. It then deepens the presentation of the information and knowledge gained to allow for easier assimilation by the reader. The contributed chapters are presented in sequence, each chapter picking up from the end point of the previous one. After the initial book and project overview, the chapters present the relevant data science competencies and body of knowledge, the model curriculum required to teach the required foundations, profiles of professionals in this domain, and use cases and applications. The text is supported with appendices on related process models. The book can be used to develop new courses in data science, evaluate existing modules and courses, draft job descriptions, and plan and design efficient data-intensive research teams across scientific disciplines.
Our intention in preparing this book was to present, in as simple a manner as possible, those branches of error analysis which find direct applications in solving various problems in engineering practice. The main reason for writing this text was the lack of such an approach in existing books dealing with the error calculus. Most such books are devoted to mathematical statistics and to probability theory, and their range of applications is usually limited to the problems of general statistics and to the analysis of errors in various measuring techniques. Much less attention is paid in these books to two-dimensional and three-dimensional distributions, and almost no attention is given to problems connected with two-dimensional and three-dimensional vectorial functions of independent random variables. The theory of such vectorial functions finds new applications connected, for example, with the analysis of the positioning accuracy of various mechanisms, among them robot manipulators and automatically controlled earth-moving and loading machines, such as excavators.
This textbook has been developed from the lecture notes for a one-semester course on stochastic modelling. It reviews the basics of probability theory and then covers the following topics: Markov chains, Markov decision processes, jump Markov processes, elements of queueing theory, basic renewal theory, elements of time series and simulation. Rigorous proofs are often replaced with sketches of arguments -- with indications as to why a particular result holds, and also how it is connected with other results -- and illustrated by examples. Wherever possible, the book includes references to more specialised texts containing both proofs and more advanced material related to the topics covered.
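As a small taste of the Markov chain material such a course covers, the stationary distribution of a finite chain can be approximated by repeatedly applying the transition matrix to an initial distribution. The two-state "weather" chain below is an illustrative example, not one taken from the book:

```python
def stationary_distribution(P, n_iter=1000):
    """Approximate the stationary distribution of a finite Markov chain.

    Starts from the uniform distribution and repeatedly applies the
    row-stochastic transition matrix P (power iteration); for an ergodic
    chain this converges to the unique pi with pi = pi P.
    """
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(n_iter):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Illustrative weather chain: sunny stays sunny with prob 0.9,
# rainy turns sunny with prob 0.5.
P = [[0.9, 0.1],
     [0.5, 0.5]]
pi = stationary_distribution(P)
```

Balancing the flow between states (pi_sunny * 0.1 = pi_rainy * 0.5) gives the exact answer (5/6, 1/6), which the iteration recovers to machine precision.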
This textbook discusses central statistical concepts and their use in business and economics. To endure the hardship of abstract statistical thinking, business and economics students need to see interesting applications at an early stage. Accordingly, the book predominantly focuses on exercises, several of which draw on simple applications of non-linear theory. The main body presents central ideas in a simple, straightforward manner; the exposition is concise, without sacrificing rigor. The book bridges the gap between theory and applications, with most exercises formulated in an economic context. Its simplicity of style makes the book suitable for students at any level, and every chapter starts out with simple problems. Several exercises, however, are more challenging, as they are devoted to the discussion of non-trivial economic problems where statistics plays a central part.
This book brings together two major trends: data science and blockchains. It is one of the first books to systematically cover the analytics aspects of blockchains, with the goal of linking traditional data mining research communities with novel data sources. Data science and big data technologies can be considered cornerstones of the data-driven digital transformation of organizations and society. The concept of the blockchain is predicted to enable and spark transformation on a par with that associated with the invention of the Internet. Cryptocurrencies are the first successful use case of highly distributed blockchains, just as the World Wide Web was for the Internet. The book takes the reader through basic data exploration topics, proceeding systematically, method by method, through supervised and unsupervised learning approaches and information visualization techniques, all the way to understanding blockchain data from the network science perspective. Chapters introduce the cryptocurrency blockchain data model and methods to explore it using structured query language, association rules, clustering, classification, visualization, and network science. Each chapter introduces basic concepts, presents examples with real cryptocurrency blockchain data, and offers exercises and questions for further discussion. This approach is intended to serve as a good starting point for undergraduate and graduate students learning data science topics through cryptocurrency blockchain examples. It is also aimed at researchers and analysts who already possess good analytical and data skills but do not yet have the specific knowledge to tackle analytic questions about blockchain transactions. Readers will improve their knowledge of the essential data science techniques needed to turn mere transactional information into social, economic, and business insights.
This book focuses on metaheuristic methods and their applications to real-world problems in engineering. The first part describes some key metaheuristic methods, such as Bat Algorithms, Particle Swarm Optimization, Differential Evolution, and Particle Collision Algorithms. Improved versions of these methods and strategies for parameter tuning are also presented, both of which are essential for the practical use of these important computational tools. The second part then applies metaheuristics to problems, mainly in Civil, Mechanical, Chemical, Electrical, and Nuclear Engineering. Other methods, such as the Flower Pollination Algorithm, Symbiotic Organisms Search, the Cross-Entropy Algorithm, Artificial Bee Colonies, Population-Based Incremental Learning, Cuckoo Search, and Genetic Algorithms, are also presented. The book is rounded out by recently developed strategies and hybrid improved versions of existing methods, such as the Lightning Optimization Algorithm, Differential Evolution with Particle Collisions, and Ant Colony Optimization with Dispersion: state-of-the-art approaches for the application of computational intelligence to engineering problems. The wide variety of methods and applications, as well as the original results on problems of practical engineering interest, represent the primary differentiation and distinctive quality of this book. Furthermore, it gathers contributions by authors from four countries, some of whom are the original proponents of the methods presented, and 18 research centers around the globe.
The book is a comprehensive, self-contained introduction to the mathematical modeling and analysis of disease transmission models. It includes (i) an introduction to the main concepts of compartmental models including models with heterogeneous mixing of individuals and models for vector-transmitted diseases, (ii) a detailed analysis of models for important specific diseases, including tuberculosis, HIV/AIDS, influenza, Ebola virus disease, malaria, dengue fever and the Zika virus, (iii) an introduction to more advanced mathematical topics, including age structure, spatial structure, and mobility, and (iv) some challenges and opportunities for the future. There are exercises of varying degrees of difficulty, and projects leading to new research directions. For the benefit of public health professionals whose contact with mathematics may not be recent, there is an appendix covering the necessary mathematical background. There are indications which sections require a strong mathematical background so that the book can be useful for both mathematical modelers and public health professionals.
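A minimal sketch of the compartmental modeling idea described above, assuming the basic SIR model with simple Euler time-stepping; the parameter values are illustrative, not drawn from the book:

```python
def sir(beta, gamma, s0, i0, days, dt=0.1):
    """Euler integration of the basic SIR compartmental model.

    s, i, r are fractions of a closed population (s + i + r = 1).
    beta is the transmission rate, gamma the recovery rate, so the
    basic reproduction number is R0 = beta / gamma.
    """
    s, i, r = s0, i0, 1.0 - s0 - i0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt   # S -> I flow this step
        new_rec = gamma * i * dt      # I -> R flow this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return s, i, r

# R0 = 0.5 / 0.2 = 2.5: a large outbreak that nevertheless leaves a
# fraction of the population uninfected (the final size effect).
s, i, r = sir(beta=0.5, gamma=0.2, s0=0.99, i0=0.01, days=365)
```

After a year the epidemic has burned out (i is essentially zero), yet s remains well above zero: the outbreak ends because susceptibles become scarce, not because everyone was infected, which is exactly the kind of qualitative insight compartmental models deliver.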
This book compiles and critically discusses modern engineering system degradation models and their impact on engineering decisions. In particular, the authors focus on modeling the uncertain nature of degradation, considering both conceptual discussions and formal mathematical formulations. It also describes the basic concepts and the various modeling aspects of life-cycle analysis (LCA), highlights the role of degradation in LCA, and defines optimum design and operation parameters. Given the relationship between operational decisions and the performance of the system's condition over time, maintenance models are also discussed. The concepts and models presented have applications in a large variety of engineering fields such as Civil, Environmental, Industrial, Electrical and Mechanical engineering. However, special emphasis is given to problems related to large infrastructure systems. The book is intended to be used both as a reference resource for researchers and practitioners and as an academic text for courses related to risk and reliability, infrastructure performance modeling and life-cycle assessment.
Econometric theory, as presented in textbooks and the econometric literature generally, is a somewhat disparate collection of findings. In essence it is a set of demonstrated results that grows over time, each logically based on a specific set of axioms or assumptions; yet at every moment these results form an incomplete body of knowledge rather than a finished work. The practice of econometric theory consists of selecting from, applying, and evaluating this literature, so as to test its applicability and range. The creation, development, and use of computer software has led applied economic research into a new age. This book describes the history of econometric computation from 1950 to the present day, based upon an interactive survey involving the collaboration of the many econometricians who have designed and developed this software. It identifies each of the econometric software packages that are made available to and used by economists and econometricians worldwide.
Heterogeneity, or mixtures, are ubiquitous in genetics. Even for data as simple as mono-genic diseases, populations are a mixture of affected and unaffected individuals. Still, most statistical genetic association analyses, designed to map genes for diseases and other genetic traits, ignore this phenomenon. In this book, we document methods that incorporate heterogeneity into the design and analysis of genetic and genomic association data. Among the key qualities of our developed statistics is that they include mixture parameters as part of the statistic, a unique component for tests of association. A critical feature of this work is the inclusion of at least one heterogeneity parameter when performing statistical power and sample size calculations for tests of genetic association. We anticipate that this book will be useful to researchers who want to estimate heterogeneity in their data, develop or apply genetic association statistics where heterogeneity exists, and accurately evaluate statistical power and sample size for genetic association through the application of robust experimental design.
"Statistical Analysis of Management Data" provides a comprehensive approach to multivariate statistical analyses that are important for researchers in all fields of management, including finance, production, accounting, marketing, strategy, technology, and human resources. This book is especially designed to provide doctoral students with a theoretical knowledge of the concepts underlying the most important multivariate techniques and an overview of actual applications. It offers a clear, succinct exposition of each technique, with emphasis on when each technique is appropriate and how to use it. This second edition, fully revised, updated, and expanded, reflects the most current evolution in the methods for data analysis in management and the social sciences. In particular, it places a greater emphasis on measurement models, and includes new chapters and sections on: confirmatory factor analysis, canonical correlation analysis, cluster analysis, analysis of covariance structure, and multi-group confirmatory factor analysis and analysis of covariance structures. Featuring numerous examples, the book may serve as an advanced text or as a resource for applied researchers in industry who want to understand the foundations of the methods and to learn how they can be applied using widely available statistical software.
Provides, in an organized manner, characterizations of univariate probability distributions, with many new results published in this area since the 1978 work of Galambos & Kotz, Characterizations of Probability Distributions (Springer), together with applications of the theory in model fitting and prediction.