This book commemorates the scientific contributions of the distinguished statistician Andrei Yakovlev. It reflects upon Dr. Yakovlev's many research interests, including stochastic modeling and the analysis of microarray data, and throughout the book it emphasizes applications of the theory in biology, medicine and public health. The contributions to this volume are divided into two parts. Part A consists of original research articles, which can be roughly grouped into four thematic areas: (i) branching processes, especially as models for cell kinetics; (ii) multiple testing issues as they arise in the analysis of biological data; (iii) applications of mathematical models and of new inferential techniques in epidemiology; and (iv) contributions to statistical methodology, with an emphasis on the modeling and analysis of survival time data. Part B consists of methodological research reported as a short communication, ending with some personal reflections on research fields associated with Andrei and on his approach to science. The Appendix contains an abbreviated vita and a list of Andrei's publications, complete as far as we know. The contributions in this book are written by Dr. Yakovlev's collaborators and by notable statisticians, including former presidents of the Institute of Mathematical Statistics and of the Statistics Section of the AAAS. Dr. Yakovlev's research appeared in four books and almost 200 scientific papers in mathematics, statistics, biomathematics and biology journals. Ultimately, this book offers a tribute to Dr. Yakovlev's work and recognizes the legacy of his contributions to the biostatistics community.
This book covers several new areas in the growing field of analytics, with innovative applications in different business contexts, and consists of selected presentations from the 6th IIMA International Conference on Advanced Data Analysis, Business Analytics and Intelligence. The book is conceptually divided into seven parts. The first part gives expository briefs on topics of current academic and practitioner interest, such as data streams, binary prediction and reliability shock models. In the second part, the contributions look at artificial intelligence applications, with chapters related to explainable AI, personalized search and recommendation, and customer retention management. The third part deals with credit risk analytics, with chapters on optimization of credit limits and mitigation of agricultural lending risks. In its fourth part, the book explores analytics and data mining in the retail context. In the fifth part, the book presents applications of analytics to operations management, with chapters related to improvement of furnace operations, forecasting of food indices and analytics for improving student learning outcomes. The sixth part has contributions related to adaptive designs in clinical trials, stochastic comparisons of systems with heterogeneous components and stacking of models. The seventh and final part contains chapters on finance and economics topics, such as the role of infrastructure and taxation in the economic growth of countries and the connectedness of markets with heterogeneous agents. These different themes ensure that the book will be of great value to practitioners, postgraduate students, research scholars and faculty teaching advanced business analytics courses.
To cope with the new running conditions of the ALICE experiment at the Large Hadron Collider at CERN, a new integrated circuit named SAMPA has been created that can process 32 analogue channels, convert them to digital, perform filtering and compression, and transmit the data on high-speed links to the data acquisition system. The main purpose of this work is to specify, design, test and verify the digital signal processing part of the SAMPA device so as to accommodate the requirements of the detectors involved. Innovative solutions have been employed to reduce the bandwidth required by the detectors, along with adaptations to ease data handling later in the processing chain. The new SAMPA device was built to replace two existing circuits while reducing current consumption and doubling the number of processing channels. About 50,000 of the devices will be installed in the Time Projection Chamber and Muon Chamber detectors of the ALICE experiment.
This proceedings volume highlights the latest research and developments in psychometrics and statistics. It represents selected and peer-reviewed presentations given at the 84th Annual International Meeting of the Psychometric Society (IMPS), organized by Pontificia Universidad Catolica de Chile and held in Santiago, Chile, from July 15th to 19th, 2019. The IMPS is one of the largest international meetings on quantitative measurement in education, psychology and the social sciences. It draws approximately 500 participants from around the world, featuring paper and poster presentations, symposia, workshops, keynotes, and invited presentations. Leading experts and promising young researchers have written the included chapters, which address a large variety of topics including, but not limited to, item response theory, multistage adaptive testing, and cognitive diagnostic models. This volume is the 8th in a series of recent volumes to cover research presented at the IMPS.
This book is a practical guide to the uncertainty analysis of computer model applications. Used in many areas, such as engineering, ecology and economics, computer models are subject to various uncertainties at the level of model formulations, parameter values and input data. Naturally, it would be advantageous to know the combined effect of these uncertainties on the model results as well as whether the state of knowledge should be improved in order to reduce the uncertainty of the results most effectively. The book supports decision-makers, model developers and users in their argumentation for an uncertainty analysis and assists them in the interpretation of the analysis results.
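To make this concrete, here is a minimal, illustrative sketch of Monte Carlo uncertainty propagation of the kind the book discusses; the toy decay model and the parameter distributions below are assumptions invented for this example, not taken from the book.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def model(k, c):
    """Toy computer model: first-order decay c * exp(-k t), evaluated at t = 2."""
    return c * np.exp(-k * 2.0)

# Illustrative parameter uncertainties (assumed, not from the book):
# decay rate k ~ Uniform(0.1, 0.5), initial concentration c ~ Normal(10, 1).
n = 10_000
k = rng.uniform(0.1, 0.5, n)
c = rng.normal(10.0, 1.0, n)

# Propagate the input uncertainty through the model by brute force.
y = model(k, c)
print(f"mean output = {y.mean():.3f}")
print(f"95% interval = {np.percentile(y, [2.5, 97.5]).round(3)}")

# Crude sensitivity screen: which input drives the output uncertainty?
print("corr(k, y) =", np.corrcoef(k, y)[0, 1].round(3))
print("corr(c, y) =", np.corrcoef(c, y)[0, 1].round(3))
```

A screen like the last two lines is what motivates the second question above: if one input dominates the output variance, improving the state of knowledge about that input reduces the uncertainty of the results most effectively.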
This book comprises presentations delivered at the 5th Workshop on Biostatistics and Bioinformatics, held in Atlanta on May 5-7, 2017. Featuring twenty-two selected papers from the workshop, it showcases the most current advances in the field, presenting new methods, theories, and case applications at the frontiers of biostatistics, bioinformatics, and interdisciplinary areas. Biostatistics and bioinformatics have been playing a key role in statistics and other scientific research fields in recent years. The goal of the workshop was to stimulate research, foster interaction among researchers in the field, and offer opportunities for learning and facilitating research collaborations in the era of big data. The resulting volume offers timely insights for researchers, students, and industry practitioners.
This book provides insights into important new developments in the area of statistical quality control and critically discusses methods used in on-line and off-line statistical quality control. The book is divided into three parts: Part I covers statistical process control, Part II deals with design of experiments, while Part III focuses on fields such as reliability theory and data quality. The 12th International Workshop on Intelligent Statistical Quality Control (Hamburg, Germany, August 16-19, 2016) was jointly organized by Professors Sven Knoth and Wolfgang Schmid. The contributions presented in this volume were carefully selected and reviewed by the conference's scientific program committee. Taken together, they bridge the gap between theory and practice, making the book of interest to both practitioners and researchers in the field of quality control.
This contributed book focuses on major aspects of statistical quality control, shares insights into important new developments in the field, and adapts established statistical quality control methods for use in areas such as big data, network analysis and medical applications. The content is divided into two parts, the first of which mainly addresses statistical process control, also known as statistical process monitoring. In turn, the second part explores selected topics in statistical quality control, including measurement uncertainty analysis and data quality. The peer-reviewed contributions gathered here were originally presented at the 13th International Workshop on Intelligent Statistical Quality Control, ISQC 2019, held in Hong Kong on August 12-14, 2019. Taken together, they bridge the gap between theory and practice, making the book of interest to both practitioners and researchers in the field of statistical quality control.
Statistics is arguably the main means through which maths appears in non-maths courses. Students across a broad range of disciplines encounter statistics, in most cases unexpectedly, and will need to brush up their skills in order to research, analyse and present their data effectively. Topics such as methods of presentation, distributions and confidence limits appear very often, and almost every course involves analysis of data at some point. De-mystifying the basics for even the most maths-terrified of students, this book will inspire confident and accurate use of statistics for non-maths courses.
This book brings together carefully selected, peer-reviewed works on mathematical biology presented at the BIOMAT International Symposium on Mathematical and Computational Biology, which was held at the Institute of Numerical Mathematics, Russian Academy of Sciences, in October 2017, in Moscow. Topics covered include, but are not limited to, the evolution of spatial patterns on metapopulations, problems related to cardiovascular diseases and modeled by boundary control techniques in hemodynamics, algebraic modeling of the genetic code, and multi-step biochemical pathways. Also, new results are presented on topics like pattern recognition of probability distribution of amino acids, somitogenesis through reaction-diffusion models, mathematical modeling of infectious diseases, and many others. Experts, scientific practitioners, graduate students and professionals working in various interdisciplinary fields will find this book a rich resource for research and applications alike.
This book presents the fundamentals of data assimilation and reviews the application of satellite remote sensing in hydrological data assimilation. Although hydrological models are valuable tools to monitor and understand global and regional water cycles, they are subject to various sources of errors. Satellite remote sensing data provides a great opportunity to improve the performance of models through data assimilation.
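As a concrete illustration of the sequential update at the heart of many assimilation schemes, here is a scalar Kalman-filter analysis step in which a satellite retrieval corrects a model forecast; the variable names and all numbers are invented for illustration and are not taken from the book.

```python
# Scalar Kalman analysis step: blend a model forecast with a satellite
# observation, weighting each by its error variance. All values are
# illustrative assumptions.
x_f, P_f = 0.30, 0.04    # forecast (e.g. soil moisture) and its error variance
y_obs, R = 0.24, 0.01    # satellite retrieval and its error variance

K = P_f / (P_f + R)              # Kalman gain: trust the obs more when R is small
x_a = x_f + K * (y_obs - x_f)    # analysis (updated) state
P_a = (1.0 - K) * P_f            # analysis error variance, always <= P_f

print(f"gain = {K:.2f}, analysis = {x_a:.3f}, variance = {P_a:.4f}")
```

Note that the analysis variance is never larger than the forecast variance: in this linear-Gaussian setting, assimilating an observation can only tighten the state estimate, which is why such data improve model performance.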
This book presents a general framework analysis of sovereignty in blockchain based on the concept of blockchain technology, and specifically discusses the three theoretical foundations of sovereignty in blockchain: data sovereignty theory, social trust theory, and smart contract theory. It also explores the evolution of laws concerning data and digital rights, how to build trust mechanisms for digital rights transactions, as well as contract signing and the implementation of digital rights transactions.
In today's complex industrial world, the pace of change is remarkable. The amount of information that needs to be analyzed is very large, while the time available for analysis is ever more limited. Industries and firms of all sizes want to increase productivity and sustainability to keep their competitive edge in the marketplace. One of the best tools for achieving this is the application of Quality Engineering Techniques (QET). This book introduces an integrated model and the numerical applications for implementing it.
Richard De Veaux, Paul Velleman, and David Bock wrote Intro Stats with the goal that you have as much fun reading it as they did writing it. Maintaining a conversational, humorous, and informal writing style, this new edition engages readers from the first page. The authors focus on statistical thinking throughout the text and rely on technology for calculations, so that students can focus on developing their conceptual understanding. Innovative Think/Show/Tell examples provide a problem-solving framework and, more importantly, a way to think through any statistics problem and present the results. New to the Fourth Edition is a streamlined presentation that keeps students focused on what's most important while retaining helpful features. An updated organization divides chapters into sections, with specific learning objectives to keep students on track, and a detailed table of contents assists with navigation through the new layout. Single-concept exercises complement the existing mid- to hard-level exercises for basic skill development.
This book helps students, researchers, and practicing engineers to understand the theoretical framework of control and system theory for discrete-time stochastic systems so that they can then apply its principles to their own stochastic control systems and to the solution of control, filtering, and realization problems for such systems. Applications of the theory in the book include the control of ships, shock absorbers, traffic and communications networks, and power systems with fluctuating power flows. The focus of the book is a stochastic control system defined for a spectrum of probability distributions including Bernoulli, finite, Poisson, beta, gamma, and Gaussian distributions. The concepts of observability and controllability of a stochastic control system are defined and characterized. Each output process considered is, under appropriate conditions, represented by a stochastic system called a stochastic realization. The existence of a control law is related to stochastic controllability, while the existence of a filter system is related to stochastic observability. Stochastic control with partial observations is based on the existence of a stochastic realization of the filtration of the observed process.
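For orientation, the Gaussian member of that spectrum is often written in the following state-space form; the notation below is a standard textbook convention used here as a sketch, not necessarily the book's own.

```latex
% Discrete-time Gaussian stochastic control system (standard notation):
\[
  x_{t+1} = A x_t + B u_t + M v_t, \qquad
  y_t = C x_t + N v_t, \qquad
  v_t \sim \mathcal{N}(0, I)\ \text{i.i.d.},
\]
% where the control input $u_t$ may depend only on the outputs observed
% so far, $y_0, \dots, y_t$. Informally, stochastic controllability asks
% which state distributions the inputs can steer the system to, and
% stochastic observability asks what the output process reveals about
% the state.
```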
This textbook presents a unified and rigorous approach to best linear unbiased estimation and prediction of parameters and random quantities in linear models, as well as other theory upon which much of the statistical methodology associated with linear models is based. The most distinctive feature of the book is that each major concept or result is illustrated with one or more concrete examples or special cases. Commonly used methodologies based on the theory are presented in methodological interludes scattered throughout the book, along with a wealth of exercises that will benefit students and instructors alike. Generalized inverses are used throughout, so that the model matrix and various other matrices are not required to have full rank. Considerably more emphasis is given to estimability, partitioned analyses of variance, constrained least squares, effects of model misspecification, and most especially prediction than in many other textbooks on linear models. This book is intended for master's and PhD students with a basic grasp of statistical theory, matrix algebra and applied regression analysis, and for instructors of linear models courses. Solutions to the book's exercises are available in the companion volume Linear Model Theory - Exercises and Solutions by the same author.
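To illustrate the role generalized inverses play in this theory, here is the standard least-squares result the description alludes to, stated in conventional notation as a sketch.

```latex
% Linear model y = X\beta + e, with E(e) = 0 and Var(e) = \sigma^2 I,
% where X need not have full column rank. The normal equations
\[
  X'X\hat{\beta} = X'y
\]
% are solved using any generalized inverse (X'X)^-:
\[
  \hat{\beta} = (X'X)^{-} X'y .
\]
% A linear function \lambda'\beta is estimable iff \lambda' = a'X for
% some vector a; for estimable \lambda'\beta, the value \lambda'\hat{\beta}
% is invariant to the choice of (X'X)^- and is the best linear unbiased
% estimator (BLUE) of \lambda'\beta.
```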
This thesis presents the application of the non-perturbative, or functional, renormalization group to study the physics of critical stationary states in systems out of equilibrium. Two different systems are studied. The first is the diffusive epidemic process, a stochastic process which models the propagation of an epidemic within a population. This model exhibits a phase transition, peculiar to out-of-equilibrium systems, between a stationary state where the epidemic is extinct and one where it survives. The present study helps to clarify subtle issues about the underlying symmetries of this process and the possible universality classes of its phase transition. The second system is fully developed homogeneous, isotropic and incompressible turbulence. The stationary state of this driven-dissipative system shows an energy cascade whose phenomenology is complex, with partial scale invariance intertwined with what is called intermittency. In this work, analytical expressions for the space-time dependence of multi-point correlation functions of the turbulent state in two and three dimensions are derived. This result is noteworthy in that it does not rely on phenomenological input apart from the Navier-Stokes equation, and in that it becomes exact in the physically relevant limit of large wave-numbers. The obtained correlation functions show how scale invariance is broken in a subtle way, related to intermittency corrections.
This open access book demonstrates how data quality issues affect all surveys and proposes methods that can be utilised to deal with the observable components of survey error in a statistically sound manner. It begins by profiling the post-Apartheid period in South Africa's history, when the sampling frame and survey methodology for household surveys were undergoing periodic changes due to the changing geopolitical landscape in the country. It shows how different components of error had disproportionate magnitudes in different survey years, including coverage error, sampling error, nonresponse error, measurement error, processing error and adjustment error. The parameters of interest concern the earnings distribution, but despite this outcome of interest, the discussion is generalizable to any question in a random sample survey of households or firms. The book then investigates questionnaire design and item nonresponse by building a response propensity model for the employee income question in two South African labour market surveys: the October Household Survey (OHS, 1997-1999) and the Labour Force Survey (LFS, 2000-2003). This time period isolates a period of changing questionnaire design for the income question. Finally, the book is concerned with how to treat employee income data with a mixture of continuous data, bounded response data and nonresponse. A variable with this mixture of data types is called coarse data. Because the income question consists of two parts -- an initial, exact income question and a bounded income follow-up question -- the resulting statistical distribution of employee income is both continuous and discrete. The book shows researchers how to appropriately deal with coarse income data using multiple imputation, as sketched below. The take-home message from this book is that researchers have a responsibility to treat data quality concerns in a statistically sound manner, rather than making adjustments to public-use data in arbitrary ways, often underpinned by indefensible assumptions about an implicit unobservable loss function in the data. The demonstration of how this can be done provides a replicable concept map with applicable methods that can be utilised in any sample survey.
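The following is a minimal, hypothetical sketch of multiply imputing bracketed ("coarse") income responses under an assumed lognormal earnings model; the data, bracket bounds, and distributional assumption are all invented for illustration and do not reproduce the book's procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Coarse income data: exact values, bracketed responses, and one
# nonresponse (all numbers invented for illustration).
exact = np.array([3500.0, 5200.0, 8100.0, 12000.0, 2600.0])
brackets = [(1000.0, 5000.0), (5000.0, 10000.0)]  # reported bounds only
m = 5                                             # number of imputations

# Fit a lognormal to the exact responses (a strong simplifying assumption;
# a real analysis would model covariates and the response mechanism).
mu, sigma = np.log(exact).mean(), np.log(exact).std(ddof=1)

def draw_in_bracket(lo, hi):
    """Draw from the fitted lognormal truncated to [lo, hi] by rejection."""
    while True:
        x = rng.lognormal(mu, sigma)
        if lo <= x <= hi:
            return x

# Build m completed datasets: exact values are kept, bracketed responses
# are imputed within their bounds, nonresponse from the full distribution.
completed = []
for _ in range(m):
    from_brackets = [draw_in_bracket(lo, hi) for lo, hi in brackets]
    from_missing = [rng.lognormal(mu, sigma)]     # the one nonrespondent
    completed.append(np.concatenate([exact, from_brackets, from_missing]))

# Rubin-style pooling of the point estimate across the m imputations.
means = [d.mean() for d in completed]
print(f"pooled mean income = {np.mean(means):.1f}")
```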
Expert practical and theoretical coverage of runs and scans. This volume presents both theoretical and applied aspects of runs and scans, and illustrates their important role in reliability analysis through various applications from science and engineering. Runs and Scans with Applications presents new and exciting content in a systematic and cohesive way in a single comprehensive volume, complete with relevant approximations and explanations of some limit theorems. The authors provide detailed discussions of both classical and current problems in the field.
Runs and Scans with Applications offers broad coverage of the subject in the context of reliability and life-testing settings and serves as an authoritative reference for students and professionals alike.
This book is a collection of conference proceedings mainly concerned with the problem class of nonlinear transport/diffusion/reaction systems, chief amongst these being the Navier-Stokes equations, porous-media flow problems and semiconductor-device equations. Of particular interest are unsolved problems and open questions from applications, and assessments of the various numerical methods used to treat them. A fundamental aim is to raise the overall awareness of a broad range of topical issues in scientific computing and numerical analysis, including multispecies/multiphysics problems, discretisation methods for nonlinear systems, mesh generation, adaptivity, linear algebraic solvers and preconditioners, and portable parallelisation.
Adequate health and health care is no longer possible without proper data supervision by modern machine learning methodologies like cluster models, neural networks, and other data mining methodologies. The current book is the first complete overview of machine learning methodologies for the medical and health sector, and it was written as a training companion and as a must-read, not only for physicians and students but also for anyone involved in the process and progress of health and health care. In this second edition the authors have removed the textual errors from the first edition. Also, the improved tables from the first edition have been replaced with the original tables from the software programs as applied, because, unlike the former, the latter were without error and readers were more familiar with them. The main purpose of the first edition was to provide stepwise analyses of the novel methods from data examples, but background information and clinical relevance information may have been somewhat lacking; therefore, each chapter now contains a section entitled "Background Information". Machine learning may be more informative, and may provide better sensitivity of testing, than traditional analytic methods. In the second edition a place has been given to the use of machine learning not only in the analysis of observational clinical data, but also in that of controlled clinical trials. Unlike the first edition, the second edition has drawings in full color, providing a helpful extra dimension to the data analysis. Several machine learning methodologies not yet covered in the first edition, but increasingly important today, have been included in this updated edition, for example negative binomial and Poisson regressions, sparse canonical analysis, Firth's bias-adjusted logistic analysis, omics research, and eigenvalues and eigenvectors.
This book is dedicated to the systematization and development of models, methods, and algorithms for queuing systems with correlated arrivals. After first setting up the basic tools needed for the study of queuing theory, the authors concentrate on complicated systems: multi-server systems with phase type distribution of service time or single-server queues with arbitrary distribution of service time or semi-Markovian service. They pay special attention to practically important retrial queues, tandem queues, and queues with unreliable servers. Mathematical models of networks and queuing systems are widely used for the study and optimization of various technical, physical, economic, industrial, and administrative systems, and this book will be valuable for researchers, graduate students, and practitioners in these domains.
Robust Integration of Model-Based Fault Estimation and Fault-Tolerant Control is a systematic examination of methods used to overcome the inevitable system uncertainties arising when a fault estimation (FE) function and a fault-tolerant controller interact as they are employed together to compensate for system faults and maintain robustly acceptable system performance. It covers the important subject of robust integration of FE and FTC with the aim of guaranteeing closed-loop stability. The reader's understanding of the theory is supported by the extensive use of tutorial examples, including some MATLAB(R)-based material available from the Springer website, and by industrial-applications-based material. The text is structured into three parts: Part I examines the basic concepts of FE and FTC, providing extensive insight into the importance of and challenges involved in their integration; Part II describes five effective strategies for the integration of FE and FTC: sequential, iterative, simultaneous, adaptive-decoupling, and robust decoupling; and Part III begins to extend the proposed strategies to nonlinear and large-scale systems and covers their application in the fields of renewable energy, robotics and networked systems. The strategies presented are applicable to a broad range of control problems, because in the absence of faults the FE-based FTC naturally reverts to conventional observer-based control. The book is a useful resource for researchers and engineers working in the area of fault-tolerant control systems, and supplementary material for a graduate- or postgraduate-level course on fault diagnosis and FTC. Advances in Industrial Control reports and encourages the transfer of technology in control engineering. The rapid development of control technology has an impact on all areas of the control discipline. The series offers an opportunity for researchers to present an extended exposition of new work in all aspects of industrial control.
You may like...
- Numbers, Hypotheses & Conclusions - A… by Colin Tredoux, Kevin Durrheim (Paperback)
- The Oxford Handbook of Functional Data… by Frederic Ferraty, Yves Romain (Hardcover, R4,889)
- Statistics For Business And Economics by David Anderson, James Cochran, … (Paperback, R1,305)
- Statistics for Management and Economics by Gerald Keller, Nicoleta Gaciu (Paperback)
- Probability & Statistics for Engineers… by Ronald Walpole, Raymond Myers, … (Paperback, R2,759)
- Mathematical Statistics with… by William Mendenhall, Dennis Wackerly, … (Paperback)
- Time Series Analysis - With Applications… by Jonathan D. Cryer, Kung-Sik Chan (Hardcover, R2,795)