This thesis presents the application of the non-perturbative, or functional, renormalization group to the physics of critical stationary states in systems out of equilibrium. Two different systems are studied. The first is the diffusive epidemic process, a stochastic process which models the propagation of an epidemic within a population. This model exhibits a phase transition, peculiar to out-of-equilibrium systems, between a stationary state where the epidemic is extinct and one where it survives. The present study helps to clarify subtle issues about the underlying symmetries of this process and the possible universality classes of its phase transition. The second system is fully developed homogeneous, isotropic and incompressible turbulence. The stationary state of this driven-dissipative system shows an energy cascade whose phenomenology is complex, with partial scale invariance intertwined with what is called intermittency. In this work, analytical expressions for the space-time dependence of multi-point correlation functions of the turbulent state in two and three dimensions are derived. This result is noteworthy in that it does not rely on any phenomenological input beyond the Navier-Stokes equation and that it becomes exact in the physically relevant limit of large wave-numbers. The obtained correlation functions show how scale invariance is broken in a subtle way, related to intermittency corrections.
This open access book demonstrates how data quality issues affect all surveys and proposes methods that can be used to deal with the observable components of survey error in a statistically sound manner. It begins by profiling the post-Apartheid period in South Africa's history, when the sampling frame and survey methodology for household surveys were undergoing periodic changes due to the changing geopolitical landscape in the country. It shows how different components of error had disproportionate magnitudes in different survey years, including coverage error, sampling error, nonresponse error, measurement error, processing error and adjustment error. The parameters of interest concern the earnings distribution, but the discussion is generalizable to any question in a random sample survey of households or firms. The book then investigates questionnaire design and item nonresponse by building a response propensity model for the employee income question in two South African labour market surveys: the October Household Survey (OHS, 1997-1999) and the Labour Force Survey (LFS, 2000-2003). This time period isolates a period of changing questionnaire design for the income question. Finally, the book is concerned with how to analyse employee income data with a mixture of continuous data, bounded response data and nonresponse. A variable with this mixture of data types is called coarse data. Because the income question consists of two parts -- an initial, exact income question and a bounded income follow-up question -- the resulting statistical distribution of employee income is both continuous and discrete. The book shows researchers how to appropriately deal with coarse income data using multiple imputation. The take-home message is that researchers have a responsibility to treat data quality concerns in a statistically sound manner, rather than making adjustments to public-use data in arbitrary ways, often underpinned by indefensible assumptions about an implicit, unobservable loss function in the data. The demonstration of how this can be done provides a replicable concept map with applicable methods that can be used in any sample survey.
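The coarse-data idea lends itself to a small illustration. The following is a minimal sketch, assuming a lognormal earnings model with hypothetical bracket bounds and toy values; it is not the book's own code, but it shows the basic mechanics of multiply imputing exact values for bracketed and missing income reports.

```python
# Minimal multiple-imputation sketch for "coarse" income data: some respondents
# report an exact amount, some only a bracket [lower, upper), and some do not
# respond at all.  The lognormal model, bracket bounds and values below are
# illustrative assumptions, not the book's specification.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

exact = np.array([1800., 2500., 3200., 4100., 5200., 8000., 12000.])  # exact reports
brackets = [(1000., 3000.), (3000., 6000.), (6000., 11000.)]          # hypothetical bounds
n_missing = 2                                                         # item nonresponse
M = 5                                                                 # completed data sets

# Step 1: fit a lognormal earnings model to the exactly observed incomes.
mu, sigma = np.log(exact).mean(), np.log(exact).std(ddof=1)

def draw_truncated_lognormal(lo, hi):
    """Draw from the fitted lognormal restricted to [lo, hi) by inverse-CDF."""
    a, b = stats.norm.cdf((np.log([lo, hi]) - mu) / sigma)
    u = rng.uniform(a, b)
    return float(np.exp(mu + sigma * stats.norm.ppf(u)))

# Step 2: build M completed data sets; analyses would be run on each and the
# results combined with Rubin's rules.
completed = []
for _ in range(M):
    imputed_brackets = [draw_truncated_lognormal(lo, hi) for lo, hi in brackets]
    imputed_missing = np.exp(rng.normal(mu, sigma, size=n_missing)).tolist()
    completed.append(np.concatenate([exact, imputed_brackets, imputed_missing]))

print([round(np.median(d), 1) for d in completed])  # median earnings per completed set
```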
This book is a collection of conference proceedings mainly concerned with the problem class of nonlinear transport/diffusion/reaction systems, chief amongst these being the Navier-Stokes equations, porous-media flow problems and semiconductor-device equations. Of particular interest are unsolved problems which pose open questions arising from applications, and an assessment of the numerous methods used to treat them. A fundamental aim is to raise the overall awareness of a broad range of topical issues in scientific computing and numerical analysis, including multispecies/multiphysics problems, discretisation methods for nonlinear systems, mesh generation, adaptivity, linear algebraic solvers and preconditioners, and portable parallelisation.
Adequate health and health care are no longer possible without proper data supervision by modern machine learning methodologies such as cluster models, neural networks, and other data mining methodologies. This book is the first complete overview of machine learning methodologies for the medical and health sector. It was written as a training companion and as a must-read, not only for physicians and students, but also for anyone involved in the process and progress of health and health care. In this second edition the authors have removed the textual errors of the first edition. The improved tables of the first edition have also been replaced with the original tables from the software programs as applied, because, unlike the former, the latter were free of error and more familiar to readers. The main purpose of the first edition was to provide stepwise analyses of the novel methods from data examples, but background information and clinical relevance information may have been somewhat lacking; therefore, each chapter now contains a section entitled "Background Information". Machine learning may be more informative, and may provide better sensitivity of testing, than traditional analytic methods. In the second edition, machine learning is applied not only to the analysis of observational clinical data, but also to that of controlled clinical trials. Unlike the first edition, the second edition has drawings in full color, providing a helpful extra dimension to the data analysis. Several machine learning methodologies not yet covered in the first edition, but increasingly important today, have been included in this updated edition, for example negative binomial and Poisson regressions, sparse canonical analysis, Firth's bias-adjusted logistic analysis, omics research, and eigenvalues and eigenvectors.
Expert practical and theoretical coverage of runs and scans. This volume presents both theoretical and applied aspects of runs and scans, and illustrates their important role in reliability analysis through various applications from science and engineering. Runs and Scans with Applications presents new and exciting content in a systematic and cohesive way in a single comprehensive volume, complete with relevant approximations and explanations of some limit theorems. The authors provide detailed discussions of both classical and current problems in the field.
Runs and Scans with Applications offers broad coverage of the subject in the context of reliability and life-testing settings and serves as an authoritative reference for students and professionals alike.
This book is dedicated to the systematization and development of models, methods, and algorithms for queuing systems with correlated arrivals. After first setting up the basic tools needed for the study of queuing theory, the authors concentrate on complicated systems: multi-server systems with phase type distribution of service time or single-server queues with arbitrary distribution of service time or semi-Markovian service. They pay special attention to practically important retrial queues, tandem queues, and queues with unreliable servers. Mathematical models of networks and queuing systems are widely used for the study and optimization of various technical, physical, economic, industrial, and administrative systems, and this book will be valuable for researchers, graduate students, and practitioners in these domains.
Robust Integration of Model-Based Fault Estimation and Fault-Tolerant Control is a systematic examination of methods used to overcome the inevitable system uncertainties arising when a fault estimation (FE) function and a fault-tolerant controller interact as they are employed together to compensate for system faults and maintain robustly acceptable system performance. It covers the important subject of robust integration of FE and FTC with the aim of guaranteeing closed-loop stability. The reader's understanding of the theory is supported by the extensive use of tutorial examples, including some MATLAB (R)-based material available from the Springer website and by industrial-applications-based material. The text is structured into three parts: Part I examines the basic concepts of FE and FTC, providing extensive insight into the importance of and challenges involved in their integration; Part II describes five effective strategies for the integration of FE and FTC: sequential, iterative, simultaneous, adaptive-decoupling, and robust decoupling; and Part III begins to extend the proposed strategies to nonlinear and large-scale systems and covers their application in the fields of renewable energy, robotics and networked systems. The strategies presented are applicable to a broad range of control problems, because in the absence of faults the FE-based FTC naturally reverts to conventional observer-based control. The book is a useful resource for researchers and engineers working in the area of fault-tolerant control systems, and supplementary material for a graduate- or postgraduate-level course on fault diagnosis and FTC. Advances in Industrial Control reports and encourages the transfer of technology in control engineering. The rapid development of control technology has an impact on all areas of the control discipline. The series offers an opportunity for researchers to present an extended exposition of new work in all aspects of industrial control.
This book presents Statistical Learning Theory in a detailed and easy-to-understand way, using practical examples, algorithms and source code. It can be used as a textbook in graduate or undergraduate courses, for self-learners, or as a reference on the main theoretical concepts of Machine Learning. Fundamental concepts of Linear Algebra and Optimization applied to Machine Learning are provided, as well as source code in R, making the book as self-contained as possible. It starts with an introduction to Machine Learning concepts and algorithms such as the Perceptron, the Multilayer Perceptron and Distance-Weighted Nearest Neighbors, with examples, in order to provide the foundation the reader needs to understand the Bias-Variance Dilemma, the central point of Statistical Learning Theory. Afterwards, all assumptions are introduced and Statistical Learning Theory is formalized, allowing the practical study of different classification algorithms. The book then proceeds through concentration inequalities to the Generalization and Large-Margin bounds, providing the main motivations for Support Vector Machines. From there, all optimization concepts needed to implement Support Vector Machines are introduced. As a next stage of development, the book finishes with a discussion of SVM kernels as a way to study data spaces and improve classification results.
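Since the book builds up from the Perceptron before reaching the Bias-Variance Dilemma, a minimal Perceptron training loop may help fix ideas. The sketch below is illustrative only, written in Python rather than the R code that accompanies the book, and uses a hypothetical linearly separable toy data set.

```python
# Minimal perceptron sketch (illustrative only, not the book's R code):
# learn a linear separator w.x + b on a separable toy data set.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)    # separable +/-1 labels

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(100):                          # epochs
    errors = 0
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:            # misclassified point -> update
            w += lr * yi * xi
            b += lr * yi
            errors += 1
    if errors == 0:                           # converged on separable data
        break

print("training accuracy:", np.mean(np.sign(X @ w + b) == y))
```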
This book examines applications and methods for incorporating stochastic parameter variations into the optimization process in order to reduce the expense of corrective measures. The basic types of deterministic substitute problems occurring most often in practice involve (i) minimization of the expected primary costs subject to expected recourse cost constraints (reliability constraints) and the remaining deterministic constraints, e.g. box constraints, as well as (ii) minimization of the expected total costs (costs of construction, design, recourse costs, etc.) subject to the remaining deterministic constraints. After an introduction to the theory of dynamic control systems with random parameters, the major control laws are described, such as open-loop control, closed-loop (feedback) control and open-loop feedback control, the latter used for the iterative construction of feedback controls. For the approximate solution of optimization and control problems with random parameters involving expected cost/loss-type objective and constraint functions, Taylor expansion procedures and homotopy methods are considered, and examples and applications to the stochastic optimization of regulators are given. Moreover, for reliability-based analysis and optimal design problems, corresponding optimization-based limit state functions are constructed. Because of the complexity of concrete optimization/control problems and their lack of the mathematical regularity required by Mathematical Programming (MP) techniques, other optimization techniques, such as random search methods (RSM), have become increasingly important. Basic results on the convergence and convergence rates of random search methods are presented. Moreover, to improve the - sometimes very low - convergence rate of RSM, search methods based on optimal stochastic decision processes are presented: the random search procedure is embedded into a stochastic decision process for an optimal control of the probability distributions of the search variates (mutation random variables).
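Stated generically, the two deterministic substitute problems described above read roughly as follows; the notation is a common convention and not necessarily the book's own.

```latex
% (i) expected primary costs under expected recourse-cost (reliability) constraints
\min_{x \in D} \ \mathbb{E}\bigl[c_0\bigl(a(\omega), x\bigr)\bigr]
\quad \text{s.t.} \quad
\mathbb{E}\bigl[c_j\bigl(a(\omega), x\bigr)\bigr] \le c_j^{\max}, \qquad j = 1, \dots, m;

% (ii) expected total costs under the remaining deterministic constraints only
\min_{x \in D} \ \mathbb{E}\Bigl[c_0\bigl(a(\omega), x\bigr)
  + \sum_{j=1}^{m} q_j \, c_j\bigl(a(\omega), x\bigr)\Bigr].
```

Here x ranges over the deterministic feasible set D (e.g. box constraints), a(omega) collects the random parameters, c_0 denotes the primary costs, and the c_j are the recourse costs with cost factors q_j.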
There has been extensive research in the past twenty years devoted to a better understanding of the stable and other closely related infinitely divisible models. The late Professor Stamatis Cambanis, a distinguished educator and researcher, played a special leadership role in the development of these fields from the early seventies until his untimely death in April 1995. This commemorative volume honoring Stamatis Cambanis consists of a collection of research articles devoted to reviewing the state of the art in rapidly developing research areas in Stochastic Processes and to exploring new directions of research. The volume is a tribute to the life and work of Stamatis by his students, friends, and colleagues, whose personal and professional lives he deeply touched through his generous insights and dedication to his profession.
This book focuses on recent advances, approaches, theories and applications related to mixture models. In particular, it presents recent unsupervised and semi-supervised frameworks that consider mixture models as their main tool. The chapters consider mixture models in connection with several interesting and challenging problems such as parameter estimation, model selection and feature selection. The goal of this book is to summarize the recent advances and modern approaches related to these problems. Each contributor presents novel research, a practical study, novel applications based on mixture models, or a survey of the literature. The book reports advances on classic problems in mixture modeling such as parameter estimation, model selection, and feature selection; presents theoretical and practical developments in mixture-based modeling and their importance in different applications; and discusses perspectives and challenging future work related to mixture modeling.
This book provides a coherent framework for understanding shrinkage estimation in statistics. The term refers to modifying a classical estimator by moving it closer to a target which could be known a priori or arise from a model. The goal is to construct estimators with improved statistical properties. The book focuses primarily on point and loss estimation of the mean vector of multivariate normal and spherically symmetric distributions. Chapter 1 reviews the statistical and decision theoretic terminology and results that will be used throughout the book. Chapter 2 is concerned with estimating the mean vector of a multivariate normal distribution under quadratic loss from a frequentist perspective. In Chapter 3 the authors take a Bayesian view of shrinkage estimation in the normal setting. Chapter 4 introduces the general classes of spherically and elliptically symmetric distributions. Point and loss estimation for these broad classes are studied in subsequent chapters. In particular, Chapter 5 extends many of the results from Chapters 2 and 3 to spherically and elliptically symmetric distributions. Chapter 6 considers the general linear model with spherically symmetric error distributions when a residual vector is available. Chapter 7 then considers the problem of estimating a location vector which is constrained to lie in a convex set. Much of the chapter is devoted to one of two types of constraint sets, balls and polyhedral cones. In Chapter 8 the authors focus on loss estimation and data-dependent evidence reports. Appendices cover a number of technical topics including weakly differentiable functions; examples where Stein's identity doesn't hold; Stein's lemma and Stokes' theorem for smooth boundaries; harmonic, superharmonic and subharmonic functions; and modified Bessel functions.
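The canonical instance of such shrinkage, and a useful mental anchor for Chapters 2 and 3 (recalled here from general knowledge rather than quoted from the book), is the James-Stein estimator of a multivariate normal mean, which pulls the observation toward the origin:

```latex
\hat{\theta}_{\mathrm{JS}}(X) \;=\; \Bigl(1 - \frac{(p-2)\,\sigma^{2}}{\lVert X \rVert^{2}}\Bigr) X,
\qquad X \sim \mathcal{N}_p(\theta, \sigma^{2} I_p).
```

For p >= 3 this estimator has uniformly smaller quadratic risk than the maximum likelihood estimator X itself; replacing the origin by another target, known a priori or arising from a model, shrinks toward that prior guess instead.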
Providing a clear explanation of the fundamental theory of time series analysis and forecasting, this book couples theory with applications of two popular statistical packages--SAS and SPSS. The text examines moving average, exponential smoothing, Census X-11 deseasonalization, ARIMA, intervention, transfer function, and autoregressive error models and has brief discussions of ARCH and GARCH models. The book features treatments of forecast improvement with regression and autoregression combination models and model and forecast evaluation, along with a sample size analysis for common time series models to attain adequate statistical power. To enhance the book's value as a teaching tool, the data sets and programs used in the book are made available on the Academic Press Web site. The careful linkage of the theoretical constructs with the practical considerations involved in utilizing the statistical packages makes it easy for the user to properly apply these techniques.
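For readers who want a feel for the simplest of the methods listed above without opening SAS or SPSS, here is a minimal sketch of simple exponential smoothing in Python; the smoothing constant and the toy demand series are hypothetical, and this is not the book's own code.

```python
# Simple exponential smoothing sketch (illustrative; the book itself works
# through SAS and SPSS).  The smoothing constant alpha is a hypothetical
# choice, not a fitted value.
def ses(series, alpha=0.3):
    """Return the one-step-ahead smoothed forecasts for each point in `series`."""
    level = series[0]                             # initialise the level at the first value
    forecasts = [level]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level   # update the smoothed level
        forecasts.append(level)
    return forecasts

demand = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119]
print(round(ses(demand)[-1], 2))                  # forecast for the next period
```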
With the diversification of Internet services and the increase in mobile users, efficient management of network resources has become an extremely important issue in the field of wireless communication networks (WCNs). Adaptive resource management is an effective tool for improving the economic efficiency of WCN systems as well as network design and construction, especially in view of the surge in demand from mobile devices. This book presents modelling methods based on queueing theory and Markov processes for a wide variety of WCN systems, as well as precise and approximate analytical solution methods for the numerical evaluation of system performance. It is the first book to provide an overview of the numerical analyses that can be obtained by applying queueing theory, traffic theory and other analytical methods to various WCN systems. It also discusses recent advances in the resource management of WCNs, such as broadband wireless access networks, cognitive radio networks, and green cloud computing. It assumes a basic understanding of computer networks and queueing theory, and familiarity with stochastic processes is also recommended. The analysis methods presented in this book are useful for first-year graduate or senior students in computer science and communication engineering. Providing information on network design and management, performance evaluation, queueing theory, game theory, intelligent optimization, and operations research, the book is also a valuable reference resource for researchers, engineers, students, analysts, managers and anyone in the industry interested in WCN system modelling, performance analysis and numerical evaluation.
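As a small, self-contained taste of the kind of queueing calculation used when dimensioning resources in such networks (an illustration only, not material from the book), the sketch below computes the Erlang-B blocking probability of a loss system with c channels and offered traffic a Erlangs, using the standard numerically stable recursion; the channel count and traffic load are hypothetical.

```python
# Erlang-B blocking probability via the standard recursion
#   B(0, a) = 1,   B(c, a) = a * B(c-1, a) / (c + a * B(c-1, a)).
# Illustrative sketch only; the parameter values are hypothetical.
def erlang_b(channels: int, traffic: float) -> float:
    b = 1.0
    for c in range(1, channels + 1):
        b = traffic * b / (c + traffic * b)
    return b

# e.g. a cell with 30 channels offered 25 Erlangs of traffic
print(f"blocking probability: {erlang_b(30, 25.0):.4f}")
```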
Molecular-Genetic and Statistical Techniques for Behavioral and Neural Research presents the most exciting molecular and recombinant DNA techniques used in the analysis of brain function and behavior, a critical piece of the puzzle for clinicians, scientists, course instructors and advanced undergraduate and graduate students. Chapters examine neuroinformatics, genetic and neurobehavioral databases and data mining, also providing an analysis of natural genetic variation and principles and applications of forward (mutagenesis) and reverse genetics (gene targeting). In addition, the book discusses gene expression and its role in brain function and behavior, along with ethical issues in the use of animals in genetics testing. Written and edited by leading international experts, this book provides a clear presentation of the frontiers of basic research as well as translationally relevant techniques that are used by neurobehavioral geneticists.
This book discusses risk management, product pricing, capital management and Return on Equity comprehensively and seamlessly. Strategic planning, including the required quantitative methods, is an essential part of bank management and control. A thorough introduction to the advanced methods of risk management for Credit Risk, Counterparty Credit Risk, Market Risk, Operational Risk and Risk Aggregation is provided. In addition, directly applicable concepts and data such as macroeconomic scenarios for strategic planning and stress testing as well as detailed scenarios for Operational Risk and advanced concepts for Credit Risk are presented in straightforward language. The book highlights the implications and chances of the Basel III and Basel IV implementations (2022 onwards), especially in terms of capital management and Return on Equity. A wealth of essential background information from practice, international observations and comparisons, along with numerous illustrative examples, make this book a useful resource for established and future professionals in bank management, risk management, capital management, controlling and accounting.
This book presents some of the most recent and advanced statistical methods used to analyse environmental and climate data, and addresses the spatial and spatio-temporal dimensions of the phenomena studied, the multivariate complexity of the data, and the necessity of considering uncertainty sources and propagation. The topics covered include: detecting disease clusters, analysing harvest data, change point detection in ground-level ozone concentration, modelling atmospheric aerosol profiles, predicting wind speed, precipitation prediction and analysing spatial cylindrical data. The volume presents revised versions of selected contributions submitted at the joint TIES-GRASPA 2017 Conference on Climate and Environment, which was held at the University of Bergamo, Italy. As it is chiefly intended for researchers working at the forefront of statistical research in environmental applications, readers should be familiar with the basic methods for analysing spatial and spatio-temporal data.
This book covers methods of Mathematical Morphology to model and simulate random sets and functions (scalar and multivariate). The introduced models concern many physical situations in heterogeneous media, where a probabilistic approach is required, like fracture statistics of materials, scaling up of permeability in porous media, electron microscopy images (including multispectral images), rough surfaces, multi-component composites, biological tissues, textures for image coding and synthesis. The common feature of these random structures is their domain of definition in n dimensions, requiring more general models than standard Stochastic Processes. The main topics of the book cover an introduction to the theory of random sets, random space tessellations, Boolean random sets and functions, space-time random sets and functions (Dead Leaves, Sequential Alternate models, Reaction-Diffusion), prediction of effective properties of random media, and probabilistic fracture theories.
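A minimal sketch of the simplest of these models, the Boolean random set, may help make the idea concrete: grains (here discs with exponentially distributed radii) are attached to the points of a homogeneous Poisson process and their union is rasterised onto a grid. The parameters and the disc-shaped grains below are illustrative assumptions, not examples from the book.

```python
# Minimal 2-D Boolean random set sketch: union of random discs centred at
# the points of a homogeneous Poisson process (illustrative parameters).
import numpy as np

rng = np.random.default_rng(2)
size, intensity, mean_radius = 256, 2e-3, 6.0                # hypothetical parameters

n_points = rng.poisson(intensity * size * size)              # Poisson number of germs
centres = rng.uniform(0, size, size=(n_points, 2))           # germ locations
radii = rng.exponential(mean_radius, size=n_points)          # random grain radii

yy, xx = np.mgrid[0:size, 0:size]
image = np.zeros((size, size), dtype=bool)
for (cx, cy), r in zip(centres, radii):
    image |= (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2        # paint each grain

print("area fraction:", image.mean())
# For a Boolean model the expected area fraction is 1 - exp(-intensity * E[pi R^2]).
```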
This book explores the concept of complementation in the adjectival domain of English grammar. Alternation between non-finite complements, especially to infinitives and gerundial complements, has been investigated intensively on the basis of large corpora in the last few years. With very few exceptions, however, such work has hitherto been based on univariate analysis methods. Using multivariate analysis, the authors present methodologically innovative case studies examining a large array of explanatory factors potentially impacting complement choice in cases of alternation. This approach yields more precise information on the impact of each factor on complement choice as well as on interactions between different explanatory factors. The book thus presents a methodologically new perspective on the study of the system of non-finite complementation in recent English and variation within that system, and will be relevant to academics and students with an interest in English grammar, predicate complementation, and statistical approaches to language.
For courses in introductory statistics. Classic, yet contemporary; theoretical, yet applied -- McClave & Sincich's Statistics gives you the best of both worlds. This text offers a trusted, comprehensive introduction to statistics that emphasises inference and integrates real data throughout. The authors stress the development of statistical thinking, the assessment of credibility, and the value of the inferences made from data. This edition is extensively revised with an eye on clearer, more concise language throughout the text and in the exercises. Ideal for one- or two-semester courses in introductory statistics, this text assumes a mathematical background of basic algebra. Flexibility is built in for instructors who teach a more advanced course, with optional footnotes about calculus and the underlying theory.
This richly illustrated book provides an easy-to-read introduction to the challenges of organizing and integrating modern data worlds, explaining the contribution of public statistics and the ISO standard SDMX (Statistical Data and Metadata Exchange). As such, it is a must for data experts as well as those aspiring to become one. Today, exponentially growing data worlds increasingly determine our professional and private lives. The rapid increase in the amount of globally available data, fueled by search engines and social networks but also by new technical possibilities such as Big Data, offers great opportunities. But whatever the undertaking - driving the blockchain revolution or making smartphones even smarter - success will be determined by how well it is possible to integrate, i.e. to collect, link and evaluate, the required data. One crucial factor in this is the introduction of a cross-domain order system in combination with a standardization of the data structure. Using everyday examples, the authors show how the concepts of statistics provide the basis for the universal and standardized presentation of any kind of information. They also introduce the international statistics standard SDMX, describing the profound changes it has made possible and the related order system for the international statistics community.
This book provides practical applications of doubly classified models by using R syntax to generate the models. It also presents these models in symbolic tables so as to cater to those who are not mathematically inclined, while numerous examples throughout the book illustrate the concepts and their applications. For those who are not aware of this modeling approach, it serves as a good starting point to acquire a basic understanding of doubly classified models. It is also a valuable resource for academics, postgraduate students, undergraduates, data analysts and researchers who are interested in examining square contingency tables.
The first edition of Theory of Rank Tests (1967) has been the precursor to a unified and theoretically motivated treatise on the basic theory of tests based on the ranks of the sample observations. For more than 25 years, it helped raise a generation of statisticians, cultivating their theoretical research in this fertile area as well as their use of these tools in application-oriented research. The present edition aims to revive this classical text not only by updating its findings but also by incorporating several other important areas which were either not properly developed before 1965 or have gone through an evolutionary development during the past 30 years. This edition therefore aims to fulfill the needs of academic as well as professional statisticians who want to pursue nonparametrics in their academic projects, consultation, and applied research work.
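As a concrete reminder of the kind of statistic whose theory the book develops (a generic sketch, not an excerpt from the text), the two-sample Wilcoxon rank-sum statistic is simply the sum of the ranks of one sample within the pooled ordering, compared against its null mean and variance; the toy data below are hypothetical and assumed free of ties.

```python
# Wilcoxon rank-sum statistic with its large-sample normal approximation
# (illustrative sketch; assumes no ties in the pooled sample).
import numpy as np
from scipy import stats

x = np.array([14.2, 15.1, 13.8, 16.0, 14.9])
y = np.array([12.9, 13.1, 14.4, 12.5, 13.6, 13.0])

pooled = np.concatenate([x, y])
ranks = stats.rankdata(pooled)          # ranks of the pooled observations
w = ranks[: len(x)].sum()               # rank sum of the first sample

m, n = len(x), len(y)
mean_w = m * (m + n + 1) / 2            # null mean of W
var_w = m * n * (m + n + 1) / 12        # null variance of W (no ties)
z = (w - mean_w) / np.sqrt(var_w)
p_value = 2 * stats.norm.sf(abs(z))

print(f"W = {w:.1f}, z = {z:.2f}, two-sided p ~ {p_value:.3f}")
```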
You may like...

Distributed and Parallel Systems - In…
Peter Kacsuk, Robert Lovas, …
Hardcover
R2,996 (Discovery Miles 29 960)

Formal and Adaptive Methods for…
Anatoliy Doroshenko, Olena Yatsenko
Hardcover
R5,784 (Discovery Miles 57 840)