The purpose of this book is to thoroughly prepare the reader for applied research in clustering. Cluster analysis comprises a class of statistical techniques for classifying multivariate data into groups or clusters based on their similar features. Clustering is now widely used in several domains of research, such as social sciences, psychology, and marketing, highlighting its multidisciplinary nature. This book provides an accessible and comprehensive introduction to clustering and offers practical guidelines for applying clustering tools through carefully chosen real-life datasets and extensive data analyses. The procedures addressed in this book include traditional hard clustering methods and up-to-date developments in soft clustering. Attention is paid to practical examples and applications through the open source statistical software R. Commented R code and output for conducting complete cluster analyses, step by step, are provided. The book is intended for researchers interested in applying clustering methods. Basic notions on theoretical issues and on R are provided so that professionals as well as novices with little or no background in the subject will benefit from the book.
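As a flavour of the hard clustering methods the book covers, here is a minimal k-means sketch in plain Python (the book itself works in R); the two-dimensional toy data and the farthest-point initialisation are illustrative choices, not taken from the book:

```python
def dist2(p, q):
    """Squared Euclidean distance between two points."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=100):
    # Farthest-point initialisation: deterministic and well spread out.
    centroids = [points[0]]
    while len(centroids) < k:
        centroids.append(max(points, key=lambda p: min(dist2(p, c) for c in centroids)))
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist2(p, centroids[i]))].append(p)
        # Update step: each centroid moves to the mean of its cluster.
        new = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centroids[i]
               for i, cl in enumerate(clusters)]
        if new == centroids:   # converged
            break
        centroids = new
    return centroids, clusters

# Two well-separated groups in the plane.
data = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (5.0, 5.1), (5.2, 5.0), (5.1, 4.9)]
centroids, clusters = kmeans(data, 2)
```

On this toy data the algorithm recovers the two groups in a couple of iterations; soft clustering methods replace the hard assignment step with membership weights.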
This book concentrates on linear regression, path analysis, and logistic regression, the statistical techniques most widely used for testing causal relationships. Its emphasis is on the concepts and applications of the techniques, using simple examples that require no advanced mathematical knowledge. It shows that multiple regression analysis can accurately reconstruct the causal relationships between phenomena, and so can be used to test hypotheses about causal relationships between variables. It demonstrates that the potential effects of each independent variable on the dependent variable are not limited to direct and indirect effects: path analysis isolates the pure effect of each independent variable on the dependent variable, so the unique contribution of each independent variable to the variation of the dependent variable can be identified. It is an advanced statistical text for graduate students in the social and behavioral sciences, and also serves as a reference for professionals and researchers.
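The decomposition of effects described above can be illustrated numerically. In a single-mediator path model fitted by least squares, the total effect of x on y equals the direct effect plus the product of the two paths through the mediator; the toy data below are invented for illustration:

```python
def centered(v):
    m = sum(v) / len(v)
    return [a - m for a in v]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Toy data: x -> m -> y with an additional direct path x -> y.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
m = [2.0, 3.0, 5.0, 4.0, 6.0]
y = [3.0, 5.0, 8.0, 7.0, 10.0]
xc, mc, yc = centered(x), centered(m), centered(y)

a = dot(xc, mc) / dot(xc, xc)            # path x -> m
c_total = dot(xc, yc) / dot(xc, xc)      # total effect of x on y (simple regression)

# Multiple regression of y on x and m (closed form for two predictors).
det = dot(xc, xc) * dot(mc, mc) - dot(xc, mc) ** 2
c_direct = (dot(mc, mc) * dot(xc, yc) - dot(xc, mc) * dot(mc, yc)) / det  # direct x -> y
b = (dot(xc, xc) * dot(mc, yc) - dot(xc, mc) * dot(xc, yc)) / det         # path m -> y

indirect = a * b   # effect of x transmitted through the mediator m
```

For ordinary least squares this identity, total = direct + indirect, holds exactly whatever the data, which is the algebra behind the path-tracing rules.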
This contributed volume applies spatial and space-time econometric methods to spatial interaction modeling. The first part of the book addresses general cutting-edge methodological questions in spatial econometric interaction modeling, which concern aspects such as coefficient interpretation, constrained estimation, and scale effects. The second part deals with technical solutions to particular estimation issues, such as intraregional flows, Bayesian PPML and VAR estimation. The final part presents a number of empirical applications, ranging from interregional tourism competition and domestic trade to space-time migration modeling and residential relocation.
This is a comprehensive survey of research on the parabolic Anderson model - the heat equation with random potential, or the random walk in a random potential - from 1990 to 2015. The investigation of this model requires a combination of tools from probability (e.g. large deviations, extreme-value theory) and analysis (e.g. spectral theory for the Laplace operator with potential, variational analysis). We explain the background, the applications, the questions and the connections with other models, and formulate the most relevant results on the long-time behavior of the solution, such as quenched and annealed asymptotics for the total mass, intermittency, confinement and concentration properties, and mass flow. Furthermore, we explain the most successful proof methods and give a list of open research problems. Proofs are not detailed, but concisely outlined and commented on; the formulations of some theorems are slightly simplified for better comprehension.
This unique book develops the application of experimental statistical designs and analysis to discrete-event simulation modeling. It takes a practical perspective and orients the reader with examples of the role of simulation in modeling a system. The stages and steps for applying simulation are discussed by focusing on the important role of statistics. Examples are given about how to design an experiment using techniques such as classical designs, group screening, polynomial decomposition, and Taguchi designs. Using the statistical techniques discussed, a sound simulation model can be built and adequately tested before implementation. The book also shows how simulation results can be generalized by discussing in full the growing emphasis on simulation metamodeling. Examples of this approach are presented to show that reliable and simple models can easily be obtained. Furthermore, such models are applied within a decision framework to optimize the system of interest. This expands the power of simulation from being purely descriptive of the system to being a prescriptive model. The reader is exposed to potential problems and how such problems may be handled. Although the book discusses statistical techniques, it is written so as to be comprehensible to anyone with a basic background in statistics. The book is a good resource for consultants and simulation practitioners; it can also be used as a textbook for classes in simulation.
This book presents modern Bayesian analysis in a format that is accessible to researchers in the fields of ecology, wildlife biology, and natural resource management. Bayesian analysis has undergone a remarkable transformation since the early 1990s. Widespread adoption of Markov chain Monte Carlo techniques has made the Bayesian paradigm a viable alternative to classical statistical procedures for scientific inference. The Bayesian approach has a number of desirable qualities, three chief ones being: i) the mathematical procedure is always the same, allowing the analyst to concentrate on the scientific aspects of the problem; ii) historical information is readily used, when appropriate; and iii) hierarchical models are readily accommodated. This monograph contains numerous worked examples and the requisite computer programs. The latter are easily modified to meet new situations. A primer on probability distributions is also included because these form the basis of Bayesian inference. Researchers and graduate students in ecology and natural resource management will find this book a valuable reference.
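As a sketch of the "same mathematical procedure every time" point, here is a minimal random-walk Metropolis sampler for a binomial success probability under a flat prior; the data (7 successes in 10 trials), chain length, and step size are illustrative assumptions, not taken from the monograph:

```python
import math
import random

def log_post(theta, k=7, n=10):
    """Log posterior for a binomial likelihood with a flat Beta(1,1) prior."""
    if not 0.0 < theta < 1.0:
        return -math.inf
    return k * math.log(theta) + (n - k) * math.log(1.0 - theta)

rng = random.Random(42)
theta, draws = 0.5, []
for i in range(20000):
    proposal = theta + rng.uniform(-0.2, 0.2)   # symmetric random-walk proposal
    # Metropolis acceptance: accept with probability min(1, posterior ratio).
    if rng.random() < math.exp(min(0.0, log_post(proposal) - log_post(theta))):
        theta = proposal
    if i >= 2000:                               # discard burn-in
        draws.append(theta)

posterior_mean = sum(draws) / len(draws)
```

Here the posterior is known in closed form, Beta(8, 4) with mean 8/12, so the chain's average can be checked against the exact answer; in realistic hierarchical models the same mechanical recipe applies even when no closed form exists.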
This book provides a concise introduction to the mathematical foundations of time series analysis, with an emphasis on mathematical clarity. The text is reduced to the essential logical core, mostly using the symbolic language of mathematics, thus enabling readers to very quickly grasp the essential reasoning behind time series analysis. It appeals to anybody wanting to understand time series in a precise, mathematical manner. It is suitable for graduate courses in time series analysis but is equally useful as a reference work for students and researchers alike.
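To give a concrete flavour of the subject, a minimal Python sketch (not from the book) simulates a stationary AR(1) process x_t = φ·x_{t-1} + ε_t and recovers φ from the lag-1 sample autocorrelation:

```python
import random

rng = random.Random(1)
phi_true = 0.7
x = [0.0]
for _ in range(5500):
    x.append(phi_true * x[-1] + rng.gauss(0.0, 1.0))   # AR(1) recursion
x = x[500:]                       # drop the transient so the series is near-stationary

n = len(x)
mean = sum(x) / n
num = sum((x[t] - mean) * (x[t - 1] - mean) for t in range(1, n))
den = sum((v - mean) ** 2 for v in x)
phi_hat = num / den               # lag-1 autocorrelation = Yule-Walker estimate of phi
```

With a few thousand observations the estimate lands close to the true coefficient, a consequence of the consistency results that a mathematically rigorous treatment makes precise.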
6 Preliminaries.- 6.1 The operator of singular integration.- 6.2 The space Lp(Γ, ρ).- 6.3 Singular integral operators.- 6.4 The spaces Lp+(Γ, ρ), Lp-(Γ, ρ) and L̊p-(Γ, ρ).- 6.5 Factorization.- 6.6 One-sided invertibility of singular integral operators.- 6.7 Fredholm operators.- 6.8 The local principle for singular integral operators.- 6.9 The interpolation theorem.- 7 General theorems.- 7.1 Change of the curve.- 7.2 The quotient norm of singular integral operators.- 7.3 The principle of separation of singularities.- 7.4 A necessary condition.- 7.5 Theorems on kernel and cokernel of singular integral operators.- 7.6 Two theorems on connections between singular integral operators.- 7.7 Index cancellation and approximative inversion of singular integral operators.- 7.8 Exercises.- Comments and references.- 8 The generalized factorization of bounded measurable functions and its applications.- 8.1 Sketch of the problem.- 8.2 Functions admitting a generalized factorization with respect to a curve in Lp(Γ, ρ).- 8.3 Factorization in the spaces Lp(Γ, ρ).- 8.4 Application of the factorization to the inversion of singular integral operators.- 8.5 Exercises.- Comments and references.- 9 Singular integral operators with piecewise continuous coefficients and their applications.- 9.1 Non-singular functions and their index.- 9.2 Criteria for the generalized factorizability of power functions.- 9.3 The inversion of singular integral operators on a closed curve.- 9.4 Composed curves.- 9.5 Singular integral operators with continuous coefficients on a composed curve.- 9.6 The case of the real axis.- 9.7 Another method of inversion.- 9.8 Singular integral operators with regulated-function coefficients.- 9.9 Estimates for the norms of the operators PΓ, QΓ and SΓ.- 9.10 Singular operators on spaces Hμ°(Γ, ρ).- 9.11 Singular operators on symmetric spaces.- 9.12 Fredholm conditions in the case of arbitrary weights.- 9.13 Technical lemmas.- 9.14 Toeplitz and paired operators with piecewise continuous coefficients on the spaces lp and ℓp.- 9.15 Some applications.- 9.16 Exercises.- Comments and references.- 10 Singular integral operators on non-simple curves.- 10.1 Technical lemmas.- 10.2 A preliminary theorem.- 10.3 The main theorem.- 10.4 Exercises.- Comments and references.- 11 Singular integral operators with coefficients having discontinuities of almost periodic type.- 11.1 Almost periodic functions and their factorization.- 11.2 Lemmas on functions with discontinuities of almost periodic type.- 11.3 The main theorem.- 11.4 Operators with continuous coefficients - the degenerate case.- 11.5 Exercises.- Comments and references.- 12 Singular integral operators with bounded measurable coefficients.- 12.1 Singular operators with measurable coefficients in the space L2(Γ).- 12.2 Necessary conditions in the space L2(Γ).- 12.3 Lemmas.- 12.4 Singular operators with coefficients in Sp(Γ). Sufficient conditions.- 12.5 The Helson-Szegő theorem and its generalization.- 12.6 On the necessity of the condition a ∈ Sp.- 12.7 Extension of the class of coefficients.- 12.8 Exercises.- Comments and references.- 13 Exact constants in theorems on the boundedness of singular operators.- 13.1 Norm and quotient norm of the operator of singular integration.- 13.2 A second proof of Theorem 4.1 of Chapter 12.- 13.3 Norm and quotient norm of the operator SΓ on weighted spaces.- 13.4 Conditions for Fredholmness in spaces Lp(Γ, ρ).- 13.5 Norms and quotient norm of the operator aI + bSΓ.- 13.6 Exercises.- Comments and references.- References.
The contributions in this book survey results on combinations of probabilistic and various other classical, temporal and justification logical systems. Formal languages of these logics are extended with probabilistic operators. The aim is to provide a systematic overview and an accessible presentation of mathematical techniques used to obtain results on formalization, completeness, compactness and decidability. The book will be of value to researchers in logic and it can be used as a supplementary text in graduate courses on non-classical logics.
This book features research contributions from The Abel Symposium on Statistical Analysis for High Dimensional Data, held in Nyvagar, Lofoten, Norway, in May 2014. The focus of the symposium was on statistical and machine learning methodologies specifically developed for inference in "big data" situations, with particular reference to genomic applications. The contributors, who are among the most prominent researchers on the theory of statistics for high dimensional inference, present new theories and methods, as well as challenging applications and computational solutions. Specific themes include, among others, variable selection and screening, penalised regression, sparsity, thresholding, low dimensional structures, computational challenges, non-convex situations, learning graphical models, sparse covariance and precision matrices, semi- and non-parametric formulations, multiple testing, classification, factor models, clustering, and preselection. Highlighting cutting-edge research and casting light on future research directions, the contributions will benefit graduate students and researchers in computational biology, statistics and the machine learning community.
This book collects research papers on the philosophical foundations of probability, causality, spacetime and quantum theory. The papers are related to talks presented in six subsequent workshops organized by The Budapest-Krakow Research Group on Probability, Causality and Determinism. Coverage consists of three parts. Part I focuses on the notion of probability from a general philosophical and formal epistemological perspective. Part II applies probabilistic considerations to address causal questions in the foundations of quantum mechanics. Part III investigates the question of indeterminism in spacetime theories. It also explores some related questions, such as decidability and observation. The contributing authors are all philosophers of science with a strong background in mathematics or physics. They believe that paying attention to the finer formal details often helps avoiding pitfalls that exacerbate the philosophical problems that are in the center of focus of contemporary research. The papers presented here help make explicit the mathematical-structural assumptions that underlie key philosophical argumentations. This formally rigorous and conceptually precise approach will appeal to researchers and philosophers as well as mathematicians and statisticians.
This book discusses the state-of-the-art and open problems in computational finance. It presents a collection of research outcomes and reviews of the work from the STRIKE project, an FP7 Marie Curie Initial Training Network (ITN) project in which academic partners trained early-stage researchers in close cooperation with a broader range of associated partners, including from the private sector. The aim of the project was to arrive at a deeper understanding of complex (mostly nonlinear) financial models and to develop effective and robust numerical schemes for solving linear and nonlinear problems arising from the mathematical theory of pricing financial derivatives and related financial products. This was accomplished by means of financial modelling, mathematical analysis and numerical simulations, optimal control techniques and validation of models. In recent years the computational complexity of mathematical models employed in financial mathematics has witnessed tremendous growth. Advanced numerical techniques are now essential to the majority of present-day applications in the financial industry. Special attention is devoted to a uniform methodology for both testing the latest achievements and simultaneously educating young PhD students. Most of the mathematical codes are linked into a novel computational finance toolbox, which is provided in MATLAB and PYTHON with an open access license. The book offers a valuable guide for researchers in computational finance and related areas, e.g. energy markets, with an interest in industrial mathematics.
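As a small, self-contained example of the kind of numerical scheme the project is concerned with (the actual STRIKE toolbox is in MATLAB and Python), here is a Cox-Ross-Rubinstein binomial tree for a European call, checked against the closed-form Black-Scholes price; the contract parameters are illustrative:

```python
import math

def crr_call(S, K, r, sigma, T, n):
    """European call by backward induction on a CRR binomial tree."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)      # risk-neutral up probability
    disc = math.exp(-r * dt)
    # Terminal payoffs, indexed by the number of up-moves j.
    vals = [max(S * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]
    for step in range(n, 0, -1):              # roll back one period at a time
        vals = [disc * (p * vals[j + 1] + (1 - p) * vals[j]) for j in range(step)]
    return vals[0]

def bs_call(S, K, r, sigma, T):
    """Closed-form Black-Scholes reference price."""
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    d1 = (math.log(S / K) + (r + sigma**2 / 2.0) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

tree_price = crr_call(100.0, 100.0, 0.05, 0.2, 1.0, 500)
exact_price = bs_call(100.0, 100.0, 0.05, 0.2, 1.0)
```

The tree converges to the closed form as the number of steps grows, which is the basic validation pattern (numerical scheme against an analytic benchmark) that the project's nonlinear models generalize.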
This book provides a general discussion beneficial to librarians and library school students, and demonstrates the steps of the research process, decisions made in the selection of a statistical technique, how to program a computer to perform number crunching, how to compute those statistical techniques appearing most frequently in the literature of library and information science, and examples from the literature of the uses of different statistical techniques. The book accomplishes the following objectives: to provide an overview of the research process and to show where statistics fit in; to identify journals in library and information science most likely to publish research articles; to identify reference tools that provide access to the research literature; to show how microcomputers can be programmed to engage in number crunching; to introduce basic statistical concepts and terminology; to present basic statistical procedures that appear most frequently in the literature of library and information science and that have application to library decision making; to discuss library decision support systems and show the types of statistical techniques they can perform; and to summarize the major decisions that researchers must address in deciding which statistical techniques to employ.
Probabilistic models have much to offer to philosophy. We continually receive information from a variety of sources: from our senses, from witnesses, from scientific instruments. When considering whether we should believe this information, we assess whether the sources are independent, how reliable they are, and how plausible and coherent the information is. Bovens and Hartmann provide a systematic Bayesian account of these features of reasoning. Simple Bayesian networks allow us to model alternative assumptions about the nature of the information sources. Measurement of the coherence of information is a controversial matter: arguably, the more coherent a set of information is, the more confident we may be that its content is true, other things being equal. The authors offer a new treatment of coherence which respects this claim and shows its relevance to scientific theory choice. Bovens and Hartmann apply this methodology to a wide range of much-discussed issues regarding evidence, testimony, scientific theories and voting. "Bayesian Epistemology" is for anyone working on probabilistic methods in philosophy, and has broad implications for many other disciplines.
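The core Bayesian mechanism behind such witness models can be sketched in a few lines; the prior, hit rate, and false-alarm rate below are invented for illustration, and the conditional-independence assumption is the simplest one a Bayesian network can encode:

```python
def posterior(prior, hit, false_alarm, n_reports):
    """P(H | n independent positive reports), where each witness reports H
    with probability `hit` if H is true and `false_alarm` if H is false."""
    num = prior * hit ** n_reports
    den = num + (1.0 - prior) * false_alarm ** n_reports
    return num / den

one_witness = posterior(0.1, 0.8, 0.2, 1)    # a single partially reliable report
two_witnesses = posterior(0.1, 0.8, 0.2, 2)  # two independent agreeing reports
```

With these numbers a single report lifts the probability of H from 0.1 to about 0.31, while two agreeing independent reports lift it to 0.64: corroboration by independent, partially reliable sources is what drives the coherence effects Bovens and Hartmann analyse.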
This book presents Markov and quantum processes as two sides of a coin called generated stochastic processes. It deals with quantum processes as reversible stochastic processes generated by one-step unitary operators, while Markov processes are irreversible stochastic processes generated by one-step stochastic operators. The characteristic features of quantum processes are oscillations, interference, many stationary states in bounded systems, and possible asymptotic stationary scattering states in open systems, while the characteristic feature of Markov processes is relaxation to a single stationary state. Quantum processes apply to systems where all variables that control reversibility are taken as relevant variables, while Markov processes emerge when some of those variables cannot be followed and are thus irrelevant to the dynamic description; their absence renders the dynamics irreversible. A further aim is to demonstrate that almost any subdiscipline of theoretical physics can conceptually be put into the context of generated stochastic processes. Classical mechanics and classical field theory are deterministic processes which emerge when fluctuations in relevant variables are negligible. Quantum mechanics and quantum field theory consider genuine quantum processes. Equilibrium and non-equilibrium statistics apply to the regime where relaxing Markov processes emerge from quantum processes by omission of a large number of uncontrollable variables. Systems with many variables often self-organize in such a way that only a few slow variables can serve as relevant variables. Symmetries and topological classes are essential in identifying such relevant variables. The third aim of this book is to provide conceptually general methods of solution which can serve as starting points for finding relevant variables and applying best-practice approximation methods. Such methods are available through generating functionals.
The intended reader is a graduate student who has already taken a course in quantum theory and equilibrium statistical physics, including the mathematics of spectral analysis (eigenvalues, eigenvectors, Fourier and Laplace transforms). The reader should be open to a unifying view of several topics.
This book was written to serve as a graduate-level textbook for special topics classes in mathematics, statistics, and economics, to introduce these topics to other researchers, and for use in short courses. It is an introduction to the theory of majorization and related notions, and contains detailed material on economic applications of majorization and the Lorenz order, investigating the theoretical aspects of these two interrelated orderings. Revising and expanding on an earlier monograph, Majorization and the Lorenz Order: A Brief Introduction, the authors provide a straightforward development and explanation of majorization concepts, addressing the historical development of the topics and providing up-to-date coverage of families of Lorenz curves. The exposition of multivariate Lorenz orderings sets it apart from existing treatments of these topics. Mathematicians, theoretical statisticians, economists, and other social scientists who already recognize the utility of the Lorenz order in income inequality contexts will find the book useful for its sound development of relevant concepts rigorously linked to both the majorization literature and the even more extensive body of research on economic applications. Barry C. Arnold, PhD, is Distinguished Professor in the Statistics Department at the University of California, Riverside. He is a Fellow of the American Statistical Association, the American Association for the Advancement of Science, and the Institute of Mathematical Statistics, and is an elected member of the International Statistical Institute. He is the author of more than two hundred publications and eight books. Jose Maria Sarabia, PhD, is Professor of Statistics and Quantitative Methods in Business and Economics in the Department of Economics at the University of Cantabria, Spain. He is the author of more than one hundred and fifty publications and ten books and is an associate editor of several journals, including TEST, Communications in Statistics, and the Journal of Statistical Distributions and Applications.
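A minimal sketch of the Lorenz-order machinery the book develops: when one distribution majorizes another with the same total, its Lorenz curve lies below, and the Gini coefficient orders them accordingly (toy incomes invented for illustration):

```python
def lorenz(incomes):
    """Points of the Lorenz curve: cumulative population share vs income share."""
    xs = sorted(incomes)
    n, total = len(xs), sum(xs)
    pts, cum = [(0.0, 0.0)], 0.0
    for i, v in enumerate(xs, 1):
        cum += v
        pts.append((i / n, cum / total))
    return pts

def gini(incomes):
    """Gini coefficient via the standard rank formula on sorted incomes."""
    xs = sorted(incomes)
    n, total = len(xs), sum(xs)
    return 2.0 * sum(i * v for i, v in enumerate(xs, 1)) / (n * total) - (n + 1) / n

equal = [2, 2, 2, 2]        # perfectly equal distribution
unequal = [1, 1, 2, 4]      # same total income, but it majorizes `equal`
```

On these data the equal distribution's Lorenz curve dominates pointwise and its Gini is exactly zero, the extreme case of the inequality orderings treated in the book.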
Recent applications of evolutionary game theory in the merging fields of the mathematical and social sciences are brilliantly portrayed in this book, which highlights social physics and shows how the approach can help to quantitatively model complex human-environmental-social systems. First, readers are introduced to the fundamentals of evolutionary game theory. The two-player, two-strategy game, or the 2 x 2 game, is presented as an archetype to help understand the difficulty of cooperating for survival against defection in common social contexts. Subsequently, the book explains the theoretical background of the multi-player, two-strategy game, which may be more widely applicable than the 2 x 2 game for social dilemmas. The latest applications of 2 x 2 games are also discussed to explore how integrated reciprocity mechanisms can solve social dilemmas. In turn, the book describes two practical areas in which evolutionary game theory has been applied. The first concerns traffic flow analysis. In conventional interpretations, traffic flow can be understood by means of fluid dynamics, in which the flow of vehicles is evaluated as a continuum body. Such a simple idea, however, does not work well in reality, particularly if a driver's decision-making process is considered. Various dilemmas involve complex structures that depend primarily on traffic density, a revelation that should help establish a practical solution for reducing traffic congestion. Second, the book provides keen insights into how powerful evolutionary game theory can be in the context of epidemiology. Both approaches, quasi-analytical and multi-agent simulation, can clarify how an infectious disease such as seasonal influenza spreads across a complex social network, which is significantly affected by the public attitude toward vaccination. 
A methodology is proposed for the optimum design of a public vaccination policy incorporating subsidies to efficiently increase vaccination coverage while minimizing the social cost.
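The 2 x 2 social-dilemma setting described above can be sketched with the replicator equation; for prisoner's-dilemma payoffs (T > R > P > S, with values chosen purely for illustration) cooperation dies out regardless of the initial mix:

```python
# Prisoner's dilemma payoffs: temptation > reward > punishment > sucker.
R, S, T, P = 3.0, 0.0, 5.0, 1.0

def replicator_step(x, dt=0.01):
    """One Euler step of the replicator equation for the cooperator share x."""
    f_c = R * x + S * (1.0 - x)        # expected payoff of a cooperator
    f_d = T * x + P * (1.0 - x)        # expected payoff of a defector
    return x + dt * x * (1.0 - x) * (f_c - f_d)

x = 0.9                                 # start with 90% cooperators
for _ in range(5000):
    x = replicator_step(x)
```

Even from a 90% cooperative population the dynamics converge to all-defection, which is exactly the dilemma that the reciprocity mechanisms discussed in the book are designed to overcome.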
This thesis develops a systematic, data-based dynamic modeling framework for industrial processes in keeping with the slowness principle. Using said framework as a point of departure, it then proposes novel strategies for dealing with control monitoring and quality prediction problems in industrial production contexts. The thesis reveals the slowly varying nature of industrial production processes under feedback control, and integrates it with process data analytics to offer powerful prior knowledge that gives rise to statistical methods tailored to industrial data. It addresses several issues of immediate interest in industrial practice, including process monitoring, control performance assessment and diagnosis, monitoring system design, and product quality prediction. In particular, it proposes a holistic and pragmatic design framework for industrial monitoring systems, which delivers effective elimination of false alarms, as well as intelligent self-running by fully utilizing the information underlying the data. One of the strengths of this thesis is its integration of insights from statistics, machine learning, control theory and engineering to provide a new scheme for industrial process modeling in the era of big data.
This book generalizes and extends the available theory in robust and decentralized hypothesis testing. In particular, it presents a robust test for modeling errors that is independent of the assumptions that a sufficiently large number of samples is available and that the distance is the KL-divergence; here, the distance can be chosen from a much more general class, which includes the KL-divergence as a special case. This is then extended in several directions. A minimax robust test that is robust against both outliers and modeling errors is presented. Minimax robustness properties of the given tests are explicitly proven for both fixed-sample-size and sequential probability ratio tests. The theory of robust detection is extended to robust estimation, and the theory of robust distributed detection is extended to classes of distributions that are not necessarily stochastically bounded. It is shown that the quantization functions for the decision rules can also be chosen to be non-monotone. Finally, the book derives previously unknown theoretical bounds in minimax decentralized hypothesis testing. As a timely report on the state of the art in robust hypothesis testing, this book is mainly intended for postgraduates and researchers in the fields of electrical and electronic engineering, statistics, and applied probability. It may also be of interest to students and researchers working in classification, pattern recognition, and cognitive radio.
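The idea of a test that tolerates outliers can be illustrated with Huber's classical clipped (censored) likelihood ratio, a far simpler setting than the book's general framework; the Gaussian hypotheses and clipping level below are illustrative assumptions:

```python
def llr(x):
    """Exact log-likelihood ratio for N(1,1) versus N(0,1):
    log[phi(x-1)/phi(x)] = x - 1/2."""
    return x - 0.5

def clipped_llr(x, clip=2.0):
    """Huber-style censored log-likelihood ratio:
    any single observation contributes at most +/- clip."""
    return max(-clip, min(clip, llr(x)))

sample = [1.0, 1.5, 0.5, 1.5]
plain = sum(llr(v) for v in sample)                 # positive: decide for N(1,1)
contaminated = sample + [-100.0]                    # one gross outlier
plain_bad = sum(llr(v) for v in contaminated)       # the outlier flips the decision
robust = sum(clipped_llr(v) for v in contaminated)  # clipping limits its influence
```

The unclipped statistic is dragged far negative by a single corrupted observation, while the clipped statistic keeps the correct decision; the book's minimax theory makes this robustness precise for much broader distance measures and distributed settings.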
This book presents state-of-the-art probabilistic methods for the reliability analysis and design of engineering products and processes. It seeks to facilitate practical application of probabilistic analysis and design by providing an authoritative, in-depth, and practical description of what probabilistic analysis and design is and how it can be implemented. The text is packed with many practical engineering examples (e.g., electric power transmission systems, aircraft power generating systems, and mechanical transmission systems) and exercise problems. It is an up-to-date, fully illustrated reference suitable for both undergraduate and graduate engineering students, researchers, and professional engineers who are interested in exploring the fundamentals, implementation, and applications of probabilistic analysis and design methods.
Markov process theory is basically an extension of ordinary calculus to accommodate functions whose time evolutions are not entirely deterministic. It is a subject that is becoming increasingly important for many fields of science. This book develops the single-variable theory of both continuous and jump Markov processes in a way that should appeal especially to physicists and chemists at the senior and graduate level.
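The simplest discrete instance of a jump Markov process is a two-state chain relaxing to its stationary distribution; the transition probabilities below are illustrative:

```python
# Row-stochastic transition matrix: P[i][j] = probability of jumping i -> j.
P = [[0.9, 0.1],
     [0.5, 0.5]]

def evolve(dist, P):
    """Apply the one-step stochastic operator: new_j = sum_i dist_i * P[i][j]."""
    return [sum(dist[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]

dist = [1.0, 0.0]                 # start with certainty in state 0
for _ in range(200):              # repeated application relaxes the distribution
    dist = evolve(dist, P)
```

Solving pi·P = pi for this matrix gives the stationary distribution (5/6, 1/6), and the iteration converges to it geometrically, the discrete analogue of the relaxation behaviour the continuous theory describes.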
This book shows how to develop efficient quantitative methods to characterize neural data and extra information that reveals underlying dynamics and neurophysiological mechanisms. Written by active experts in the field, it contains an exchange of innovative ideas among researchers at both computational and experimental ends, as well as those at the interface. Authors discuss research challenges and new directions in emerging areas with two goals in mind: to collect recent advances in statistics, signal processing, modeling, and control methods in neuroscience; and to welcome and foster innovative or cross-disciplinary ideas along this line of research and discuss important research issues in neural data analysis. Making use of both tutorial and review materials, this book is written for neural, electrical, and biomedical engineers; computational neuroscientists; statisticians; computer scientists; and clinical engineers.
This book is a selection of peer-reviewed contributions presented at the third Bayesian Young Statisticians Meeting, BAYSM 2016, Florence, Italy, June 19-21. The meeting provided a unique opportunity for young researchers, M.S. students, Ph.D. students, and postdocs dealing with Bayesian statistics to connect with the Bayesian community at large, to exchange ideas, and to network with others working in the same field. The contributions develop and apply Bayesian methods in a variety of fields, ranging from the traditional (e.g., biostatistics and reliability) to the most innovative ones (e.g., big data and networks).
This book examines statistical techniques that are critically important to Chemistry, Manufacturing, and Control (CMC) activities. Statistical methods are presented with a focus on applications unique to the CMC in the pharmaceutical industry. The target audience consists of statisticians and other scientists who are responsible for performing statistical analyses within a CMC environment. Basic statistical concepts are addressed in Chapter 2 followed by applications to specific topics related to development and manufacturing. The mathematical level assumes an elementary understanding of statistical methods. The ability to use Excel or statistical packages such as Minitab, JMP, SAS, or R will provide more value to the reader. The motivation for this book came from an American Association of Pharmaceutical Scientists (AAPS) short course on statistical methods applied to CMC applications presented by four of the authors. One of the course participants asked us for a good reference book, and the only book recommended was written over 20 years ago by Chow and Liu (1995). We agreed that a more recent book would serve a need in our industry. Since we began this project, an edited book has been published on the same topic by Zhang (2016). The chapters in Zhang discuss statistical methods for CMC as well as drug discovery and nonclinical development. We believe our book complements Zhang by providing more detailed statistical analyses and examples.
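As one concrete example of a CMC-style calculation (not taken from the book), the process capability index Cpk compares the distance from the process mean to the nearer specification limit with three standard deviations of the process; the batch data and specification limits below are invented for illustration:

```python
import statistics

def cpk(data, lsl, usl):
    """Process capability index: min distance from the mean to a specification
    limit, in units of three sample standard deviations."""
    mu = statistics.mean(data)
    s = statistics.stdev(data)
    return min(usl - mu, mu - lsl) / (3.0 * s)

# Hypothetical assay results (% label claim) against limits of 99.0-101.0.
batch = [99.8, 100.1, 100.0, 99.9, 100.2, 100.0, 99.9, 100.1]
index = cpk(batch, 99.0, 101.0)
```

An index above the conventional 1.33 benchmark indicates a well-centered process comfortably inside its specification limits; this batch is well above that threshold.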