This book presents new efficient methods for optimization in realistic large-scale, multi-agent systems. These methods do not require the agents to have full information about the system; instead, each agent makes its local decisions based only on local information, possibly obtained through communication with its neighbors. The book, primarily aimed at researchers in optimization and control, considers three different information settings in multi-agent systems: oracle-based, communication-based, and payoff-based. For each of these information types, an efficient optimization algorithm is developed that leads the system to an optimal state. The optimization problems are posed without restrictive assumptions such as convexity of the objective functions, closed-form expressions for costs and utilities, or finiteness of the system's state space, and without imposing restrictions on the communication topology.
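A communication-based scheme of this flavor can be sketched in a few lines: each agent mixes its estimate with its neighbors' estimates and takes a step along its own local gradient. This is a minimal sketch only; the ring network, quadratic local costs, and diminishing step size are illustrative assumptions, not the book's specific algorithms.

```python
import numpy as np

# Each agent i minimizes f_i(x) = 0.5 * (x - c_i)^2; the global
# objective is the sum, minimized at the mean of the c_i.
c = np.array([1.0, 3.0, 8.0, 4.0])          # local cost parameters
n = len(c)

# Doubly stochastic mixing matrix for a ring of 4 agents.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

x = np.zeros(n)                              # each agent's estimate
for t in range(1, 501):
    grad = x - c                             # local gradients only
    x = W @ x - (1.0 / t) * grad             # mix with neighbors, then step

print(x, "target:", c.mean())                # all estimates near mean(c) = 4.0
```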
This richly illustrated book provides an overview of the design and analysis of experiments with a focus on non-clinical experiments in the life sciences, including animal research. It covers the most common aspects of experimental design such as handling multiple treatment factors and improving precision. In addition, it addresses experiments with large numbers of treatment factors and response surface methods for optimizing experimental conditions or biotechnological yields. The book emphasizes the estimation of effect sizes and the principled use of statistical arguments in the broader scientific context. It gradually transitions from classical analysis of variance to modern linear mixed models, and provides detailed information on power analysis and sample size determination, including 'portable power' formulas for making quick approximate calculations. In turn, detailed discussions of several real-life examples illustrate the complexities and aberrations that can arise in practice. Chiefly intended for students, teachers and researchers in the fields of experimental biology and biomedicine, the book is largely self-contained and starts with the necessary background on basic statistical concepts. The underlying ideas and necessary mathematics are gradually introduced in increasingly complex variants of a single example. Hasse diagrams serve as a powerful method for visualizing and comparing experimental designs and deriving appropriate models for their analysis. Manual calculations are provided for early examples, allowing the reader to follow the analyses in detail. More complex calculations rely on the statistical software R, but are easily transferable to other software. Though there are few prerequisites for effectively using the book, previous exposure to basic statistical ideas and the software R would be advisable.
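To make the "portable power" idea concrete, here is a back-of-the-envelope sample-size calculation using the standard normal approximation n ≈ 2(z_{1-α/2} + z_{power})²/d² per group, with d the effect size in standard-deviation units. This is a minimal sketch of the general approach; the book's own formulas may differ in detail.

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    # Normal-approximation sample size for a two-sample comparison of means.
    z = NormalDist().inv_cdf
    n = 2 * (z(1 - alpha / 2) + z(power)) ** 2 / effect_size ** 2
    return math.ceil(n)

print(n_per_group(0.5))   # ~63 per group for a medium effect (d = 0.5)
```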
This book provides a general discussion beneficial to librarians and library school students. It demonstrates the steps of the research process, the decisions made in selecting a statistical technique, how to program a computer to perform number crunching, how to compute the statistical techniques appearing most frequently in the literature of library and information science, and, through examples from that literature, the uses of different statistical techniques. The book accomplishes the following objectives: to provide an overview of the research process and to show where statistics fit in; to identify journals in library and information science most likely to publish research articles; to identify reference tools that provide access to the research literature; to show how microcomputers can be programmed to engage in number crunching; to introduce basic statistical concepts and terminology; to present basic statistical procedures that appear most frequently in the literature of library and information science and that have application to library decision making; to discuss library decision support systems and show the types of statistical techniques they can perform; and to summarize the major decisions that researchers must address in deciding which statistical techniques to employ.
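The kind of "number crunching" the book teaches can be as simple as summary statistics computed with a few lines of code. A minimal sketch, with invented reference-desk transaction counts standing in for real library data:

```python
from statistics import mean, median, stdev

# Monthly reference-desk transaction counts (illustrative data).
transactions = [212, 198, 240, 225, 263, 247, 231, 219, 254, 238, 222, 246]

print("mean:  ", round(mean(transactions), 1))
print("median:", median(transactions))
print("stdev: ", round(stdev(transactions), 1))
```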
This book presents a systematic and comprehensive treatment of various prior processes that have been developed over the past four decades for dealing with the Bayesian approach to solving selected nonparametric inference problems. This revised edition has been substantially expanded to reflect the current interest in this area. After an overview of different prior processes, it examines the now pre-eminent Dirichlet process and its variants including hierarchical processes, then addresses new processes such as dependent Dirichlet, local Dirichlet, time-varying and spatial processes, all of which exploit the countable mixture representation of the Dirichlet process. It subsequently discusses various neutral to right type processes, including gamma and extended gamma, beta and beta-Stacy processes, and then describes the Chinese Restaurant, Indian Buffet and infinite gamma-Poisson processes, which prove to be very useful in areas such as machine learning, information retrieval and featural modeling. Tailfree and Polya tree processes and their extensions form a separate chapter, while the last two chapters present the Bayesian solutions to certain estimation problems pertaining to the distribution function and its functionals, based on complete data as well as right censored data. Because of the conjugacy property of some of these processes, most solutions are presented in closed form. However, the current interest in modeling and treating large-scale and complex data also poses a problem - the posterior distribution, which is essential to Bayesian analysis, is invariably not in a closed form, making it necessary to resort to simulation. Accordingly, the book also introduces several computational procedures, such as the Gibbs sampler, Blocked Gibbs sampler and slice sampling, highlighting essential steps of algorithms while discussing specific models. In addition, it features crucial steps of proofs and derivations, explains the relationships between different processes and provides further clarifications to promote a deeper understanding. Lastly, it includes a comprehensive list of references, equipping readers to explore further on their own.
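The countable mixture representation mentioned above is easy to demonstrate via stick-breaking. A minimal sketch of a (truncated) draw from a Dirichlet process DP(alpha, H); the concentration alpha, truncation level K, and standard-normal base measure H are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, K = 2.0, 100

v = rng.beta(1, alpha, size=K)                      # stick-breaking proportions
pieces = np.concatenate(([1.0], np.cumprod(1 - v[:-1])))
weights = v * pieces                                # w_k = v_k * prod_{j<k} (1 - v_j)
atoms = rng.normal(size=K)                          # draws from the base measure H

# The DP draw is the discrete measure sum_k w_k * delta(atoms_k); sample from it:
sample = rng.choice(atoms, p=weights / weights.sum(), size=10)
print(sample)
```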
This book introduces physics students to concepts and methods of finance. Despite being perceived as quite distant from physics, finance shares a number of common methods and ideas, usually related to noise and uncertainties. Juxtaposing the key methods with applications in both physics and finance articulates both differences and common features, giving students a deeper understanding of the underlying ideas. Moreover, they acquire a number of useful mathematical and computational tools, such as stochastic differential equations, path integrals, Monte-Carlo methods, and basic cryptology. Each chapter ends with a set of carefully designed exercises enabling readers to test their comprehension.
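A minimal sketch of one such tool in action: a Monte-Carlo estimate of a European call price under geometric Brownian motion, dS = rS dt + sigma S dW. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
S0, K, r, sigma, T = 100.0, 105.0, 0.03, 0.2, 1.0
n_paths = 100_000

# Simulate terminal prices directly from the exact GBM solution.
Z = rng.standard_normal(n_paths)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

payoff = np.maximum(ST - K, 0.0)
price = np.exp(-r * T) * payoff.mean()      # discounted expected payoff
print(round(price, 2))                      # close to the Black-Scholes value (~7.13)
```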
This book is designed as a gentle introduction to the fascinating field of choice modeling and its practical implementation using the R language. Discrete choice analysis is a family of methods useful for studying individual decision-making. With strong theoretical foundations in consumer behavior, discrete choice models are used in the analysis of health policy, transportation systems, marketing, economics, public policy, political science, urban planning, and criminology, to mention just a few fields of application. The book does not assume prior knowledge of discrete choice analysis or R, but instead strives to introduce both in an intuitive way, starting from simple concepts and progressing to more sophisticated ideas. Loaded with a wealth of examples and code, the book covers the fundamentals of data and analysis in a progressive way. Readers begin with simple data operations and the underlying theory of choice analysis and conclude by working with sophisticated models including latent class logit models, mixed logit models, and ordinal logit models with taste heterogeneity. Data visualization is emphasized to explore both the input data and the results of models. This book should be of interest to graduate students, faculty, and researchers conducting empirical work using individual level choice data who are approaching the field of discrete choice analysis for the first time. In addition, it should interest more advanced modelers wishing to learn about the potential of R for discrete choice analysis. By embedding the treatment of choice modeling within the R ecosystem, readers benefit from learning about the larger R family of packages for data exploration, analysis, and visualization.
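The computational core of discrete choice analysis is compact: multinomial logit choice probabilities are a softmax of alternative utilities. The book works in R; the sketch below is Python, and its travel-mode utilities and coefficients are purely illustrative.

```python
import numpy as np

# Utility of each travel mode: V = beta_cost * cost + beta_time * time
beta_cost, beta_time = -0.08, -0.05
modes = ["car", "bus", "bike"]
cost = np.array([4.0, 2.5, 0.0])      # out-of-pocket cost
time = np.array([20.0, 35.0, 30.0])   # travel time in minutes

V = beta_cost * cost + beta_time * time
P = np.exp(V) / np.exp(V).sum()       # logit choice probabilities
for m, p in zip(modes, P):
    print(f"{m}: {p:.2f}")
```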
The book provides a comprehensive introduction and a novel mathematical foundation of the field of information geometry, with complete proofs and detailed background material on measure theory, Riemannian geometry and Banach space theory. Parametrised measure models are defined as fundamental geometric objects, which can be either finite or infinite dimensional. Based on these models, canonical tensor fields are introduced and further studied, including the Fisher metric and the Amari-Chentsov tensor, and embeddings of statistical manifolds are investigated. This novel foundation then leads to application highlights, such as generalizations and extensions of the classical uniqueness result of Chentsov or the Cramer-Rao inequality. Additionally, several new application fields of information geometry are highlighted, for instance hierarchical and graphical models, complexity theory, population genetics, or Markov Chain Monte Carlo. The book will be of interest to mathematicians working in geometry, information theory, or the foundations of statistics, as well as to statisticians and scientists interested in the mathematical foundations of complex systems.
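For orientation, the Fisher information and the Cramer-Rao inequality it underlies can be stated compactly in the scalar-parameter case; this is the standard textbook special case, while the book develops the general, possibly infinite-dimensional, setting.

```latex
% Fisher information and the Cramer-Rao bound for a scalar parameter.
\[
  I(\theta) \;=\; \mathbb{E}_{\theta}\!\left[\left(\frac{\partial}{\partial\theta}\,
    \log p(X;\theta)\right)^{2}\right],
  \qquad
  \operatorname{Var}_{\theta}\!\bigl(\hat{\theta}\bigr) \;\ge\; \frac{1}{I(\theta)}
  \quad\text{for any unbiased estimator } \hat{\theta}.
\]
```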
This volume presents some of the most influential papers published by Rabi N. Bhattacharya, along with commentaries from international experts, demonstrating his knowledge, insight, and influence in the field of probability and its applications. For more than three decades, Bhattacharya has made significant contributions in areas ranging from theoretical statistics via analytical probability theory, Markov processes, and random dynamics to applied topics in statistics, economics, and geophysics. Selected reprints of Bhattacharya's papers are divided into three sections: Modes of Approximation, Large Times for Markov Processes, and Stochastic Foundations in Applied Sciences. The accompanying articles by the contributing authors not only help to position his work in the context of other achievements, but also provide a unique assessment of the state of their individual fields, both historically and for the next generation of researchers. Rabi N. Bhattacharya: Selected Papers will be a valuable resource for young researchers entering the diverse areas of study to which Bhattacharya has contributed. Established researchers will also appreciate this work as an account of both past and present developments and challenges for the future.
This proceedings volume presents new methods and applications in applied economics, with special interest in advanced cross-section data estimation methodology. Featuring select contributions from the 2019 International Conference on Applied Economics (ICOAE 2019), held in Milan, Italy, this book explores areas such as applied macroeconomics, applied microeconomics, applied financial economics, applied international economics, applied agricultural economics, applied marketing and applied managerial economics. The International Conference on Applied Economics (ICOAE) is an annual conference that started in 2008, designed to bring together economists from different fields of applied economic research in order to share methods and ideas. Applied economics is a rapidly growing field of economics that combines economic theory with econometrics to analyze economic problems of the real world, usually with economic policy interest. In addition, there is growing interest within applied economics in cross-section data estimation methods, tests and techniques. This volume contributes to the field of applied economic research by presenting the most current work. Featuring country-specific studies, this book is of interest to academics, students, researchers, practitioners, and policy makers in applied economics, econometrics and economic policy.
This proceedings volume features top contributions in modern statistical methods from Statistics 2021 Canada, the 6th Annual Canadian Conference in Applied Statistics, held virtually on July 15-18, 2021. Papers are contributed by established and emerging scholars, covering cutting-edge and contemporary innovative techniques in statistics and data science. Major areas of contribution include Bayesian statistics; computational statistics; data science; semi-parametric regression; and stochastic methods in biology, crop science, ecology and engineering. It will be a valuable edited collection for graduate students, researchers, and practitioners in a wide array of applied statistical and data science methods.
This volume presents recent advances in the field of matrix analysis based on contributions at the MAT-TRIAD 2015 conference. Topics covered include interval linear algebra and computational complexity, Birkhoff polynomial basis, tensors, graphs, linear pencils, K-theory and statistical inference, showing the ubiquity of matrices in different mathematical areas. With a particular focus on matrix and operator theory, statistical models and computation, the International Conference on Matrix Analysis and its Applications 2015, held in Coimbra, Portugal, was the sixth in a series of conferences. Applied and Computational Matrix Analysis will appeal to graduate students and researchers in theoretical and applied mathematics, physics and engineering who are seeking an overview of recent problems and methods in matrix analysis.
This book provides engineers with a focused treatment of the mathematics needed to understand probability, random variables, and stochastic processes, which are essential mathematical disciplines in communications engineering. The author explains the basic concepts of these topics as plainly as possible so that readers with no in-depth knowledge of these mathematical topics can better appreciate their applications in real problems. Application examples are drawn from various areas of communications. Readers who want to understand the probability and stochastic processes that matter most to communications networks and systems will find that this book serves that need.
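A minimal sketch of probability at work in communications: a Monte-Carlo estimate of the bit error rate of BPSK over an AWGN channel, checked against the closed form 0.5 * erfc(sqrt(Eb/N0)). The SNR value and sample size are illustrative, and this is a generic textbook example rather than one of the book's own.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(7)
ebn0 = 10 ** (6 / 10)                        # Eb/N0 at 6 dB
n_bits = 1_000_000

bits = rng.integers(0, 2, n_bits)
symbols = 2.0 * bits - 1.0                   # BPSK mapping: 0 -> -1, 1 -> +1
received = symbols + rng.standard_normal(n_bits) / np.sqrt(2 * ebn0)
ber = np.mean((received > 0).astype(int) != bits)

print(f"simulated {ber:.2e}, theory {0.5 * erfc(sqrt(ebn0)):.2e}")
```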
This book is an introduction to the mathematical analysis of probability theory and provides some understanding of how probability is used to model random phenomena of uncertainty, specifically in the context of finance theory and applications. The integrated coverage of both basic probability theory and finance theory makes this book useful reading for advanced undergraduate students or for first-year postgraduate students in a quantitative finance course. The book provides easy and quick access to the field of theoretical finance by linking the study of applied probability and its applications to finance theory all in one place. The coverage is carefully selected to include most of the key ideas in finance of the last 50 years. The book will also serve as a handy guide for applied mathematicians and probabilists to easily access the important topics in finance theory and economics. In addition, it will be a handy book for financial economists to learn some of the more mathematical and rigorous techniques, placing their understanding of the theory on a firmer footing. It is a must-read for advanced undergraduate and graduate students who wish to work in the quantitative finance area.
This book presents the principles and methods for the practical analysis and prediction of economic and financial time series. It covers decomposition methods, autocorrelation methods for univariate time series, volatility and duration modeling for financial time series, and multivariate time series methods, such as cointegration and recursive state space modeling. It also includes numerous practical examples to demonstrate the theory using real-world data, as well as exercises at the end of each chapter to aid understanding. This book serves as a reference text for researchers, students and practitioners interested in time series, and can also be used for university courses on econometrics or computational finance.
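One univariate building block the book covers, sketched minimally: the sample autocorrelation function of a simulated AR(1) series, compared with its theoretical values phi^k. The AR coefficient and series length are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n, phi = 500, 0.7
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

def acf(series, lag):
    # Sample autocorrelation at the given lag.
    s = series - series.mean()
    return np.dot(s[:-lag], s[lag:]) / np.dot(s, s)

for k in (1, 2, 3):
    print(f"lag {k}: {acf(x, k):+.2f}  (theory {phi**k:+.2f})")
```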
This fully updated new edition of a uniquely accessible textbook/reference provides a general introduction to probabilistic graphical models (PGMs) from an engineering perspective. It features new material on partially observable Markov decision processes, causal graphical models, causal discovery and deep learning, as well as an even greater number of exercises; it also incorporates a software library for several graphical models in Python. The book covers the fundamentals for each of the main classes of PGMs, including representation, inference and learning principles, and reviews real-world applications for each type of model. These applications are drawn from a broad range of disciplines, highlighting the many uses of Bayesian classifiers, hidden Markov models, Bayesian networks, dynamic and temporal Bayesian networks, Markov random fields, influence diagrams, and Markov decision processes.
Topics and features:
* Presents a unified framework encompassing all of the main classes of PGMs
* Explores the fundamental aspects of representation, inference and learning for each technique
* Examines new material on partially observable Markov decision processes and causal graphical models
* Includes a new chapter introducing deep neural networks and their relation with probabilistic graphical models
* Covers multidimensional Bayesian classifiers, relational graphical models, and causal models
* Provides substantial chapter-ending exercises, suggestions for further reading, and ideas for research or programming projects
* Describes classifiers such as Gaussian Naive Bayes, Circular Chain Classifiers, and Hierarchical Classifiers with Bayesian Networks
* Outlines the practical application of the different techniques
* Suggests possible course outlines for instructors
This classroom-tested work is suitable as a textbook for an advanced undergraduate or a graduate course in probabilistic graphical models for students of computer science, engineering, and physics. Professionals wishing to apply probabilistic graphical models in their own field, or interested in the basis of these techniques, will also find the book to be an invaluable reference. Dr. Luis Enrique Sucar is a Senior Research Scientist at the National Institute for Astrophysics, Optics and Electronics (INAOE), Puebla, Mexico. He received the National Science Prize in 2016.
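A taste of the representation-and-inference core of PGMs, as a hedged sketch: exact inference by enumeration in a tiny three-node Bayesian network (Rain -> WetGrass <- Sprinkler). This is not the API of the book's accompanying Python library, and the probabilities are illustrative.

```python
# Prior probabilities for the parent nodes.
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.3, False: 0.7}
# P(WetGrass = True | Sprinkler, Rain)
P_wet = {(True, True): 0.99, (True, False): 0.9,
         (False, True): 0.8, (False, False): 0.05}

def joint(r, s, w):
    # Full joint probability from the network factorization.
    pw = P_wet[(s, r)] if w else 1 - P_wet[(s, r)]
    return P_rain[r] * P_sprinkler[s] * pw

# Query: P(Rain = True | WetGrass = True), by enumeration.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r in (True, False) for s in (True, False))
print(round(num / den, 3))   # ~0.413
```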
This monograph investigates violations of statistical stability of physical events, variables, and processes, and develops a new physical-mathematical theory that takes such violations into consideration: the theory of hyper-random phenomena. There are five parts. The first describes the phenomenon of statistical stability and its features, and develops methods for detecting violations of statistical stability, in particular when data are limited. The second part presents several examples of real processes of different physical nature and demonstrates the violation of statistical stability over broad observation intervals. The third part outlines the mathematical foundations of the theory of hyper-random phenomena, while the fourth develops the foundations of the mathematical analysis of divergent and many-valued functions. The fifth part contains theoretical and experimental studies of statistical laws where there is violation of statistical stability. The monograph should be of particular interest to engineers and scientists who study the phenomenon of statistical stability and use statistical methods for high-precision measurements, prediction, and signal processing over long observation intervals.
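The book's central diagnostic can be sketched in a few lines: track the running sample mean. For i.i.d. data it settles down; for a process whose mean drifts (an illustrative stand-in for violated statistical stability, not one of the book's case studies) it keeps wandering however long we observe.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
t = np.arange(1, n + 1)

series = {
    "i.i.d. noise": rng.standard_normal(n),
    "drifting mean": rng.standard_normal(n) + np.sin(np.log(t)),
}
for name, x in series.items():
    running_mean = np.cumsum(x) / t
    spread = np.ptp(running_mean[n // 2:])   # range over the later half
    print(f"{name}: late running-mean spread = {spread:.3f}")
```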
This book reports on an in-depth study of fuzzy time series (FTS) modeling. It reviews and summarizes previous research work in FTS modeling and also provides a brief introduction to other soft-computing techniques, such as artificial neural networks (ANNs), rough sets (RS) and evolutionary computing (EC), focusing on how these techniques can be integrated into different phases of the FTS modeling approach. In particular, the book describes novel methods resulting from the hybridization of FTS modeling approaches with neural networks and particle swarm optimization. It also demonstrates how a new ANN-based model can be successfully applied in the context of predicting Indian summer monsoon rainfall. Thanks to its easy-to-read style and the clear explanations of the models, the book can be used as a concise yet comprehensive reference guide to fuzzy time series modeling, and will be valuable not only for graduate students, but also for researchers and professionals working for academic, business and government organizations.
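The basic FTS recipe is simple enough to sketch: partition the universe of discourse into intervals, fuzzify each observation to its interval label, learn label-to-successor relations, and defuzzify the forecast as a midpoint average. This is a minimal first-order sketch in the spirit of Chen's classic method, not one of the book's hybrid models; the data and interval count are illustrative.

```python
import numpy as np

data = [45, 52, 48, 61, 58, 66, 70, 64, 59, 62, 68, 73]
k = 4                                        # number of fuzzy intervals
lo, hi = min(data) - 2, max(data) + 2
edges = np.linspace(lo, hi, k + 1)
mids = (edges[:-1] + edges[1:]) / 2

# Fuzzify: map each observation to its interval label.
labels = np.clip(np.searchsorted(edges, data, side="right") - 1, 0, k - 1)

# Fuzzy logical relations A_i -> {A_j, ...} from consecutive labels.
relations = {}
for a, b in zip(labels[:-1], labels[1:]):
    relations.setdefault(a, []).append(b)

last = labels[-1]
successors = relations.get(last, [last])
forecast = mids[successors].mean()           # defuzzify: mean of midpoints
print(round(forecast, 1))
```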
This monograph discusses statistics and risk estimates applied to radiation damage in the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. Efficiency of the methods presented is verified using data from radio-epidemiological studies.
Contents:
Part I - Estimation in regression models with errors in covariates
* Measurement error models
* Linear models with classical error
* Polynomial regression with known variance of classical error
* Nonlinear and generalized linear models
Part II - Radiation risk estimation under uncertainty in exposure doses
* Overview of risk models realized in the program package EPICURE
* Estimation of radiation risk under classical or Berkson multiplicative error in exposure doses
* Radiation risk estimation for persons exposed to radioiodine as a result of the Chornobyl accident
* Elements of estimating equations theory
* Consistency of efficient methods
* Efficient SIMEX method as a combination of the SIMEX method and the corrected score method
* Application of regression calibration in the model with additive error in exposure doses
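The SIMEX idea referenced in the contents is easy to illustrate: classical measurement error attenuates a regression slope, so one re-adds noise at increasing levels lambda, fits the naive estimator at each level, and extrapolates the trend back to lambda = -1. A minimal sketch with illustrative parameters (not the book's efficient SIMEX variant):

```python
import numpy as np

rng = np.random.default_rng(11)
n, beta, sigma_u = 2000, 1.0, 0.8

x = rng.standard_normal(n)                      # true covariate
y = beta * x + 0.3 * rng.standard_normal(n)     # outcome
w = x + sigma_u * rng.standard_normal(n)        # error-prone measurement

def naive_slope(cov):
    c = np.cov(cov, y)
    return c[0, 1] / c[0, 0]

lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
est = [np.mean([naive_slope(w + np.sqrt(l) * sigma_u * rng.standard_normal(n))
                for _ in range(50)]) for l in lambdas]

coef = np.polyfit(lambdas, est, 2)              # quadratic extrapolant
simex = np.polyval(coef, -1.0)                  # extrapolate to lambda = -1
print(f"naive {est[0]:.3f}, SIMEX {simex:.3f}, true {beta}")
```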
This book is intended for use in advanced graduate courses in statistics/machine learning, as well as for all experimental neuroscientists seeking to understand statistical methods at a deeper level, and theoretical neuroscientists with a limited background in statistics. It reviews almost all areas of applied statistics, from basic statistical estimation and test theory, linear and nonlinear approaches for regression and classification, to model selection and methods for dimensionality reduction, density estimation and unsupervised clustering. Its focus, however, is linear and nonlinear time series analysis from a dynamical systems perspective, based on which it aims to convey also an understanding of the dynamical mechanisms that could have generated observed time series. Further, it integrates computational modeling of behavioral and neural dynamics with statistical estimation and hypothesis testing. In this way, computational models in neuroscience are not only explanatory frameworks, but become powerful, quantitative data-analytical tools in themselves that enable researchers to look beyond the data surface and unravel underlying mechanisms. Interactive examples of most methods are provided through a package of MatLab routines, encouraging a playful approach to the subject, and providing readers with a better feel for the practical aspects of the methods covered. "Computational neuroscience is essential for integrating and providing a basis for understanding the myriads of remarkable laboratory data on nervous system functions. Daniel Durstewitz has excellently covered the breadth of computational neuroscience from statistical interpretations of data to biophysically based modeling of the neurobiological sources of those data. His presentation is clear, pedagogically sound, and readily usable by experts and beginners alike. It is a pleasure to recommend this very well crafted discussion to experimental neuroscientists as well as mathematically well-versed physicists. The book acts as a window to the issues, to the questions, and to the tools for finding the answers to interesting inquiries about brains and how they function." Henry D. I. Abarbanel, Physics and Scripps Institution of Oceanography, University of California, San Diego "This book delivers a clear and thorough introduction to sophisticated analysis approaches useful in computational neuroscience. The models described and the examples provided will help readers develop critical intuitions into what the methods reveal about data. The overall approach of the book reflects the extensive experience Prof. Durstewitz has developed as a leading practitioner of computational neuroscience." Bruno B. Averbeck
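One stepping stone on the path from statistical estimation to dynamical-systems methods is least-squares fitting of a linear autoregressive model. A minimal sketch (the book's own interactive examples use the accompanying MatLab routines; this is Python, with illustrative coefficients):

```python
import numpy as np

rng = np.random.default_rng(2)
n, a1, a2 = 1000, 0.5, -0.3
x = np.zeros(n)
for t in range(2, n):
    x[t] = a1 * x[t - 1] + a2 * x[t - 2] + rng.standard_normal()

# Regress x_t on (x_{t-1}, x_{t-2}) by ordinary least squares.
X = np.column_stack([x[1:-1], x[:-2]])
coef, *_ = np.linalg.lstsq(X, x[2:], rcond=None)
print(coef)   # close to (0.5, -0.3)
```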
This monograph discusses recent advances in ergodic theory and dynamical systems. As a mixture of survey papers on active research areas and original research papers, this volume will appeal to young and senior researchers alike.
Contents:
* Duality of the almost periodic and proximal relations
* Limit directions of a vector cocycle, remarks and examples
* Optimal norm approximation in ergodic theory
* The iterated Prisoner's Dilemma: good strategies and their dynamics
* Lyapunov exponents for conservative twisting dynamics: a survey
* Takens' embedding theorem with a continuous observable
A comprehensive guide to everything scientists need to know about data management, this book is essential for researchers who need to learn how to organize, document and take care of their own data. Researchers in all disciplines are faced with the challenge of managing the growing amounts of digital data that are the foundation of their research. Kristin Briney offers practical advice and clearly explains policies and principles, in an accessible and in-depth text that will allow researchers to understand and achieve the goal of better research data management. Data Management for Researchers includes sections on:
* The data problem - an introduction to the growing importance and challenges of using digital data in research. Covers both the inherent problems with managing digital information, as well as how the research landscape is changing to give more value to research datasets and code.
* The data lifecycle - a framework for data's place within the research process and how data's role is changing. Greater emphasis on data sharing and data reuse will not only change the way we conduct research but also how we manage research data.
* Planning for data management - covers the many aspects of data management and how to put them together in a data management plan. This section also includes sample data management plans.
* Documenting your data - an often overlooked part of the data management process, but one that is critical to good management; data without documentation are frequently unusable.
* Organizing your data - explains how to keep your data in order using organizational systems and file naming conventions. This section also covers using a database to organize and analyze content.
* Improving data analysis - covers managing information through the analysis process. This section starts by comparing the management of raw and analyzed data and then describes ways to make analysis easier, such as spreadsheet best practices. It also examines practices for research code, including version control systems.
* Managing secure and private data - many researchers are dealing with data that require extra security. This section outlines what data falls into this category and some of the policies that apply, before addressing the best practices for keeping data secure.
* Short-term storage - deals with the practical matters of storage and backup and covers the many options available. This section also goes through the best practices to ensure that data are not lost.
* Preserving and archiving your data - digital data can have a long life if properly cared for. This section covers managing data in the long term including choosing good file formats and media, as well as determining who will manage the data after the end of the project.
* Sharing/publishing your data - addresses how to make data sharing across research groups easier, as well as how and why to publicly share data. This section covers intellectual property and licenses for datasets, before ending with the altmetrics that measure the impact of publicly shared data.
* Reusing data - as more data are shared, it becomes possible to use outside data in your research. This chapter discusses strategies for finding datasets and lays out how to cite data once you have found it.
This book is designed for active scientific researchers but it is useful for anyone who wants to get more from their data: academics, educators, professionals or anyone who teaches data management, sharing and preservation.
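One concrete practice from the organization chapter, sketched minimally: a consistent, sortable file naming convention. The project_experiment_date_version scheme below is an illustrative choice, not a prescription from the book.

```python
from datetime import date
from pathlib import Path

def data_filename(project, experiment, version, ext="csv"):
    # ISO dates (YYYY-MM-DD) sort chronologically in file listings.
    stamp = date.today().isoformat()
    return Path(f"{project}_{experiment}_{stamp}_v{version:02d}.{ext}")

print(data_filename("micro", "growth-curve", 3))
# e.g. micro_growth-curve_2024-05-01_v03.csv (date varies)
```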
"An excellent practical treatise on the art and practice of data management, this book is essential to any researcher, regardless of subject or discipline." -Robert Buntrock, Chemical Information Bulletin
This research monograph brings together, for the first time, the varied literature on Yosida approximations of stochastic differential equations (SDEs) in infinite dimensions and their applications into a single cohesive work. The author provides a clear and systematic introduction to the Yosida approximation method and justifies its power by presenting its applications in some practical topics such as stochastic stability and stochastic optimal control. The theory assimilated spans more than 35 years of mathematics, but is developed slowly and methodically in digestible pieces. The book begins with a motivational chapter that introduces the reader to several different models that play recurring roles throughout the book as the theory is unfolded, and invites readers from different disciplines to see immediately that the effort required to work through the theory that follows is worthwhile. From there, the author presents the necessary prerequisite material, and then launches the reader into the main discussion of the monograph, namely, Yosida approximations of SDEs, Yosida approximations of SDEs with Poisson jumps, and their applications. Most of the results considered in the main chapters appear for the first time in book form, and contain illustrative examples on stochastic partial differential equations. The key steps are included in all proofs, especially the various estimates, which help the reader to get a true feel for the theory of Yosida approximations and their use. This work is intended for researchers and graduate students in mathematics specializing in probability theory and will appeal to numerical analysts, engineers, physicists and practitioners in finance who want to apply the theory of stochastic evolution equations. Since the approach is based mainly on semigroup theory, it is amenable to a wide audience including non-specialists in stochastic processes.
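For readers new to the method's namesake: the classical Yosida approximation of a closed linear operator A, built from the resolvent R(lambda, A) = (lambda I - A)^{-1}, takes the standard form below; the book develops its stochastic, infinite-dimensional counterpart.

```latex
% Classical Yosida approximation of a closed linear operator A;
% the standard definition underlying the book's approximations of SDEs.
\[
  A_{\lambda} \;=\; \lambda A\, R(\lambda, A) \;=\; \lambda^{2} R(\lambda, A) - \lambda I,
  \qquad
  A_{\lambda}x \;\longrightarrow\; Ax \ \text{ as } \lambda \to \infty,
  \quad x \in D(A).
\]
```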
You may like...
* Mathematical Statistics with… - William Mendenhall, Dennis Wackerly, … (Paperback)
* Stats: Data and Models, Global Edition… - Richard De Veaux, Paul Velleman, … (Digital product license key) - R1,516
* Statistics for Management and Economics - Gerald Keller, Nicoleta Gaciu (Paperback)
* Time Series Analysis - With Applications… - Jonathan D. Cryer, Kung-Sik Chan (Hardcover) - R2,549
* Numbers, Hypotheses & Conclusions - A… - Colin Tredoux, Kevin Durrheim (Paperback)
* The Practice of Statistics for Business… - David S Moore, George P. McCabe, … (Mixed media product) - R2,284