This reference work covers the many aspects of robust inference. Much of what is contained in the chapters, written by leading experts in the field, has not been part of previous surveys of this area. Robust inference has been an active area of research for the last two decades, and especially during recent years it has been extended in different directions, covering a wide variety of models. This volume will be valuable for both graduate students and researchers using statistical methods.
This book explores different approaches to defining the concept of region depending on the specific question that needs to be answered. While the typical administrative spatial data division fits certain research questions well, in many cases, defining regions in a different way is fundamental in order to obtain significant empirical evidence. The book is divided into three parts: The first part is dedicated to a methodological discussion of the concept of region and the different potential approaches from different perspectives. The problem of having sufficient information to define different regional units is always present. This justifies the second part of the book, which focuses on the techniques of ecological inference applied to estimating disaggregated data from observable aggregates. Finally, the book closes by presenting several applications that are in line with the functional areas definition in regional analysis.
The papers in this volume represent a broad, applied swath of advanced contributions to the 2015 ICSA/Graybill Applied Statistics Symposium of the International Chinese Statistical Association, held at Colorado State University in Fort Collins. The contributions cover topics that range from statistical applications in business and finance to applications in clinical trials and biomarker analysis. Each paper was peer-reviewed by at least two referees and also by an editor. The conference was attended by over 400 participants from academia, industry, and government agencies around the world, including North America, Asia, and Europe.
Statistics is more topical than ever. Numerous decisions depend on statistical considerations: just think of the Corona crisis or decisions about approving new drugs or other products. If researchers announce they have proved some fact using statistical tests, can we then always be sure that their claim is correct? How, and more importantly why, does statistics work? What can we expect from statistics and what not? Fact or Fluke? is not a textbook that explains statistical tests to the reader; instead, it discusses what comes before those tests: the philosophy behind the statistics. Should one carry out tests, or are there other ways to look at statistics? Ronald Meester and Klaas Slooten use a variety of examples - from court cases to theoretical physics - to present different views on statistics and provide arguments for what they think is the best point of view. This book is meant for anyone who is in some way concerned with, or interested in, statistical evidence: scientific researchers, students, teachers, mathematicians, philosophers, lawyers, managers, and no doubt many others.
This monograph provides a concise presentation of a mathematical approach to metastability, a widespread phenomenon in the dynamics of non-linear systems - physical, chemical, biological or economic - subject to the action of temporal random forces typically referred to as noise, based on potential theory of reversible Markov processes. The authors shed new light on the metastability phenomenon as a sequence of visits of the path of the process to different metastable sets, and focus on the precise analysis of the respective hitting probabilities and hitting times of these sets. The theory is illustrated with many examples, ranging from finite-state Markov chains, finite-dimensional diffusions and stochastic partial differential equations, via mean-field dynamics with and without disorder, to stochastic spin-flip and particle-hop dynamics and probabilistic cellular automata, unveiling the common universal features of these systems with respect to their metastable behaviour. The monograph will serve both as a comprehensive introduction and as a reference for graduate students and researchers interested in metastability.
This monograph provides a self-contained and easy-to-read introduction to non-commutative multiple-valued logic algebras, a subject which has attracted much interest in the past few years because of its impact on information science, artificial intelligence and other subjects.
This is the first statistics text to address the unique issues the Marine Affairs professional and student must confront. Marine and coastal resource management is unique in that problem solutions increasingly demand an interdisciplinary approach using data from both the social and natural sciences. Because so many of the problems faced by environmental managers are interdisciplinary, involving data and information from a host of disciplines including both natural and social sciences, this volume includes a selected number of both parametric and non-parametric statistical models, presented in a non-intimidating fashion. The selection of methods has been guided by the type of problems Marine Affairs professionals deal with on a day-to-day basis. The text is written for the non-mathematical reader who may have little or no prior experience in statistics or advanced mathematics. Each chapter first describes a method, then works through one or two fully solved examples, and concludes with a lab for student use. This volume will be of value to students and professionals involved with the description, analysis, and evaluation of coastal and marine resource issues.
The volume contains articles that should appeal to readers with computational, modeling, theoretical, and applied interests. Methodological issues include parallel computation, Hamiltonian Monte Carlo, dynamic model selection, small sample comparison of structural models, Bayesian thresholding methods in hierarchical graphical models, adaptive reversible jump MCMC, LASSO estimators, parameter expansion algorithms, the implementation of parameter and non-parameter-based approaches to variable selection, a survey of key results in objective Bayesian model selection methodology, and a careful look at the modeling of endogeneity in discrete data settings. Important contemporary questions are examined in applications in macroeconomics, finance, banking, labor economics, industrial organization, and transportation, among others, in which model uncertainty is a central consideration.
This is the first comprehensive book on information geometry, written by the founder of the field. It begins with an elementary introduction to dualistic geometry and proceeds to a wide range of applications, covering information science, engineering, and neuroscience. It consists of four parts, which on the whole can be read independently. A manifold with a divergence function is first introduced, leading directly to dualistic structure, the heart of information geometry. This part (Part I) can be apprehended without any knowledge of differential geometry. An intuitive explanation of modern differential geometry then follows in Part II, although the book is for the most part understandable without modern differential geometry. Information geometry of statistical inference, including time series analysis and semiparametric estimation (the Neyman-Scott problem), is demonstrated concisely in Part III. Applications addressed in Part IV include hot current topics in machine learning, signal processing, optimization, and neural networks. The book is interdisciplinary, connecting mathematics, information sciences, physics, and neurosciences, inviting readers to a new world of information and geometry. This book is highly recommended to graduate students and researchers who seek new mathematical methods and tools useful in their own fields.
The only comprehensive guide to the theory and practice of one of today's most important probabilistic techniques. An indispensable resource for researchers in sequential analysis, Sequential Estimation is an ideal graduate-level text as well.
This volume reviews and summarizes some of A. I. McLeod's significant contributions to time series analysis. It also contains original contributions to the field and to related areas by participants of the festschrift held in June 2014 and friends of Dr. McLeod. Covering a diverse range of state-of-the-art topics, this volume strikes a good balance between applied and theoretical research across fourteen contributions by experts in the field. It will be of interest to researchers and practitioners in time series, econometricians, and graduate students in time series or econometrics, as well as environmental statisticians, data scientists, statisticians interested in graphical models, and researchers in quantitative risk management.
Stochastic Orders in Reliability and Risk Management is composed of 19 contributions on the theory of stochastic orders, stochastic comparison of order statistics, stochastic orders in reliability and risk analysis, and applications. These review/exploratory chapters present recent and current research on stochastic orders reported at the International Workshop on Stochastic Orders in Reliability and Risk Management (SORR2011), which took place at the City Hotel, Xiamen, China, from June 27 to June 29, 2011. The conference's talks and invited contributions celebrate Professor Moshe Shaked, in whose honor this volume was prepared and who has made comprehensive, fundamental contributions to the theory of stochastic orders and its applications in reliability, queueing modeling, operations research, economics and risk analysis. The work presented in this volume represents active research on stochastic orders and multivariate dependence, and exemplifies close collaborations between scholars working in different fields. The Xiamen Workshop and this volume seek to revive the community workshop tradition on stochastic orders and dependence and strengthen research collaboration, while honoring the work of a distinguished scholar.
This book offers a comprehensive and systematic introduction to the latest research on hesitant fuzzy decision-making theory. It includes six parts: the hesitant fuzzy set and its extensions, novel hesitant fuzzy measures, hesitant fuzzy hybrid weighted aggregation operators, hesitant fuzzy multiple-criteria decision-making with incomplete weight information, hesitant fuzzy multiple-criteria decision-making with complete weight information, and hesitant fuzzy preference relation based decision-making theory. These methodologies are implemented in various fields such as decision-making, medical diagnosis, cluster analysis, service quality management, e-learning management and environmental management. A valuable resource for engineers, technicians, and researchers in the fields of fuzzy mathematics, operations research, information science, management science and engineering, it can also be used as a textbook for postgraduate and senior undergraduate students.
This is a collection of papers by participants in the High Dimensional Probability VI meeting, held October 9-14, 2011, at the Banff International Research Station in Banff, Alberta, Canada. High Dimensional Probability (HDP) is an area of mathematics that includes the study of probability distributions and limit theorems in infinite-dimensional spaces such as Hilbert spaces and Banach spaces. The most remarkable feature of this area is that it has resulted in the creation of powerful new tools and perspectives, whose range of application has led to interactions with other areas of mathematics, statistics, and computer science. These include random matrix theory, nonparametric statistics, empirical process theory, statistical learning theory, concentration of measure phenomena, strong and weak approximations, distribution function estimation in high dimensions, combinatorial optimization, and random graph theory. The papers in this volume show that HDP theory continues to develop new tools, methods, techniques and perspectives to analyze random phenomena. Both researchers and advanced students will find this book of great use for learning about new avenues of research.
This is a unique book addressing the integration of risk methodology from various fields. It will stimulate intellectual debate and communication across disciplines, promote better risk management practices and contribute to the development of risk management methodologies. Individual chapters explain fundamental risk models and measurement, and address risk and security issues from diverse areas such as finance and insurance, the health sciences, life sciences, engineering and information science. Integrated Risk Sciences is an emerging discipline that considers risks across different fields, aiming at a common language and at sharing and improving the methods developed in each. Readers should have a Bachelor's degree and have taken at least one basic university course in statistics and probability. The main goal of the book is to provide basic knowledge on risk and security in a common language; the authors have taken particular care to ensure that all content can readily be understood by doctoral students and researchers across disciplines. Each chapter provides simple case studies and examples, open research questions and discussion points, and a selected bibliography inviting readers to further study.
This volume, which highlights recent advances in statistical methodology and applications, is divided into two main parts. The first part presents theoretical results on estimation techniques in functional statistics, while the second examines three key areas of application: estimation problems in queuing theory, an application in signal processing, and the copula approach to epidemiologic modelling. The book's peer-reviewed contributions are based on papers originally presented at the Marrakesh International Conference on Probability and Statistics held in December 2013.
The celebrated Parisi solution of the Sherrington-Kirkpatrick model for spin glasses is one of the most important achievements in the field of disordered systems. Over the last three decades, through the efforts of theoretical physicists and mathematicians, the essential aspects of the Parisi solution were clarified and proved mathematically. The core ideas of the theory that emerged are the subject of this book, including the recent solution of the Parisi ultrametricity conjecture and a conceptually simple proof of the Parisi formula for the free energy. The treatment is self-contained and should be accessible to graduate students with a background in probability theory, with no prior knowledge of spin glasses. The methods involved in the analysis of the Sherrington-Kirkpatrick model also serve as a good illustration of such classical topics in probability as the Gaussian interpolation and concentration of measure, Poisson processes, and representation results for exchangeable arrays.
This research monograph provides a synthesis of a number of statistical tests and measures, which, at first consideration, appear disjoint and unrelated. Numerous comparisons of permutation and classical statistical methods are presented, and the two methods are compared via probability values and, where appropriate, measures of effect size. Permutation statistical methods, compared to classical statistical methods, do not rely on theoretical distributions, avoid the usual assumptions of normality and homogeneity of variance, and depend only on the data at hand. This text takes a unique approach to explaining statistics by integrating a large variety of statistical methods, and establishing the rigor of a topic that to many may seem to be a nascent field in statistics. This topic is new in that it took modern computing power to make permutation methods available to people working in the mainstream of research. The book is written for a statistically informed audience and can also easily serve as a textbook in a graduate course in departments such as statistics, psychology, or biology. In particular, the audience for the book is teachers of statistics, practicing statisticians, applied statisticians, and quantitative graduate students in fields such as medical research, epidemiology, public health, and biology.
This book provides an introduction to operational research methods and their application in the agrifood and environmental sectors. It explains the need for multicriteria decision analysis and teaches users how to use recent advances in multicriteria and clustering classification techniques in practice. Further, it presents some of the most common methodologies for statistical analysis and mathematical modeling, and discusses in detail ten examples that explain and show “hands-on” how operational research can be used in key decision-making processes at enterprises in the agricultural food and environmental industries. As such, the book offers a valuable resource especially well suited as a textbook for postgraduate courses.
Although there are many books on mathematical finance, few deal with the statistical aspects of modern data analysis as applied to financial problems. This textbook fills this gap by addressing some of the most challenging issues facing financial engineers. It shows how sophisticated mathematics and modern statistical techniques can be used in the solutions of concrete financial problems. Concerns of risk management are addressed by the study of extreme values, the fitting of distributions with heavy tails, the computation of values at risk (VaR), and other measures of risk. Principal component analysis (PCA), smoothing, and regression techniques are applied to the construction of yield and forward curves. Time series analysis is applied to the study of temperature options and nonparametric estimation. Nonlinear filtering is applied to Monte Carlo simulations, option pricing and earnings prediction. This textbook is intended for undergraduate students majoring in financial engineering, or graduate students in a Master in finance or MBA program. It is sprinkled with practical examples using market data, and each chapter ends with exercises. Practical examples are solved in the R computing environment. They illustrate problems occurring in the commodity, energy and weather markets, as well as the fixed income, equity and credit markets. The examples, experiments and problem sets are based on the library Rsafd developed for the purpose of the text. The book should help quantitative analysts learn and implement advanced statistical concepts. Also, it will be valuable for researchers wishing to gain experience with financial data, implement and test mathematical theories, and address practical issues that are often ignored or underestimated in academic curricula. This is the new, fully revised edition of the book "Statistical Analysis of Financial Data in S-Plus." Rene Carmona is the Paul M. Wythes '55 Professor of Engineering and Finance at Princeton University in the department of Operations Research and Financial Engineering, and Director of Graduate Studies of the Bendheim Center for Finance. His publications include over one hundred articles and eight books in probability and statistics. He was elected Fellow of the Institute of Mathematical Statistics in 1984, and of the Society for Industrial and Applied Mathematics in 2010. He is on the editorial board of several peer-reviewed journals and book series. Professor Carmona has developed computer programs for teaching statistics and research in signal analysis and financial engineering. He has worked for many years on energy, the commodity markets and more recently in environmental economics, and he is recognized as a leading researcher and expert in these areas.
The main body of this book is devoted to statistical physics, whereas much less emphasis is given to thermodynamics. In particular, the idea is to present the most important outcomes of thermodynamics - most notably, the laws of thermodynamics - as conclusions from derivations in statistical physics. Special emphasis is on subjects that are vital to engineering education. These include, first of all, quantum statistics, like the Fermi-Dirac distribution, as well as diffusion processes, both of which are fundamental to a sound understanding of semiconductor devices. Another important issue for electrical engineering students is understanding of the mechanisms of noise generation and stochastic dynamics in physical systems, most notably in electric circuitry. Accordingly, the fluctuation-dissipation theorem of statistical mechanics, which is the theoretical basis for understanding thermal noise processes in systems, is presented from a signals-and-systems point of view, in a way that is readily accessible for engineering students and in relation with other courses in the electrical engineering curriculum, like courses on random processes.
New Perspectives in Partial Least Squares and Related Methods shares original, peer-reviewed research from presentations during the 2012 partial least squares methods meeting (PLS 2012). This was the 7th meeting in the series of PLS conferences and the first to take place in the USA. PLS is an abbreviation for Partial Least Squares and is also sometimes expanded as projection to latent structures. This is an approach for modeling relations between data matrices of different types of variables measured on the same set of objects. The twenty-two papers in this volume, which include three invited contributions from our keynote speakers, provide a comprehensive overview of the current state of the most advanced research related to PLS and related methods. Prominent scientists from around the world took part in PLS 2012 and their contributions covered the multiple dimensions of the partial least squares-based methods. These exciting theoretical developments ranged from partial least squares regression and correlation and component-based path modeling to regularized regression and subspace visualization. In following the tradition of the six previous PLS meetings, these contributions also included a large variety of PLS approaches such as PLS metamodels, variable selection, sparse PLS regression, distance based PLS, significance vs. reliability, and non-linear PLS. Finally, these contributions applied PLS methods to data ranging from traditional econometric/economic data to genomics data, brain images, information systems, epidemiology, and chemical spectroscopy. Such a broad and comprehensive volume will also encourage new uses of PLS models in work by researchers and students in many fields.
You may like...
Genres on the Web - Computational Models… (Alexander Mehler, Serge Sharoff, …), Hardcover, R4,212 (Discovery Miles 42 120)
Language in Complexity - The Emerging… (Francesco La Mantia, Ignazio Licata, …), Hardcover
Bayesian Natural Language Semantics and… (Henk Zeevat, Hans-Christian Schmitz), Hardcover, R3,359 (Discovery Miles 33 590)
Visualizing the Semantic Web - XML-based… (Vladimir Geroimenko, Chaomei Chen), Hardcover, R2,682 (Discovery Miles 26 820)
Where Humans Meet Machines - Innovative… (Amy Neustein, Judith A. Markowitz), Hardcover, R2,691 (Discovery Miles 26 910)
New Developments in Parsing Technology (H Bunt, John Carroll, …), Hardcover, R4,233 (Discovery Miles 42 330)
Semantic Agent Systems - Foundations and… (Atilla Elci, Mamadou Tadiou Kone, …), Hardcover, R4,186 (Discovery Miles 41 860)