This volume contains nineteen research papers in the areas of computational statistics, data mining, and their applications. The papers, all written specifically for this volume, are their authors' contributions to honour and celebrate Professor Jacek Koronacki on the occasion of his 70th birthday. The book's related and often interconnected topics represent Jacek Koronacki's research interests and their evolution. They also clearly indicate how close the areas of computational statistics and data mining are.
1. The Language of Probability.- 2. Events.- 3. Probability Spaces.- 4. Discrete Probabilities. Counting.- 5. Random Variables.- 6. Conditional Probability. Independence.- 7. Discrete Random Variables. Common Distributions.- 8. Expected Values. Characteristic Values.- 9. Generating Functions.- 10. Stieltjes-Lebesgue Measures. Integrals of Real Random Variables.- 11. Expected Values. Absolutely Continuous Distributions.- 12. Random Vectors. Conditional Expectations. The Normal Distribution.- 13. Moment Generating Functions. Characteristic Functions.- 14. The Most Important (Absolutely Continuous) Probability Distributions.- 15. Distributions of Functions of a Random Variable.- 16. Stochastic Convergence.- 17. Laws of Large Numbers.- 18. The Central Role of the Normal Distribution. The Central Limit Theorem.- 19. The Law of the Iterated Logarithm.- 20. Applications of Probability Theory.- Solutions to the Exercises.
This book is designed as a gentle introduction to the fascinating field of choice modeling and its practical implementation using the R language. Discrete choice analysis is a family of methods useful to study individual decision-making. With strong theoretical foundations in consumer behavior, discrete choice models are used in the analysis of health policy, transportation systems, marketing, economics, public policy, political science, urban planning, and criminology, to mention just a few fields of application. The book does not assume prior knowledge of discrete choice analysis or R, but instead strives to introduce both in an intuitive way, starting from simple concepts and progressing to more sophisticated ideas. Loaded with a wealth of examples and code, the book covers the fundamentals of data and analysis in a progressive way. Readers begin with simple data operations and the underlying theory of choice analysis and conclude by working with sophisticated models including latent class logit models, mixed logit models, and ordinal logit models with taste heterogeneity. Data visualization is emphasized to explore both the input data as well as the results of models. This book should be of interest to graduate students, faculty, and researchers conducting empirical work using individual level choice data who are approaching the field of discrete choice analysis for the first time. In addition, it should interest more advanced modelers wishing to learn about the potential of R for discrete choice analysis. By embedding the treatment of choice modeling within the R ecosystem, readers benefit from learning about the larger R family of packages for data exploration, analysis, and visualization.
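To give a flavour of the models this book covers, the core multinomial logit choice probability can be sketched in a few lines. (The book itself works in R; the Python sketch below and its utility values are purely illustrative, not taken from the book.)

```python
import math

def mnl_probabilities(utilities):
    """Multinomial logit: P(i) = exp(V_i) / sum_j exp(V_j)."""
    m = max(utilities)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical systematic utilities for three travel modes: car, bus, bike
V = [1.2, 0.4, -0.3]
probs = mnl_probabilities(V)
print([round(p, 3) for p in probs])  # choice shares, summing to 1
```

The alternative with the highest systematic utility receives the largest choice probability; latent class and mixed logit models, treated later in the book, generalize this formula by letting the utility coefficients vary across decision-makers.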
IBM SPSS Statistics 27 Step by Step: A Simple Guide and Reference, seventeenth edition, takes a straightforward, step-by-step approach that makes SPSS software clear to beginners and experienced researchers alike. Extensive use of four-color screen shots, clear writing, and step-by-step boxes guide readers through the program. Output for each procedure is explained and illustrated, and every output term is defined. Exercises at the end of each chapter support students by providing additional opportunities to practice using SPSS. This book covers the basics of statistical analysis and addresses more advanced topics such as multidimensional scaling, factor analysis, discriminant analysis, measures of internal consistency, MANOVA (between- and within-subjects), cluster analysis, log-linear models, logistic regression, and a chapter describing residuals. The end sections include a description of data files used in exercises, an exhaustive glossary, suggestions for further reading, and a comprehensive index. IBM SPSS Statistics 27 Step by Step is distributed in 85 countries, has been an academic best seller through most of the earlier editions, and has proved an invaluable aid to thousands of researchers and students. New to this edition: screenshots, explanations, and step-by-step boxes have been fully updated to reflect SPSS 27, and a new chapter on a priori power analysis helps researchers determine the sample size needed for their research before starting data collection.
The papers in this volume represent the most timely and advanced contributions to the 2014 Joint Applied Statistics Symposium of the International Chinese Statistical Association (ICSA) and the Korean International Statistical Society (KISS), held in Portland, Oregon. The contributions cover new developments in statistical modeling and clinical research, including model development, model checking, and innovative clinical trial design and analysis. Each paper was peer-reviewed by at least two referees and also by an editor. The conference was attended by over 400 participants from academia, industry, and government agencies around the world, including from North America, Asia, and Europe. It offered 3 keynote speeches, 7 short courses, 76 parallel scientific sessions, student paper sessions, and social events.
This monograph provides a concise presentation of a mathematical approach to metastability, a widespread phenomenon in the dynamics of non-linear systems - physical, chemical, biological or economic - subject to the action of temporal random forces typically referred to as noise, based on the potential theory of reversible Markov processes. The authors shed new light on the metastability phenomenon as a sequence of visits of the path of the process to different metastable sets, and focus on the precise analysis of the respective hitting probabilities and hitting times of these sets. The theory is illustrated with many examples, ranging from finite-state Markov chains, finite-dimensional diffusions and stochastic partial differential equations, via mean-field dynamics with and without disorder, to stochastic spin-flip and particle-hop dynamics and probabilistic cellular automata, unveiling the common universal features of these systems with respect to their metastable behaviour. The monograph will serve both as a comprehensive introduction and as a reference for graduate students and researchers interested in metastability.
The book "Computational Error and Complexity in Science and Engineering" pervades all the science and engineering disciplines where computation occurs. Scientific and engineering computation is the interface between the mathematical model and the real-world application: any real-world implementation needs good-quality numerical values, since mathematical symbols alone are of no use to engineers and technologists. The computational complexity of the numerical method used to solve the mathematical model, computed along with the solution, tells us how much computational effort was spent to achieve that quality of result. Anyone who wants a specified physical problem solved has every right to know both the quality of the solution and the resources spent obtaining it; the computed error and the computed complexity together provide a scientifically convincing answer to these questions.
The biennial CONTROLO conferences are the main events promoted by the Portuguese Association for Automatic Control - APCA, the national member organization of the International Federation of Automatic Control - IFAC. CONTROLO 2016, the 12th Portuguese Conference on Automatic Control, held in Guimaraes, Portugal, from September 14th to 16th, was organized by Algoritmi, School of Engineering, University of Minho, in partnership with INESC TEC. The seventy-five papers published in this volume cover a wide range of topics. Thirty-one of them, of a more theoretical nature, are distributed among the first five parts: Control Theory; Optimal and Predictive Control; Fuzzy, Neural and Genetic Control; Modeling and Identification; Sensing and Estimation. The papers range from cutting-edge theoretical research to innovative control applications and show expressively how Automatic Control can be used to increase people's well-being.
Hardbound. This reference work covers the many aspects of Robust Inference. Much of what is contained in the chapters, written by leading experts in the field, has not been part of previous surveys of this area. Robust Inference has been an active area of research for the last two decades. Especially during recent years it has been extended in different directions covering a wide variety of models. This volume will be valuable for both graduate students and researchers using statistical methods.
Based on the proceedings of a conference on Influence Diagrams for Decision Analysis, Inference and Prediction held at the University of California at Berkeley in May of 1988, this is the first book devoted to the subject. The editors have brought together recent results from researchers actively investigating influence diagrams and also from practitioners who have used influence diagrams in developing models for problem-solving in a wide range of fields.
This Festschrift in honour of Ursula Gather's 60th birthday deals with modern topics in the field of robust statistical methods, especially for time series and regression analysis, and with statistical methods for complex data structures. The individual contributions of leading experts provide a textbook-style overview of the topic, supplemented by current research results and open questions. The statistical theory and methods in this volume aim at the analysis of data which deviate from classical stringent model assumptions, contain outlying values, and/or have a complex structure. Written for researchers as well as master's and PhD students with a good knowledge of statistics.
Covers the key topics students need in order to understand and analyse the core empirical issues in economics. It focuses on descriptive statistics, probability concepts, and basic econometric techniques, and has an accompanying website that contains all the data used in the examples and provides exercises for undertaking original research.
This volume reviews the theory and simulation methods of stochastic kinetics by integrating historical and recent perspectives, and presents applications, mostly in the context of systems biology but also in combustion theory. In recent years, thanks to developments in experimental techniques such as optical imaging, single-cell analysis, and fluorescence spectroscopy, biochemical kinetic data from inside single living cells have become increasingly available. The emergence of systems biology has brought a renaissance in the application of stochastic kinetic methods.
This volume presents the latest advances and trends in stochastic models and related statistical procedures. Selected peer-reviewed contributions focus on statistical inference, quality control, change-point analysis and detection, empirical processes, time series analysis, survival analysis and reliability, statistics for stochastic processes, big data in technology and the sciences, statistical genetics, experiment design, and stochastic models in engineering. Stochastic models and related statistical procedures play an important part in furthering our understanding of the challenging problems currently arising in areas of application such as the natural sciences, information technology, engineering, image analysis, genetics, energy and finance, to name but a few. This collection arises from the 12th Workshop on Stochastic Models, Statistics and Their Applications, Wroclaw, Poland.
Between Certainty and Uncertainty is a one-of-a-kind short course on statistics for students, engineers, and researchers. It is a fascinating introduction to statistics and probability, with notes on historical origins and 80 illustrative numerical examples, organized in five units. Chapter 1, "Descriptive Statistics": compressing small samples; basic averages - mean and variance - and their main properties, including God's proof; linear transformations and "z-scored" statistics. Chapter 2, "Grouped Data": Udny Yule's concept of qualitative and quantitative variables; grouping these two kinds of data; graphical tools; combinatorial rules and qualitative variables; designing frequency histograms; direct and coded evaluation of quantitative data; significance of percentiles. Chapter 3, "Regression and Correlation": geometrical distance and equivalent distances in two orthogonal directions as a prerequisite to the concept of two regression lines; pitfalls in interpreting two regression lines; derivation of the two regression lines; Was Hubble right?; Houbolt's cloud; what does the correlation coefficient in fact measure? Chapter 4, "Binomial Distribution": Middle Ages origins of the binomials; figurate numbers and combinatorial rules; Pascal's Arithmetical Triangle; Bernoulli's or Poisson trials?; John Arbuthnot curing binomials; how Newton taught S. Pepys probability; Jacob Bernoulli's Weak Law of Large Numbers, and others. Chapter 5, "Normal Distribution and Binomial Heritage": tables of the normal distribution; Abraham de Moivre and the second theorem of de Moivre-Laplace.
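The "two regression lines" of Chapter 3 (y on x, and x on y) and the correlation coefficient that connects them can be sketched numerically; the data below are made up for illustration only.

```python
def regression_lines(xs, ys):
    """Return slopes of the y-on-x and x-on-y regressions plus correlation r."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b_yx = sxy / sxx  # slope of the y-on-x regression line
    b_xy = sxy / syy  # slope of the x-on-y line (x treated as response)
    r = sxy / (sxx * syy) ** 0.5
    return b_yx, b_xy, r

xs = [1, 2, 3, 4, 5]
ys = [2, 4, 5, 4, 6]
b_yx, b_xy, r = regression_lines(xs, ys)
# The two distinct slopes are linked through r: r^2 = b_yx * b_xy
print(b_yx, b_xy, r)
```

The fact that the two lines generally differ, coinciding only when |r| = 1, is precisely the interpretive pitfall the chapter warns about.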
This volume, which highlights recent advances in statistical methodology and applications, is divided into two main parts. The first part presents theoretical results on estimation techniques in functional statistics, while the second examines three key areas of application: estimation problems in queuing theory, an application in signal processing, and the copula approach to epidemiologic modelling. The book's peer-reviewed contributions are based on papers originally presented at the Marrakesh International Conference on Probability and Statistics held in December 2013.
An Introduction to Measure-Theoretic Probability, Second Edition, employs a classical approach to teaching the basics of measure-theoretic probability. This book provides, in a concise yet detailed way, the bulk of the probabilistic tools that a student working toward an advanced degree in statistics, probability, and other related areas should be equipped with. This edition requires no prior knowledge of measure theory, covers all its topics in great detail, and includes one chapter on the basics of ergodic theory and one chapter on two cases of statistical estimation. Topics range from the basic properties of a measure to modes of convergence of a sequence of random variables and their relationships; the integral of a random variable and its basic properties; standard convergence theorems; standard moment and probability inequalities; the Hahn-Jordan Decomposition Theorem; the Lebesgue Decomposition Theorem; conditional expectation and conditional probability; the theory of characteristic functions; sequences of independent random variables; and ergodic theory. There is a considerable bent toward the way probability is actually used in statistical research, finance, and other academic and nonacademic applied pursuits. Extensive exercises and practical examples are included, and all proofs are presented in full detail. Complete and detailed solutions to all exercises are available to instructors on the book companion site. This text will be a valuable resource for graduate students primarily in statistics, mathematics, electrical and computer engineering, or other information sciences, as well as for those in mathematical economics/finance in departments of economics.
Stochastic Orders in Reliability and Risk Management is composed of 19 contributions on the theory of stochastic orders, stochastic comparison of order statistics, stochastic orders in reliability and risk analysis, and applications. These review/exploratory chapters present recent and current research on stochastic orders reported at the International Workshop on Stochastic Orders in Reliability and Risk Management, or SORR2011, which took place in the City Hotel, Xiamen, China, from June 27 to June 29, 2011. The conference's talks and invited contributions also represent the celebration of Professor Moshe Shaked, who has made comprehensive, fundamental contributions to the theory of stochastic orders and its applications in reliability, queueing modeling, operations research, economics and risk analysis. This volume is in honor of Professor Moshe Shaked. The work presented in this volume represents active research on stochastic orders and multivariate dependence, and exemplifies close collaborations between scholars working in different fields. The Xiamen Workshop and this volume seek to revive the community workshop tradition on stochastic orders and dependence and strengthen research collaboration, while honoring the work of a distinguished scholar.
This book explores different approaches to defining the concept of region depending on the specific question that needs to be answered. While the typical administrative spatial data division fits certain research questions well, in many cases, defining regions in a different way is fundamental in order to obtain significant empirical evidence. The book is divided into three parts: The first part is dedicated to a methodological discussion of the concept of region and the different potential approaches from different perspectives. The problem of having sufficient information to define different regional units is always present. This justifies the second part of the book, which focuses on the techniques of ecological inference applied to estimating disaggregated data from observable aggregates. Finally, the book closes by presenting several applications that are in line with the functional areas definition in regional analysis.
The only comprehensive guide to the theory and practice of one of today's most important probabilistic techniques. An indispensable resource for researchers in sequential analysis, Sequential Estimation is an ideal graduate-level text as well.
The main body of this book is devoted to statistical physics, whereas much less emphasis is given to thermodynamics. In particular, the idea is to present the most important outcomes of thermodynamics - most notably, the laws of thermodynamics - as conclusions from derivations in statistical physics. Special emphasis is on subjects that are vital to engineering education. These include, first of all, quantum statistics, like the Fermi-Dirac distribution, as well as diffusion processes, both of which are fundamental to a sound understanding of semiconductor devices. Another important issue for electrical engineering students is understanding of the mechanisms of noise generation and stochastic dynamics in physical systems, most notably in electric circuitry. Accordingly, the fluctuation-dissipation theorem of statistical mechanics, which is the theoretical basis for understanding thermal noise processes in systems, is presented from a signals-and-systems point of view, in a way that is readily accessible for engineering students and in relation with other courses in the electrical engineering curriculum, like courses on random processes.
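As an example of the quantum statistics the book emphasizes, the Fermi-Dirac occupancy f(E) = 1 / (exp((E - mu) / kT) + 1) can be evaluated directly; the energies and temperature below are illustrative values, not taken from the book.

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def fermi_dirac(energy_ev, mu_ev, temperature_k):
    """Mean occupancy of a single-particle state at the given energy."""
    if temperature_k == 0:
        return 1.0 if energy_ev < mu_ev else 0.0  # step function at T = 0
    x = (energy_ev - mu_ev) / (K_B * temperature_k)
    return 1.0 / (math.exp(x) + 1.0)

# At E = mu the occupancy is exactly 1/2, at any temperature
print(fermi_dirac(0.5, 0.5, 300.0))  # -> 0.5
```

The step-like behaviour of this function near the chemical potential at low temperature is what underlies the semiconductor-device applications the blurb mentions.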
This is the first book to systematically present control theory for stochastic distributed parameter systems, a comparatively new branch of mathematical control theory. The new phenomena and difficulties arising in the study of controllability and optimal control problems for this type of system are explained in detail. Interestingly enough, one has to develop new mathematical tools to solve some problems in this field, such as the global Carleman estimate for stochastic partial differential equations and the stochastic transposition method for backward stochastic evolution equations. In a certain sense, the stochastic distributed parameter control system is the most general control system in the context of classical physics. Accordingly, studying this field may also yield valuable insights into quantum control systems. A basic grasp of functional analysis, partial differential equations, and control theory for deterministic systems is the only prerequisite for reading this book.
This book provides an introduction to operational research methods and their application in the agrifood and environmental sectors. It explains the need for multicriteria decision analysis and teaches users how to use recent advances in multicriteria and clustering classification techniques in practice. Further, it presents some of the most common methodologies for statistical analysis and mathematical modeling, and discusses in detail ten examples that explain and show “hands-on” how operational research can be used in key decision-making processes at enterprises in the agricultural food and environmental industries. As such, the book offers a valuable resource especially well suited as a textbook for postgraduate courses.
The papers in this volume represent a broad, applied swath of advanced contributions to the 2015 ICSA/Graybill Applied Statistics Symposium of the International Chinese Statistical Association, held at Colorado State University in Fort Collins. The contributions cover topics that range from statistical applications in business and finance to applications in clinical trials and biomarker analysis. Each paper was peer-reviewed by at least two referees and also by an editor. The conference was attended by over 400 participants from academia, industry, and government agencies around the world, including from North America, Asia, and Europe.
Risk management for financial institutions is one of the key topics the financial industry has to deal with. The present volume is a mathematically rigorous text on solvency modeling. Currently, there are many new developments in this area in the financial and insurance industry (Basel III and Solvency II), but none of these developments provides a fully consistent and comprehensive framework for the analysis of solvency questions. Merz and Wuthrich combine ideas from financial mathematics (no-arbitrage theory, equivalent martingale measures), actuarial science (insurance claims modeling, cash flow valuation), and economic theory (risk aversion, probability distortion) to provide a fully consistent framework. Within this framework they then study solvency questions in incomplete markets, analyze hedging risks, and study asset-and-liability management questions, as well as issues such as limited liability options, dividend-to-shareholder questions, the role of re-insurance, etc. This work embeds the solvency discussion (and long-term liabilities) into a scientific framework and is intended for researchers as well as practitioners in the financial and actuarial industry, especially those in charge of internal risk management systems. Readers should have a good background in probability theory and statistics, and should be familiar with popular distributions, stochastic processes, martingales, etc.