The papers in this volume represent the most timely and advanced contributions to the 2014 Joint Applied Statistics Symposium of the International Chinese Statistical Association (ICSA) and the Korean International Statistical Society (KISS), held in Portland, Oregon. The contributions cover new developments in statistical modeling and clinical research, including model development, model checking, and innovative clinical trial design and analysis. Each paper was peer-reviewed by at least two referees and also by an editor. The conference was attended by over 400 participants from academia, industry, and government agencies around the world, including from North America, Asia, and Europe. It offered 3 keynote speeches, 7 short courses, 76 parallel scientific sessions, student paper sessions, and social events.
This Festschrift resulted from a workshop on "Advanced Modelling in Mathematical Finance" held in honour of Ernst Eberlein's 70th birthday, from 20 to 22 May 2015 in Kiel, Germany. It includes contributions by several invited speakers at the workshop, including several of Ernst Eberlein's long-standing collaborators and former students. Advanced mathematical techniques play an ever-increasing role in modern quantitative finance. Written by leading experts from academia and financial practice, this book offers state-of-the-art papers on the application of jump processes in mathematical finance, on term-structure modelling, and on statistical aspects of financial modelling. It is aimed at graduate students and researchers interested in mathematical finance, as well as practitioners wishing to learn about the latest developments.
1. The Language of Probability.- 2. Events.- 3. Probability Spaces.- 4. Discrete Probabilities; Counting.- 5. Random Variables.- 6. Conditional Probability; Independence.- 7. Discrete Random Variables; Common Distributions.- 8. Expected Values; Characteristic Values.- 9. Generating Functions.- 10. Stieltjes-Lebesgue Measures; Integrals of Real Random Variables.- 11. Expected Values; Absolutely Continuous Distributions.- 12. Random Vectors; Conditional Expectations; the Normal Distribution.- 13. Moment Generating Functions; Characteristic Functions.- 14. The Most Important (Absolutely Continuous) Probability Distributions.- 15. Distributions of Functions of a Random Variable.- 16. Stochastic Convergence.- 17. Laws of Large Numbers.- 18. The Central Role of the Normal Distribution; the Central Limit Theorem.- 19. The Law of the Iterated Logarithm.- 20. Applications of Probability Theory.- Solutions to the Exercises.
This is the first comprehensive book on information geometry, written by the founder of the field. It begins with an elementary introduction to dualistic geometry and proceeds to a wide range of applications, covering information science, engineering, and neuroscience. It consists of four parts, which on the whole can be read independently. A manifold with a divergence function is first introduced, leading directly to dualistic structure, the heart of information geometry. This part (Part I) can be apprehended without any knowledge of differential geometry. An intuitive explanation of modern differential geometry then follows in Part II, although the book is for the most part understandable without modern differential geometry. Information geometry of statistical inference, including time series analysis and semiparametric estimation (the Neyman-Scott problem), is demonstrated concisely in Part III. Applications addressed in Part IV include hot current topics in machine learning, signal processing, optimization, and neural networks. The book is interdisciplinary, connecting mathematics, information sciences, physics, and neurosciences, inviting readers to a new world of information and geometry. This book is highly recommended to graduate students and researchers who seek new mathematical methods and tools useful in their own fields.
This Festschrift in honour of Ursula Gather's 60th birthday deals with modern topics in the field of robust statistical methods, especially for time series and regression analysis, and with statistical methods for complex data structures. The individual contributions of leading experts provide a textbook-style overview of the topic, supplemented by current research results and questions. The statistical theory and methods in this volume aim at the analysis of data that deviate from classical stringent model assumptions, contain outlying values, and/or have a complex structure. Written for researchers as well as master's and PhD students with a good knowledge of statistics.
Based on the proceedings of a conference on Influence Diagrams for Decision Analysis, Inference and Prediction held at the University of California at Berkeley in May of 1988, this is the first book devoted to the subject. The editors have brought together recent results from researchers actively investigating influence diagrams and also from practitioners who have used influence diagrams in developing models for problem-solving in a wide range of fields.
This reference work covers the many aspects of Robust Inference. Much of what is contained in the chapters, written by leading experts in the field, has not been part of previous surveys of this area. Robust Inference has been an active area of research for the last two decades, and especially during recent years it has been extended in different directions covering a wide variety of models. This volume will be valuable for both graduate students and researchers using statistical methods.
This volume presents the latest advances and trends in stochastic models and related statistical procedures. Selected peer-reviewed contributions focus on statistical inference, quality control, change-point analysis and detection, empirical processes, time series analysis, survival analysis and reliability, statistics for stochastic processes, big data in technology and the sciences, statistical genetics, experiment design, and stochastic models in engineering. Stochastic models and related statistical procedures play an important part in furthering our understanding of the challenging problems currently arising in areas of application such as the natural sciences, information technology, engineering, image analysis, genetics, energy and finance, to name but a few. This collection arises from the 12th Workshop on Stochastic Models, Statistics and Their Applications, Wroclaw, Poland.
This volume contains nineteen research papers belonging to the areas of computational statistics, data mining, and their applications. The papers, all written specifically for this volume, are their authors' contributions to honour and celebrate Professor Jacek Koronacki on the occasion of his 70th birthday. The book's related and often interconnected topics represent Jacek Koronacki's research interests and their evolution. They also clearly indicate how close the areas of computational statistics and data mining are.
Between Certainty & Uncertainty is a one-of-a-kind short course on statistics for students, engineers and researchers. It is a fascinating introduction to statistics and probability with notes on historical origins and 80 illustrative numerical examples, organized in five units. Chapter 1, "Descriptive Statistics": compressing small samples; basic averages - mean and variance - and their main properties, including "God's proof"; linear transformations and "z-scored" statistics. Chapter 2, "Grouped Data": Udny Yule's concept of qualitative and quantitative variables; grouping these two kinds of data; graphical tools; combinatorial rules and qualitative variables; designing frequency histograms; direct and coded evaluation of quantitative data; significance of percentiles. Chapter 3, "Regression and Correlation": geometrical distance and equivalent distances in two orthogonal directions as a prerequisite to the concept of two regression lines; pitfalls in interpreting two regression lines; derivation of the two regression lines; Was Hubble right? Houbolt's cloud; what the correlation coefficient actually measures. Chapter 4, "Binomial Distribution": medieval origins of the binomials; figurate numbers and combinatorial rules; Pascal's Arithmetical Triangle; Bernoulli's or Poisson trials? John Arbuthnot curing binomials; how Newton taught S. Pepys probability; Jacob Bernoulli's Weak Law of Large Numbers and others. Chapter 5, "Normal Distribution and Binomial Heritage": tables of the normal distribution; Abraham de Moivre and the second theorem of de Moivre-Laplace.
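The "z-scored" statistics of Chapter 1 amount to a simple linear transformation of the sample; a minimal sketch (the data below are made up for illustration, not taken from the book):

```python
import numpy as np

# z-scoring a small sample: subtract the mean, divide by the
# (population) standard deviation.
x = np.array([12.0, 15.0, 9.0, 18.0, 11.0])
z = (x - x.mean()) / x.std(ddof=0)

# By construction, a z-scored sample has mean 0 and standard
# deviation 1 -- the linear-transformation property the chapter covers.
```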
This book is intended to make recent results on the derivation of higher order numerical schemes for random ordinary differential equations (RODEs) available to a broader readership, and to familiarize readers with RODEs themselves as well as the closely associated theory of random dynamical systems. In addition, it demonstrates how RODEs are being used in the biological sciences, where non-Gaussian and bounded noise are often more realistic than the Gaussian white noise in stochastic differential equations (SODEs). RODEs are used in many important applications and play a fundamental role in the theory of random dynamical systems. They can be analyzed pathwise with deterministic calculus, but require further treatment beyond that of classical ODE theory due to the lack of smoothness in their time variable. Although classical numerical schemes for ODEs can be used pathwise for RODEs, they rarely attain their traditional order, since the solutions of RODEs do not have sufficient smoothness to have Taylor expansions in the usual sense. However, Taylor-like expansions can be derived for RODEs using an iterated application of the appropriate chain rule in integral form, and represent the starting point for the systematic derivation of consistent higher order numerical schemes for RODEs. The book is directed at a wide range of readers in applied and computational mathematics and related areas, as well as readers who are interested in the applications of mathematical models involving random effects, in particular in the biological sciences. The level of this book is suitable for graduate students in applied mathematics and related areas, computational sciences and systems biology. A basic knowledge of ordinary differential equations and numerical analysis is required.
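The pathwise viewpoint described above can be sketched in a few lines: sample a noise path once, then apply a deterministic one-step scheme along that fixed path. The toy RODE x'(t) = -x(t) + cos(zeta_t), with zeta a presampled Wiener path, is a hypothetical example chosen here for illustration, not one taken from the book:

```python
import numpy as np

rng = np.random.default_rng(0)

# Grid and one sampled Wiener path at the grid points.
T, n = 1.0, 1000
dt = T / n
zeta = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

# Pathwise explicit Euler for x'(t) = -x(t) + cos(zeta_t): the sampled
# path is treated as a fixed (rough) forcing term, and a deterministic
# scheme is applied step by step.  Because the forcing is only
# continuous, not differentiable, this scheme generally loses its
# classical order -- the motivation for the book's Taylor-like schemes.
x = np.empty(n + 1)
x[0] = 1.0
for k in range(n):
    x[k + 1] = x[k] + dt * (-x[k] + np.cos(zeta[k]))
```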
This is the first statistics text to address the unique issues that Marine Affairs professionals and students must confront. Marine and coastal resource management is unique in that problem solutions increasingly demand an interdisciplinary approach, drawing on data and information from a host of disciplines in both the natural and social sciences. Accordingly, this volume presents a selected number of both parametric and non-parametric statistical models in a non-intimidating fashion, with the selection of methods guided by the type of problems Marine Affairs professionals deal with on a day-to-day basis. The text is written for the non-mathematical reader who may have little or no prior experience in statistics or advanced mathematics. Each chapter is divided into two sections, one describing the method and one presenting one or two fully worked examples, and concludes with a lab for student use. This volume will be of value to students and professionals involved with the description, analysis, and evaluation of coastal and marine resource issues.
The book "Computational Error and Complexity in Science and Engineering" pervades all the science and engineering disciplines in which computation occurs. Scientific and engineering computation is the interface between the mathematical model and its real-world application: any real-world implementation needs good-quality numerical values, since symbolic mathematical quantities alone are of no use to engineers and technologists. The computational complexity of the numerical method used to solve the mathematical model, computed alongside the solution itself, tells us how much computational effort was spent to achieve that quality of result. Anyone who wants a specified physical problem solved has every right to know both the quality of the solution and the resources spent obtaining it; the computed error and the computed complexity together provide a scientifically convincing answer to these questions.
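The idea of reporting error and effort alongside the solution can be sketched with a bisection root-finder that returns, besides the root, an error bound (half the final bracket width) and the computational cost (number of function evaluations). This is a generic illustration of the book's theme, not a method taken from it:

```python
import math

def bisect_with_cost(f, a, b, tol=1e-10):
    """Bisection root-finder that reports the root together with an
    error bound and the number of function evaluations spent."""
    fa, fb = f(a), f(b)
    evals = 2
    assert fa * fb < 0, "root must be bracketed by [a, b]"
    while (b - a) / 2 > tol:
        m = (a + b) / 2
        fm = f(m)
        evals += 1
        if fa * fm <= 0:      # root lies in the left half
            b, fb = m, fm
        else:                  # root lies in the right half
            a, fa = m, fm
    return (a + b) / 2, (b - a) / 2, evals

# cos has a root at pi/2 inside [1, 2].
root, err_bound, cost = bisect_with_cost(math.cos, 1.0, 2.0)
```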
This book explores different approaches to defining the concept of region depending on the specific question that needs to be answered. While the typical administrative spatial data division fits certain research questions well, in many cases, defining regions in a different way is fundamental in order to obtain significant empirical evidence. The book is divided into three parts: The first part is dedicated to a methodological discussion of the concept of region and the different potential approaches from different perspectives. The problem of having sufficient information to define different regional units is always present. This justifies the second part of the book, which focuses on the techniques of ecological inference applied to estimating disaggregated data from observable aggregates. Finally, the book closes by presenting several applications that are in line with the functional areas definition in regional analysis.
The papers in this volume represent a broad, applied swath of advanced contributions to the 2015 ICSA/Graybill Applied Statistics Symposium of the International Chinese Statistical Association, held at Colorado State University in Fort Collins. The contributions cover topics that range from statistical applications in business and finance to applications in clinical trials and biomarker analysis. Each paper was peer-reviewed by at least two referees and also by an editor. The conference was attended by over 400 participants from academia, industry, and government agencies around the world, including from North America, Asia, and Europe.
The revision of this well-respected text presents a balanced approach of the classical and Bayesian methods and now includes a chapter on simulation (including Markov chain Monte Carlo and the Bootstrap), coverage of residual analysis in linear models, and many examples using real data. Probability & Statistics, Fourth Edition, was written for a one- or two-semester probability and statistics course. This course is offered primarily at four-year institutions and taken mostly by sophomore and junior level students majoring in mathematics or statistics. Calculus is a prerequisite, and a familiarity with the concepts and elementary properties of vectors and matrices is a plus.
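The bootstrap mentioned in the new simulation chapter can be sketched briefly: resample the data with replacement many times and recompute the statistic of interest on each resample (a minimal generic illustration; the data and settings here are made up, not drawn from the text):

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=5.0, scale=2.0, size=100)  # synthetic sample

# Nonparametric bootstrap: resample with replacement, recompute the
# sample mean on each of 2000 resamples.
boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(2000)
])

# The spread of the bootstrap distribution estimates the standard
# error of the mean; compare with the classical formula s / sqrt(n).
boot_se = boot_means.std(ddof=1)
classical_se = data.std(ddof=1) / np.sqrt(data.size)
```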
The only comprehensive guide to the theory and practice of one of today's most important probabilistic techniques, Sequential Estimation is an indispensable resource for researchers in sequential analysis and an ideal graduate-level text as well.
This monograph provides a concise presentation of a mathematical approach to metastability, a widespread phenomenon in the dynamics of non-linear systems - physical, chemical, biological or economic - subject to the action of temporal random forces typically referred to as noise, based on potential theory of reversible Markov processes. The authors shed new light on the metastability phenomenon as a sequence of visits of the path of the process to different metastable sets, and focus on the precise analysis of the respective hitting probabilities and hitting times of these sets. The theory is illustrated with many examples, ranging from finite-state Markov chains, finite-dimensional diffusions and stochastic partial differential equations, via mean-field dynamics with and without disorder, to stochastic spin-flip and particle-hop dynamics and probabilistic cellular automata, unveiling the common universal features of these systems with respect to their metastable behaviour. The monograph will serve both as a comprehensive introduction and as a reference for graduate students and researchers interested in metastability.
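For a finite-state Markov chain, the expected hitting times of a target set satisfy a small linear system: h_i = 1 + sum_j P[i,j] h_j for states i outside the target, with h_j = 0 inside it. The three-state chain below is a made-up toy (two "wells" and an intermediate state), meant only to illustrate the hitting-time quantities the monograph analyzes, not its potential-theoretic machinery:

```python
import numpy as np

# Toy transition matrix: two metastable wells (states 0 and 2) linked
# by an intermediate state 1.  Rows sum to 1.
P = np.array([
    [0.90, 0.10, 0.00],   # well 1
    [0.05, 0.90, 0.05],   # intermediate state
    [0.00, 0.10, 0.90],   # well 2 (target set)
])
target = [2]
rest = [i for i in range(P.shape[0]) if i not in target]

# Solve (I - Q) h = 1, where Q is P restricted to non-target states.
Q = P[np.ix_(rest, rest)]
h = np.linalg.solve(np.eye(len(rest)) - Q, np.ones(len(rest)))
# h[0], h[1] are the expected numbers of steps to reach state 2
# starting from states 0 and 1 respectively.
```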
This monograph provides a self-contained and easy-to-read introduction to non-commutative multiple-valued logic algebras, a subject which has attracted much interest in the past few years because of its impact on information science, artificial intelligence and other subjects.
This volume reviews and summarizes some of A. I. McLeod's significant contributions to time series analysis. It also contains original contributions to the field and to related areas by participants of the festschrift held in June 2014 and friends of Dr. McLeod. Covering a diverse range of state-of-the-art topics, this volume well balances applied and theoretical research across fourteen contributions by experts in the field. It will be of interest to researchers and practitioners in time series, econometricians, and graduate students in time series or econometrics, as well as environmental statisticians, data scientists, statisticians interested in graphical models, and researchers in quantitative risk management.
Stochastic Orders in Reliability and Risk Management is composed of 19 contributions on the theory of stochastic orders, stochastic comparison of order statistics, stochastic orders in reliability and risk analysis, and applications. These review/exploratory chapters present recent and current research on stochastic orders reported at the International Workshop on Stochastic Orders in Reliability and Risk Management, or SORR2011, which took place in the City Hotel, Xiamen, China, from June 27 to June 29, 2011. The conference's talks and invited contributions also represent the celebration of Professor Moshe Shaked, who has made comprehensive, fundamental contributions to the theory of stochastic orders and its applications in reliability, queueing modeling, operations research, economics and risk analysis. This volume is in honor of Professor Moshe Shaked. The work presented in this volume represents active research on stochastic orders and multivariate dependence, and exemplifies close collaborations between scholars working in different fields. The Xiamen Workshop and this volume seek to revive the community workshop tradition on stochastic orders and dependence and strengthen research collaboration, while honoring the work of a distinguished scholar.
This book offers a comprehensive and systematic introduction to the latest research on hesitant fuzzy decision-making theory. It includes six parts: the hesitant fuzzy set and its extensions, novel hesitant fuzzy measures, hesitant fuzzy hybrid weighted aggregation operators, hesitant fuzzy multiple-criteria decision-making with incomplete weights, hesitant fuzzy multiple criteria decision-making with complete weights information, and the hesitant fuzzy preference relation based decision-making theory. These methodologies are implemented in various fields such as decision-making, medical diagnosis, cluster analysis, service quality management, e-learning management and environmental management. A valuable resource for engineers, technicians, and researchers in the fields of fuzzy mathematics, operations research, information science, management science and engineering, it can also be used as a textbook for postgraduate and senior undergraduate students.
This is a collection of papers by participants at the High Dimensional Probability VI Meeting, held from October 9-14, 2011 at the Banff International Research Station in Banff, Alberta, Canada. High Dimensional Probability (HDP) is an area of mathematics that includes the study of probability distributions and limit theorems in infinite-dimensional spaces such as Hilbert spaces and Banach spaces. The most remarkable feature of this area is that it has resulted in the creation of powerful new tools and perspectives, whose range of application has led to interactions with other areas of mathematics, statistics, and computer science. These include random matrix theory, nonparametric statistics, empirical process theory, statistical learning theory, concentration of measure phenomena, strong and weak approximations, distribution function estimation in high dimensions, combinatorial optimization, and random graph theory. The papers in this volume show that HDP theory continues to develop new tools, methods, techniques and perspectives to analyze random phenomena. Both researchers and advanced students will find this book of great use for learning about new avenues of research.
This is a unique book addressing the integration of risk methodology from various fields. It will stimulate intellectual debate and communication across disciplines, promote better risk management practices and contribute to the development of risk management methodologies. Individual chapters explain fundamental risk models and measurement, and address risk and security issues from diverse areas such as finance and insurance, the health sciences, life sciences, engineering and information science. Integrated Risk Sciences is an emerging discipline that considers risks in different fields, aiming at a common language, and at sharing and improving methods developed in different fields. Readers should have a Bachelor's degree and have taken at least one basic university course in statistics and probability. The main goal of the book is to provide basic knowledge on risk and security in a common language; the authors have taken particular care to ensure that all content can readily be understood by doctoral students and researchers across disciplines. Each chapter provides simple case studies and examples, open research questions and discussion points, and a selected bibliography inviting readers to further study.