This monograph presents methods for full comparative distributional analysis based on the relative distribution. This provides a general integrated framework for analysis, a graphical component that simplifies exploratory data analysis and display, a statistically valid basis for the development of hypothesis-driven summary measures, and the potential for decomposition - enabling the examination of complex hypotheses regarding the origins of distributional changes within and between groups. Written for data analysts and those interested in measurement, the text can also serve as a textbook for a course on distributional methods.
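As a concrete, deliberately minimal illustration of the relative distribution itself (not taken from the book), the following sketch grades one sample against the empirical CDF of a reference sample; the normal and shifted-normal samples and their sizes are invented for the example.

```python
import numpy as np

def relative_data(comparison, reference):
    """Grade each comparison observation by the empirical CDF of the reference.

    If the two groups have the same distribution, the returned values are
    approximately uniform on [0, 1]; departures from uniformity describe how
    the comparison group differs from the reference group.
    """
    reference = np.sort(np.asarray(reference))
    ranks = np.searchsorted(reference, comparison, side="right")
    return ranks / reference.size

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5000)
comparison = rng.normal(loc=0.3, scale=1.4, size=5000)   # shifted and more spread out

r = relative_data(comparison, reference)
# A histogram of r on [0, 1] estimates the relative density; mass piling up
# near 0 and 1 here reflects the larger spread of the comparison group.
hist, _ = np.histogram(r, bins=10, range=(0.0, 1.0), density=True)
print(np.round(hist, 2))
```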
This text takes readers in a clear and progressive format from simple to recent and advanced topics in pure and applied probability, such as contraction and annealed properties of non-linear semi-groups, functional entropy inequalities, empirical process convergence, increasing propagations of chaos, central limit and Berry-Esseen type theorems, as well as large deviation principles for strong topologies on path-distribution spaces. Topics also include a body of powerful branching and interacting particle methods.
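The best-known practical instance of such interacting particle methods is the particle filter. The sketch below is a generic bootstrap particle filter for an invented linear-Gaussian state-space model; it is offered only as a hedged illustration of the mutation/selection mechanics, not as a reproduction of the book's algorithms.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented linear-Gaussian state-space model for the demonstration:
#   X_t = 0.9 X_{t-1} + 0.5 N(0,1),   Y_t = X_t + N(0,1)
T, N = 50, 2000
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.9 * x[t - 1] + 0.5 * rng.normal()
y = x + rng.normal(size=T)

# Bootstrap particle filter: mutate, weight by the likelihood, resample.
particles = rng.normal(size=N)
filtered_mean = np.zeros(T)
for t in range(T):
    particles = 0.9 * particles + 0.5 * rng.normal(size=N)   # mutation step
    logw = -0.5 * (y[t] - particles) ** 2                    # selection weights
    w = np.exp(logw - logw.max())
    w /= w.sum()
    filtered_mean[t] = np.sum(w * particles)
    particles = particles[rng.choice(N, size=N, p=w)]        # multinomial resampling

print(np.round(filtered_mean[-5:], 2))
print(np.round(x[-5:], 2))   # the filtered means track the hidden states
```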
Trees are a fundamental object in graph theory and combinatorics as well as a basic object for data structures and algorithms in computer science. During the last years research related to (random) trees has been constantly increasing, and several asymptotic and probabilistic techniques have been developed in order to describe characteristics of interest of large trees in different settings. The purpose of this book is to provide a thorough introduction into various aspects of trees in random settings and a systematic treatment of the involved mathematical techniques. It should serve as a reference book as well as a basis for future research. One major conceptual aspect is to connect combinatorial and probabilistic methods that range from counting techniques (generating functions, bijections) over asymptotic methods (singularity analysis, saddle point techniques) to various sophisticated techniques in asymptotic probability (convergence of stochastic processes, martingales). However, the reading of the book requires only basic knowledge in combinatorics, complex analysis, functional analysis and probability theory at master's degree level. It is also part of the concept of the book to provide full proofs of the major results, even if they are technically involved and lengthy.
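To illustrate the counting side mentioned above (generating functions), the following sketch uses the convolution recurrence encoded by B(z) = 1 + z*B(z)^2 to count binary trees by number of internal nodes and checks the result against the Catalan closed form; it is an illustrative aside, not material from the book.

```python
from math import comb

def binary_tree_counts(n_max):
    """Coefficients of B(z) = 1 + z*B(z)^2: numbers of binary trees by size.

    The convolution b[n+1] = sum_k b[k] * b[n-k] is exactly the recurrence
    that the generating-function equation encodes.
    """
    b = [1]                       # b[0] = 1: the empty tree
    for n in range(n_max):
        b.append(sum(b[k] * b[n - k] for k in range(n + 1)))
    return b

counts = binary_tree_counts(10)
catalan = [comb(2 * n, n) // (n + 1) for n in range(11)]
print(counts)
print(counts == catalan)          # True: both are the Catalan numbers
```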
Stochastic analysis is a field of mathematical research having numerous interactions with other domains of mathematics such as partial differential equations, Riemannian path spaces, dynamical systems, and optimization. It also has many links with applications in engineering, finance, quantum physics, and other fields. This book covers recent and diverse aspects of stochastic and infinite-dimensional analysis. The included papers are written from a variety of standpoints (white noise analysis, Malliavin calculus, quantum stochastic calculus) by the contributors, and provide broad coverage of the subject. This volume will be useful to graduate students and research mathematicians wishing to get acquainted with recent developments in the field of stochastic analysis.
Survival data, or more general time-to-event data, occur in many areas, including medicine, biology, engineering, economics, and demography, but standard methods have traditionally required that all time variables be univariate and independent. This book extends the field by allowing for multivariate times. Applications where such data appear are survival of twins, survival of married couples and families, time to failure of the right and left kidneys of diabetic patients, life history data with time to outbreak of disease, complications and death, recurrent episodes of diseases, and cross-over studies with time responses. As the field is rather new, the concepts and the possible types of data are described in detail, and basic aspects of how dependence can appear in such data are discussed. Four different approaches to the analysis of such data are presented. The multi-state models, in which a life history is described as the subject moving from state to state, are the most classical approach. The Markov models make up an important special case, but it is also described how easily more general models are set up and analyzed. Frailty models, which are random effects models for survival data, make up a second approach, extending from the simplest shared frailty models, which are considered in detail, to models with more complicated dependence structures over individuals or over time. Marginal modelling has become a popular approach to evaluate the effect of explanatory factors in the presence of dependence, but without having to specify a statistical model for the dependence. Finally, the completely non-parametric approach to bivariate censored survival data is described. This book is aimed at investigators who need to analyze multivariate survival data, but due to its focus on concepts and modelling aspects, it is also useful for persons interested in such data who do not have a statistical education. It can be used as a textbook for a graduate course in multivariate survival data. It is written from an applied point of view and covers all essential aspects of applying multivariate survival models. More theoretical topics, such as asymptotic theory, are also described, but only to the extent useful in applications and for understanding the models. For reading the book, it is useful, but not necessary, to have an understanding of univariate survival data. Philip Hougaard is a statistician at the pharmaceutical company Novo Nordisk. He has a Ph.D. in nonlinear regression models and is Doctor of Science based on a thesis on frailty models. He is associate editor of Biometrics and Lifetime Data Analysis. He has published over 80 papers in the statistical and medical literature.
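The shared frailty idea can be made concrete with a few lines of simulation: a common gamma-distributed random effect multiplies both hazards and induces positive dependence between the two lifetimes. The hazards, the frailty variance theta, and the Kendall's-tau check (theta/(theta + 2) for the gamma frailty model) are standard textbook choices, not specifics of this book.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(2)
n, theta = 10_000, 1.0            # theta = frailty variance (illustrative value)

# Shared gamma frailty with mean 1 and variance theta.
frailty = rng.gamma(shape=1.0 / theta, scale=theta, size=n)

# Given the frailty Z, the two lifetimes are independent exponentials with
# hazards Z * 0.5 and Z * 0.8; marginally they are positively dependent.
t1 = rng.exponential(scale=1.0 / (0.5 * frailty))
t2 = rng.exponential(scale=1.0 / (0.8 * frailty))

tau, _ = kendalltau(t1, t2)
print(round(tau, 3), "vs theoretical", round(theta / (theta + 2), 3))   # both near 1/3
```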
Copulas are functions that join multivariate distribution functions to their one-dimensional margins. The study of copulas and their role in statistics is a new but vigorously growing field. In this book the student or practitioner of statistics and probability will find discussions of the fundamental properties of copulas and some of their primary applications. The applications include the study of dependence and measures of association, and the construction of families of bivariate distributions. With nearly a hundred examples and over 150 exercises, this book is suitable as a text or for self-study. The only prerequisite is an upper level undergraduate course in probability and mathematical statistics, although some familiarity with nonparametric statistics would be useful. Knowledge of measure-theoretic probability is not required. Roger B. Nelsen is Professor of Mathematics at Lewis & Clark College in Portland, Oregon. He is also the author of "Proofs Without Words: Exercises in Visual Thinking," published by the Mathematical Association of America.
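A minimal sketch of the copula construction, assuming a Gaussian copula and arbitrarily chosen exponential and gamma margins (the choices are illustrative, not from the book): correlated normals are mapped to uniforms, and the uniforms are pushed through the inverse margin CDFs, giving a bivariate distribution whose dependence comes entirely from the copula.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
rho, n = 0.7, 10_000

# Sample from a Gaussian copula: correlated normals mapped to uniforms by Phi.
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]], size=n)
u = stats.norm.cdf(z)                      # columns are uniform; jointly, the copula

# Sklar's theorem in reverse: plug the copula sample into chosen margins.
x = stats.expon.ppf(u[:, 0], scale=2.0)    # exponential margin (illustrative)
y = stats.gamma.ppf(u[:, 1], a=3.0)        # gamma margin (illustrative)

# The margins are exactly exponential and gamma, while the rank dependence
# (e.g. Spearman's rho) is inherited from the copula, not from the margins.
rho_s, _ = stats.spearmanr(x, y)
print(round(rho_s, 3))
```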
This contributed volume comprises research articles and reviews on topics connected to the mathematical modeling of cellular systems. These contributions cover signaling pathways, stochastic effects, cell motility and mechanics, pattern formation processes, as well as multi-scale approaches. All authors attended the workshop on "Modeling Cellular Systems" which took place in Heidelberg in October 2014. The target audience primarily comprises researchers and experts in the field, but the book may also be beneficial for graduate students.
The purpose of this book is to honor the fundamental contributions to many different areas of statistics made by Barry Arnold. Distinguished and active researchers highlight some of the recent developments in statistical distribution theory, order statistics and their properties, as well as inferential methods associated with them. Applications to survival analysis, reliability, quality control, and environmental problems are emphasized.
This book collects contributions written by well-known statisticians and econometricians to acknowledge Leopold Simar's far-reaching scientific impact on Statistics and Econometrics throughout his career. The papers contained herein were presented at a conference held in his honour.
Statistical Analysis of Observations of Increasing Dimension is devoted to the investigation of the limit distribution of the empirical generalized variance, covariance matrices, their eigenvalues, and solutions of systems of linear algebraic equations with random coefficients, which are important functions of observations in multidimensional statistical analysis. A general statistical analysis is developed in which observed random vectors may not have a density and their components may have an arbitrary dependence structure. The methods of this theory have very important advantages in comparison with existing methods of statistical processing. The results have applications in nuclear and statistical physics, in multivariate statistical analysis, in the theory of stability of solutions of stochastic differential equations, in the control theory of linear stochastic systems, in linear stochastic programming, and in the theory of experiment planning.
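A small numerical illustration of the increasing-dimension effect such a theory addresses (with invented parameters): when the dimension p grows in proportion to the sample size n, the eigenvalues of the sample covariance matrix spread out even though the population covariance is the identity. The classical Marchenko-Pastur support edges are printed for comparison.

```python
import numpy as np

rng = np.random.default_rng(4)

for n, p in [(200, 20), (200, 100), (400, 200)]:
    X = rng.normal(size=(n, p))           # population covariance = identity
    S = X.T @ X / n                       # sample covariance matrix
    eig = np.linalg.eigvalsh(S)
    c = p / n
    lo, hi = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2   # Marchenko-Pastur support
    print(f"p/n={c:.2f}  sample eigenvalues in ({eig.min():.2f}, {eig.max():.2f}), "
          f"MP edges ({lo:.2f}, {hi:.2f})")
```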
Non-parametric methods are widely used for studying populations that take on a ranked order (such as movie reviews receiving one to four stars). The use of non-parametric methods may be necessary when data have a ranking but no clear numerical interpretation, such as when assessing preferences. In terms of levels of measurement, non-parametric methods are suited to "ordinal" data. As non-parametric methods make fewer assumptions, their applicability is much wider than that of the corresponding parametric methods. In particular, they may be applied in situations where less is known about the application in question. Also, due to their reliance on fewer assumptions, non-parametric methods are more robust. Non-parametric methods have many popular applications, and are widely used in research in the behavioral sciences and biomedicine. This is a textbook on non-parametric statistics for applied research. The authors use a realistic yet mostly fictional situation and a series of dialogues to illustrate in detail the statistical processes required to complete data analysis. The book draws on a reader's existing elementary knowledge of statistical analyses to broaden his or her research capabilities. The material is covered in such a way that someone with very limited knowledge of statistics can read and understand the concepts detailed in the text. The "real world" scenario involves a multidisciplinary team of behavioral, medical, crime analysis, and policy analysis professionals working together to answer specific empirical questions regarding real-world applied problems. The reader is introduced to the team and the data set, and through the course of the text follows the team as they progress through the decision-making process of narrowing the data and the research questions to answer the applied problem. In this way, abstract statistical concepts are translated into concrete and specific language. The text uses one data set from which all examples are taken. This is radically different from other statistics books, which provide a varied array of examples and data sets. Using only one data set facilitates reader-directed teaching and learning by providing multiple research questions which are integrated, rather than using disparate examples and completely unrelated research questions and data.
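As a minimal example of the kind of rank-based comparison such a textbook covers (the ratings below are made up, and the choice of test is ours, not the authors'): the Mann-Whitney/Wilcoxon rank-sum test compares two groups of ordinal ratings using only their ranks.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Ordinal ratings (e.g. one to four "stars") for two hypothetical groups; the
# values carry rank order but no numeric scale, so a rank test is natural.
group_a = np.array([1, 2, 2, 3, 3, 3, 4, 4, 2, 3])
group_b = np.array([1, 1, 2, 2, 2, 3, 3, 1, 2, 2])

stat, pvalue = mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {stat:.1f}, p = {pvalue:.3f}")   # uses only the ranks and handles ties
```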
Numerical methods in finance have emerged as a vital field at the crossroads of probability theory, finance and numerical analysis. Based on presentations given at the workshop Numerical Methods in Finance held at INRIA Bordeaux (France) on June 1-2, 2010, this book provides an overview of the major new advances in the numerical treatment of instruments with American exercises. Naturally it covers the most recent research on the mathematical theory and the practical applications of optimal stopping problems as they relate to financial applications. By extension, it also provides an original treatment of Monte Carlo methods for the recursive computation of conditional expectations and solutions of BSDEs and generalized multiple optimal stopping problems and their applications to the valuation of energy derivatives and assets. The articles were carefully written in a pedagogical style and a reasonably self-contained manner. The book is geared toward quantitative analysts, probabilists, and applied mathematicians interested in financial applications.
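One standard regression-based Monte Carlo method for such optimal stopping problems is least-squares Monte Carlo (Longstaff-Schwartz). The sketch below prices a Bermudan put under geometric Brownian motion with invented parameters; it illustrates the recursive computation of conditional expectations by regression and is not a reproduction of any article in the volume.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative parameters: Bermudan put with strike K on a GBM underlying.
S0, K, r, sigma, T, steps, n_paths = 100.0, 100.0, 0.05, 0.2, 1.0, 50, 50_000
dt = T / steps
disc = np.exp(-r * dt)

# Simulate geometric Brownian motion paths (exercise dates dt, 2*dt, ..., T).
z = rng.normal(size=(n_paths, steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))

payoff = np.maximum(K - S, 0.0)
value = payoff[:, -1].copy()                # cash flow if held to maturity

# Backward induction: regress realized discounted cash flows on the current
# price (in-the-money paths only) to estimate the continuation value.
for t in range(steps - 2, -1, -1):
    value *= disc
    itm = payoff[:, t] > 0
    if itm.any():
        coeffs = np.polyfit(S[itm, t], value[itm], deg=2)
        continuation = np.polyval(coeffs, S[itm, t])
        exercise = np.where(itm)[0][payoff[itm, t] > continuation]
        value[exercise] = payoff[exercise, t]

price = disc * value.mean()                 # discount from the first exercise date
print(round(price, 3))
```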
This book deals with the theory and the applications of a new time domain, termed the natural time domain, that was put forward by the authors almost a decade ago (P.A. Varotsos, N.V. Sarlis and E.S. Skordas, Practica of Athens Academy 76, 294-321, 2001; Physical Review E 66, 011902, 2002). In particular, it has been found that novel dynamical features hidden behind time series in complex systems can emerge upon analyzing them in this new time domain, which conforms to the desire to reduce uncertainty and extract as much signal information as possible. Analysis in natural time enables the study of the dynamical evolution of a complex system and identifies when the system enters a critical stage. Hence, natural time plays a key role in predicting impending catastrophic events in general. Relevant examples of data analysis in this new time domain have been published during the last decade in a large variety of fields, e.g., Earth Sciences, Biology and Physics. The book explains in detail a series of such examples, including the identification of sudden cardiac death risk in Cardiology, the recognition of electric signals that precede earthquakes, the determination of the time of an impending major mainshock in Seismology, and the analysis of the avalanches of the penetration of magnetic flux into thin films of type II superconductors in Condensed Matter Physics. In general, this book is concerned with the time-series analysis of signals emitted from complex systems by means of the new time domain and provides advanced students and research workers in diverse fields with a sound grounding in the fundamentals of current research work on detecting (long-range) correlations in complex time series. Furthermore, the modern techniques of Statistical Physics in time series analysis, for example Hurst analysis, detrended fluctuation analysis, the wavelet transform, etc., are presented along with their advantages when the natural time domain is employed.
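The basic natural time quantities are simple to compute. The sketch below follows the definitions from the natural time literature cited above (chi_k = k/N, p_k = Q_k / sum Q) and evaluates the variance kappa_1 for a synthetic series of event "energies"; only the definitions are taken from that literature, and the data are invented.

```python
import numpy as np

def natural_time_kappa1(energies):
    """Variance of natural time: kappa_1 = <chi^2>_p - <chi>_p^2.

    chi_k = k/N is the natural time of the k-th event and p_k = Q_k / sum(Q)
    is its normalized energy; values of kappa_1 near 0.070 are reported in
    the natural time literature as the signature of criticality.
    """
    q = np.asarray(energies, dtype=float)
    chi = np.arange(1, q.size + 1) / q.size
    p = q / q.sum()
    return np.sum(p * chi**2) - np.sum(p * chi) ** 2

rng = np.random.default_rng(6)
events = rng.exponential(scale=1.0, size=200)      # synthetic event "energies"
print(round(natural_time_kappa1(events), 4))       # around 1/12 for featureless data
```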
The book addresses the problem of calculation of d-dimensional integrals (conditional expectations) in filter problems. It develops new methods of deterministic numerical integration, which can be used to speed up and stabilize filter algorithms. With the help of these methods, better estimates and predictions of latent variables are made possible in the fields of economics, engineering and physics. The resulting procedures are tested within four detailed simulation studies.
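A hedged, minimal illustration of the deterministic-integration idea (not the book's procedures): a Gauss-Hermite quadrature rule replaces Monte Carlo for a one-dimensional Gaussian expectation of the kind that appears in filter updates. The integrand and parameters are invented.

```python
import numpy as np

def gauss_hermite_expectation(f, mu, sigma, order=20):
    """Deterministic approximation of E[f(X)] for X ~ N(mu, sigma^2).

    Gauss-Hermite nodes/weights integrate against exp(-t^2); the change of
    variables x = mu + sqrt(2)*sigma*t rescales them to the normal density.
    """
    t, w = np.polynomial.hermite.hermgauss(order)
    x = mu + np.sqrt(2.0) * sigma * t
    return np.sum(w * f(x)) / np.sqrt(np.pi)

f = lambda x: np.maximum(x, 0.0)          # an example nonlinear integrand
mu, sigma = 0.5, 1.3

quad = gauss_hermite_expectation(f, mu, sigma)
mc = f(np.random.default_rng(7).normal(mu, sigma, size=200_000)).mean()
print(round(quad, 4), round(mc, 4))       # the 20-point rule is close to Monte Carlo
```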
Praise for the Second Edition "Statistics for Research has other fine qualities besides superior organization. The examples and the statistical methods are laid out with unusual clarity by the simple device of using special formats for each. The book was written with great care and is extremely user-friendly."--The UMAP Journal Although the goals and procedures of statistical research have changed little since the Second Edition of Statistics for Research was published, the almost universal availability of personal computers and statistical computing application packages has made it possible for today's statisticians to do more in less time than ever before. The Third Edition of this bestselling text reflects how the changes in the computing environment have transformed the way statistical analyses are performed today. Based on extensive input from university statistics departments throughout the country, the authors have made several important and timely revisions, including: additional material on probability early in the text; new sections on odds ratios, ratio and difference estimation, repeated measures analysis, and logistic regression; new examples and exercises, many from the field of the health sciences; printouts of computer analyses for all complex procedures; and an accompanying Web site illustrating how to use SAS(R) and JMP(R) for all procedures. The text features the most commonly used statistical techniques for the analysis of research data. As in the earlier editions, emphasis is placed on how to select the proper statistical procedure and how to interpret results. Whenever possible, to avoid using the computer as a "black box" that performs a mysterious process on the data, actual computational procedures are also given. A must for scientists who analyze data, professionals and researchers who need a self-teaching text, and graduate students in statistical methods, Statistics for Research, Third Edition brings the methodology up to date in a very practical and accessible way.
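As a tiny illustration of one of the newly added topics (odds ratios), with a made-up 2x2 table and the usual Wald interval, not an example from the book:

```python
import numpy as np

# Hypothetical 2x2 table: rows = exposed/unexposed, columns = event/no event.
a, b = 30, 70     # exposed:   30 events, 70 non-events
c, d = 15, 85     # unexposed: 15 events, 85 non-events

odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)        # Wald SE of log(OR)
ci = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```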
The purpose of this volume is to provide an overview of Terry Speed's contributions to statistics and beyond. Each of the fifteen chapters concerns a particular area of research and consists of a commentary by a subject-matter expert and a selection of representative papers. The chapters, organized more or less chronologically in terms of Terry's career, encompass a wide variety of mathematical and statistical domains, along with their application to biology and medicine. Accordingly, earlier chapters tend to be more theoretical, covering some algebra and probability theory, while later chapters concern more recent work in genetics and genomics. The chapters also span continents and generations, as they present research done over four decades, while crisscrossing the globe. The commentaries provide insight into Terry's contributions to a particular area of research, by summarizing his work and describing its historical and scientific context, motivation, and impact. In addition to shedding light on Terry's scientific achievements, the commentaries reveal endearing aspects of his personality, such as his intellectual curiosity, energy, humor, and generosity.
Since the beginning of the seventies, computer hardware has been available for using programmable computers for various tasks. During the nineties the hardware developed from the big mainframes to personal workstations. Nowadays it is not only the hardware which is much more powerful: compared to the seventies, workstations can do much more work than a mainframe could. In parallel we find a specialization in the software. Languages like COBOL for business-oriented programming or Fortran for scientific computing only marked the beginning. The introduction of personal computers in the eighties gave new impulses for even further development; already at the beginning of the seventies some special languages like SAS or SPSS were available for statisticians. Now that personal computers have become very popular, the number of programs has started to explode. Today we find a wide variety of programs for almost any statistical purpose (Koch & Haag 1995).
A comprehensive account of the statistical theory of exponential families of stochastic processes. The book reviews the progress in the field made over the last ten years or so by the authors - two of the leading experts in the field - and several other researchers. The theory is applied to a broad spectrum of examples, covering a large number of frequently applied stochastic process models with discrete as well as continuous time. To make the reading even easier for statisticians with only a basic background in the theory of stochastic processes, the first part of the book is based on classical theory of stochastic processes only, while stochastic calculus is used later. Most of the concepts and tools from stochastic calculus needed when working with inference for stochastic processes are introduced and explained without proof in an appendix. This appendix can also be used independently as an introduction to stochastic calculus for statisticians. Numerous exercises are also included.
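Perhaps the simplest exponential family of stochastic processes is the homogeneous Poisson process: its likelihood on [0, T] is proportional to lambda^{N_T} e^{-lambda T}, an exponential family in lambda with canonical statistic N_T and maximum likelihood estimate N_T / T. The sketch below simulates such a process and computes the estimate; it is a generic illustration, not an example taken from the book.

```python
import numpy as np

rng = np.random.default_rng(8)
lam_true, T = 2.5, 400.0

# Simulate a homogeneous Poisson process on [0, T] via exponential gaps.
gaps = rng.exponential(scale=1.0 / lam_true, size=int(3 * lam_true * T))
times = np.cumsum(gaps)
times = times[times <= T]

# The likelihood is proportional to lam**N_T * exp(-lam * T): an exponential
# family in lam with canonical statistic N_T, maximized at lam_hat = N_T / T.
lam_hat = times.size / T
print(f"MLE = {lam_hat:.3f} (true rate {lam_true})")
```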
Experience gained during a ten-year-long involvement in modelling, programming and application in nonlinear optimization helped me to arrive at the conclusion that, in the interest of having successful applications and efficient software production, knowing the structure of the problem to be solved is indispensable. This is the reason why I have chosen the field in question as the sphere of my research. Since in applications, mainly from among the nonconvex optimization models, the differentiable ones proved to be the most efficient in modelling, especially in solving them with computers, I started to deal with the structure of smooth optimization problems. The book, which is a result of more than a decade of research, can be equally useful for researchers and students showing interest in the domain, since the elementary notions necessary for understanding the book constitute a part of the university curriculum. I intended to deal with the key questions of optimization theory, an endeavour which, obviously, cannot bear all the marks of completeness. What I consider the most crucial point is the uniform, differential geometric treatment of various questions, which provides the reader with opportunities for learning the structure within a wide range of optimization problems. I am grateful to my family for affording me tranquil, productive circumstances. I express my gratitude to F.
'Et moi, ..., si j'avais su comment en revenir, je n'y serais point allé.' ('And I, ..., had I known how to come back, I would never have gone.') - Jules Verne. 'One service mathematics has rendered the human race. It has put common sense back where it belongs, on the topmost shelf next to the dusty canister labelled "discarded nonsense".' - Eric T. Bell. 'The series is divergent; therefore we may be able to do something with it.' - O. Heaviside. Mathematics is a tool for thought. A highly necessary tool in a world where both feedback and non-linearities abound. Similarly, all kinds of parts of mathematics serve as tools for other parts and for other sciences. Applying a simple rewriting rule to the quotation from Eric T. Bell above, one finds such statements as: 'One service topology has rendered mathematical physics ...'; 'One service logic has rendered computer science ...'; 'One service category theory has rendered mathematics ...'. All arguably true. And all statements obtainable this way form part of the raison d'etre of this series.
Geostatistics Rio 2000 includes fifteen contributions, five of which are on applications in petroleum science and ten on mining geostatistics. These contributions were presented at the 31st International Geological Congress, held in Rio de Janeiro, Brazil, from 6-17 August 2000. Stochastic simulation was the key theme of these case studies. A wide range of methods was used: truncated Gaussian and plurigaussian, SIS and SGS, Boolean methods and multi-point attractors. The five contributions on petroleum science focus on different aspects of reservoir characterisation. All use stochastic simulations to generate 3D numerical models that reproduce the key features of reservoirs. Five of the ten contributions on mining present ore-body simulations; the others address questions like reconciling reserve estimates with production figures. Audience: The volume will be of value to scientists, researchers, and professionals in geology, mining engineering, petroleum engineering, mathematics and statistics, as well as those working for mining and oil companies.
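As a hedged illustration of the simplest building block behind Gaussian simulation methods such as SGS (all parameters invented), the sketch below draws unconditional realisations of a one-dimensional Gaussian random field with an exponential covariance model via a Cholesky factorisation.

```python
import numpy as np

rng = np.random.default_rng(9)

# Grid and exponential covariance model C(h) = sill * exp(-|h| / range).
x = np.linspace(0.0, 100.0, 200)
sill, corr_range = 1.0, 15.0
h = np.abs(x[:, None] - x[None, :])
cov = sill * np.exp(-h / corr_range)

# Unconditional simulation: multiply the Cholesky factor by white noise.
L = np.linalg.cholesky(cov + 1e-10 * np.eye(x.size))   # jitter for numerical stability
realisations = L @ rng.normal(size=(x.size, 3))        # three independent realisations

print(realisations.shape)                 # (200, 3)
print(round(realisations.var(), 2))       # empirical variance fluctuates around the sill
```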
..".the text is user friendly to the topics it considers and should be very accessible...Instructors and students of statistical measure theoretic courses will appreciate the numerous informative exercises; helpful hints or solution outlines are given with many of the problems. All in all, the text should make a useful reference for professionals and students."-The Journal of the American Statistical Association
This volume contains twenty-eight refereed research or review papers presented at the 5th Seminar on Stochastic Processes, Random Fields and Applications, which took place at the Centro Stefano Franscini (Monte Verita) in Ascona, Switzerland, from May 30 to June 3, 2005. The seminar focused mainly on stochastic partial differential equations, random dynamical systems, infinite-dimensional analysis, approximation problems, and financial engineering. The book will be a valuable resource for researchers in stochastic analysis and professionals interested in stochastic methods in finance.
This volume features selected and peer-reviewed articles from the Pan-American Advanced Studies Institute (PASI). The chapters are written by international specialists who participated in the conference. Topics include developments based on breakthroughs in the mathematical understanding of phenomena describing systems in highly inhomogeneous and disordered media, including the KPZ universality class (describing the evolution of interfaces in two dimensions), random walks in random environment and percolative systems. PASI fosters a collaboration between North American and Latin American researchers and students. The conference that inspired this volume took place in January 2012 in both Santiago de Chile and Buenos Aires. Researchers and graduate students will find timely research in probability theory, statistical physics and related disciplines.
In recent years, there has been an upsurge of interest in using techniques drawn from probability to tackle problems in analysis. These applications arise in subjects such as potential theory, harmonic analysis, singular integrals, and the study of analytic functions. This book presents a modern survey of these methods at the level of a beginning Ph.D. student. Highlights of this book include the construction of the Martin boundary, probabilistic proofs of the boundary Harnack principle, Dahlberg's theorem, a probabilistic proof of Riesz' theorem on the Hilbert transform, and Makarov's theorems on the support of harmonic measure. The author assumes that a reader has some background in basic real analysis, but the book includes proofs of all the results from probability theory and advanced analysis required. Each chapter concludes with exercises ranging from the routine to the difficult. In addition, discussions of open problems and further avenues of research are included.
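The bridge between probability and analysis can be made concrete in a few lines: by Kakutani's classical connection, the harmonic extension of boundary data f can be estimated as the expected value of f at the exit point of a random walk. The domain, boundary data, and grid below are invented for the illustration and are not examples from the book.

```python
import numpy as np

rng = np.random.default_rng(10)

def harmonic_via_walks(start, f, n_grid=30, n_walks=5000):
    """Estimate u(start) = E[f(exit point)] on the unit square.

    Simple random walks on an n_grid x n_grid lattice run until they leave
    the square; averaging the boundary data f at the exit points estimates
    the harmonic extension of f (a discrete Dirichlet problem).
    """
    h = 1.0 / n_grid
    moves = np.array([[h, 0.0], [-h, 0.0], [0.0, h], [0.0, -h]])
    pos = np.tile(np.asarray(start, dtype=float), (n_walks, 1))
    alive = np.ones(n_walks, dtype=bool)
    total = 0.0
    while alive.any():
        pos[alive] += moves[rng.integers(0, 4, size=alive.sum())]
        inside = (pos[:, 0] > 0) & (pos[:, 0] < 1) & (pos[:, 1] > 0) & (pos[:, 1] < 1)
        exited = alive & ~inside
        total += f(pos[exited, 0], pos[exited, 1]).sum()
        alive &= inside
    return total / n_walks

f = lambda x, y: x**2 - y**2            # boundary values of a known harmonic function
estimate = harmonic_via_walks((0.3, 0.6), f)
print(round(estimate, 2), "vs exact", round(0.3**2 - 0.6**2, 2))   # both near -0.27
```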