These notes are based on lectures presented during the seminar on "Asymptotic Statistics" held at Schloß Reisensburg, Günzburg, May 29-June 5, 1988. They consist of two parts, the theory of asymptotic expansions in statistics and probabilistic aspects of the asymptotic distribution theory in nonparametric statistics. Our intention is to provide a comprehensive presentation of these two subjects, leading from elementary facts to the advanced theory and recent results. Prospects for further research are also included. We would like to thank all participants for their stimulating discussions and their interest in the subjects, which made lecturing very pleasant. Special thanks are due to H. Zimmer for her excellent typing. We would also like to take this opportunity to express our thanks to the Gesellschaft für mathematische Forschung and to the Deutsche Mathematiker-Vereinigung, especially to Professor G. Fischer, for the opportunity to present these lectures, and to the Birkhäuser Verlag for the publication of these lecture notes. R. Bhattacharya, M. Denker. Part I: Asymptotic Expansions in Statistics (Rabi Bhattacharya). 1. CRAMÉR-EDGEWORTH EXPANSIONS. Let $Q$ be a probability measure on $(\mathbb{R}^k, \mathcal{B}^k)$, $\mathcal{B}^k$ denoting the Borel sigma-field on $\mathbb{R}^k$. Assume that the $s$-th absolute moment of $Q$ is finite, (1.1) $\rho_s := \int \|x\|^s \, Q(dx) < \infty$ for some integer $s \ge 3$, and that $Q$ is normalized, (1.2) $\int x^{(i)}\,Q(dx) = 0 \ (1 \le i \le k)$, $\int x^{(i)} x^{(j)}\,Q(dx) = \delta_{ij} \ (1 \le i,j \le k)$.
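To fix ideas, here is the shape such an expansion takes in the simplest one-dimensional case (standard material, stated for orientation rather than quoted from the notes): for i.i.d. variables $X_1, X_2, \dots$ with mean 0, variance 1, third moment $\mu_3$, and satisfying Cramér's condition,

$$ P\!\left(\frac{X_1 + \dots + X_n}{\sqrt{n}} \le x\right) = \Phi(x) - \frac{\mu_3}{6\sqrt{n}}\,(x^2 - 1)\,\phi(x) + o\!\left(n^{-1/2}\right), $$

where $\Phi$ and $\phi$ are the standard normal distribution function and density; higher-order terms in powers of $n^{-1/2}$ refine the approximation when higher moments exist.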
After Karl Jöreskog's first presentation in 1970, Structural Equation Modelling (SEM) has become a main statistical tool in many fields of science. It is the standard approach to factor analytic and causal modelling in such diverse fields as sociology, education, psychology, economics, management and the medical sciences. In addition to the extension of its application area, Structural Equation Modelling also features a continual renewal and extension of its theoretical background. The sixteen contributions to this book, written by experts from many countries, present important new developments and interesting applications in Structural Equation Modelling. The book addresses methodologists and statisticians professionally dealing with Structural Equation Modelling, to enhance their knowledge of the types of models covered and the technical problems involved in their formulation. In addition, the book offers applied researchers new ideas about the use of Structural Equation Modelling in solving their problems. Finally, it addresses methodologists, mathematicians and applied researchers alike who simply want to update their knowledge of recent approaches in data analysis and mathematical modelling.
Bayesian Reliability presents modern methods and techniques for analyzing reliability data from a Bayesian perspective. The adoption and application of Bayesian methods in virtually all branches of science and engineering have significantly increased over the past few decades. This increase is largely due to advances in simulation-based computational tools for implementing Bayesian methods. The authors extensively use such tools throughout this book, focusing on assessing the reliability of components and systems, with particular attention to hierarchical models and models incorporating explanatory variables. Such models include failure time regression models, accelerated testing models, and degradation models. The authors pay special attention to Bayesian goodness-of-fit testing, model validation, reliability test design, and assurance test planning. Throughout the book, the authors use Markov chain Monte Carlo (MCMC) algorithms for implementing Bayesian analyses, algorithms that make the Bayesian approach to reliability computationally feasible and conceptually straightforward. This book is primarily a reference collection of modern Bayesian methods in reliability for use by reliability practitioners. There are more than 70 illustrative examples, most of which utilize real-world data. This book can also be used as a textbook for a course in reliability and contains more than 160 exercises. Noteworthy highlights include Bayesian approaches to each of these topics.
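For a flavor of the simulation-based tools the book relies on, here is a minimal random-walk Metropolis sketch, purely illustrative and not taken from the book: it samples the failure rate of exponential lifetimes under a non-conjugate lognormal prior, with the data, prior parameters, and proposal scale all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical failure times in hours (a stand-in for real field data).
times = rng.exponential(scale=100.0, size=20)

def log_post(lam):
    # Exponential likelihood with rate lam, lognormal(-4, 1) prior;
    # the prior is non-conjugate, which is where MCMC earns its keep.
    if lam <= 0:
        return -np.inf
    return (len(times) * np.log(lam) - lam * times.sum()
            - np.log(lam) - 0.5 * (np.log(lam) + 4.0) ** 2)

# Random-walk Metropolis: symmetric Gaussian proposal on lam,
# accepted with probability min(1, posterior ratio).
lam, chain = 0.01, []
for _ in range(20000):
    prop = lam + rng.normal(0.0, 0.002)
    if np.log(rng.uniform()) < log_post(prop) - log_post(lam):
        lam = prop
    chain.append(lam)

posterior = np.array(chain[5000:])  # drop burn-in
print(posterior.mean(), np.quantile(posterior, [0.025, 0.975]))
```

The draws after burn-in approximate the posterior of the failure rate, from which means, intervals, and derived reliability quantities can be read off directly.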
This title is now available from Walter de Gruyter; please see www.degruyter.com for more information. Limit theorems for semimartingales form the basis of the martingale approximation approach. The methods of martingale approximation addressed in this book pertain to estimates of the rate of convergence in the central limit theorem and in the invariance principle. Some applications of martingale approximation are illustrated by the analysis of U-statistics, rank statistics, statistics of exchangeable variables and stochastic exponential statistics. Simplified results of stochastic analysis are given for use in investigations of many applied problems, including mathematical statistics, financial mathematics, mathematical biology, industrial mathematics and engineering.
The book is devoted to new trends in random evolutions and their various applications to stochastic evolutionary systems (SES). Such new developments as analogues of Dynkin's formulae, boundary value problems, stochastic stability and optimal control of random evolutions, and stochastic evolutionary equations driven by martingale measures are considered. The book also contains such new trends in applied probability as stochastic models of financial and insurance mathematics in an incomplete market. In the classical Black-Scholes model of a (B, S)-market for securities prices, used to describe the evolution of bond and stock prices and of their derivatives (options, futures, forward contracts, etc.), it is assumed that the dynamics of bond and stock prices are governed by a linear differential equation and a linear stochastic differential equation, respectively, with interest rate, appreciation rate and volatility that are predictable processes. Likewise, in the Arrow-Debreu economy, the securities prices which support a Radner dynamic equilibrium are a combination of an Itô process and a random point process, with all coefficients and jumps being predictable processes.
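For reference, the constant-coefficient special case of the (B, S)-market dynamics mentioned above (a textbook formulation, not a quotation from this book) reads

$$ dB_t = r\,B_t\,dt, \qquad dS_t = S_t\,(\mu\,dt + \sigma\,dW_t), $$

where $r$ is the interest rate, $\mu$ the appreciation rate, $\sigma$ the volatility and $W$ a Wiener process; the book's setting lets $r$, $\mu$ and $\sigma$ be predictable processes rather than constants.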
Proceedings of the 4th Pannonian Symposium on Mathematical Statistics, Bad Tatzmannsdorf, Austria, 4-10 September 1983, Volume A.
In 1978 Edwin T. Jaynes and Myron Tribus initiated a series of workshops to exchange ideas and recent developments in technical aspects and applications of Bayesian probability theory. The first workshop was held at the University of Wyoming in 1981, organized by C.R. Smith and W.T. Grandy. Owing to its success, the workshop has been held annually during the last 18 years. Over the years, the emphasis of the workshop shifted gradually from fundamental concepts of Bayesian probability theory to increasingly realistic and challenging applications. The 18th International Workshop on Maximum Entropy and Bayesian Methods was held in Garching/Munich, Germany, 27-31 July 1998. Opening lectures by G. Larry Bretthorst and by Myron Tribus were dedicated to one of the pioneers of Bayesian probability theory, who died on 30 April 1998: Edwin Thompson Jaynes. Jaynes revealed and advocated the correct meaning of 'probability' as a state of knowledge rather than a physical property. This interpretation allowed him to unravel longstanding mysteries and paradoxes. Bayesian probability theory, "the logic of science" as E.T. Jaynes called it, provides the framework to make the best possible scientific inference given all available experimental and theoretical information. We gratefully acknowledge the efforts of Tribus and Bretthorst in commemorating the outstanding contributions of E.T. Jaynes to the development of probability theory.
The place in survival analysis now occupied by proportional hazards models and their generalizations is so large that it is no longer conceivable to offer a course on the subject without devoting at least half of the content to this topic alone. This book focuses on the theory and applications of a very broad class of models - proportional hazards and non-proportional hazards models, the former being viewed as a special case of the latter - which underlie modern survival analysis. Researchers and students alike will find that this text differs from most recent works in that it is mostly concerned with methodological issues rather than the analysis itself.
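For readers new to the area, the proportional hazards model referred to here specifies the hazard of a subject with covariate vector $x$ as (standard background, not a quotation from the text)

$$ \lambda(t \mid x) = \lambda_0(t)\,\exp\!\big(\beta^\top x\big), $$

with baseline hazard $\lambda_0$ and regression coefficients $\beta$; the non-proportional hazards generalizations the book treats allow the covariate effect to vary with time, $\beta = \beta(t)$, recovering the proportional model when $\beta(t)$ is constant.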
Linear regression is an important area of statistics, theoretical and applied. A large number of estimation methods have been proposed and developed for linear regression. Each has its own competitive edge, but none is good for all purposes. This manuscript focuses on the construction of an adaptive combination of two estimation methods. The purpose of such adaptive methods is to help users make an objective choice and to combine the desirable properties of the two estimators.
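As one illustrative way such a combination might work (a sketch under assumptions of my own, not the manuscript's actual procedure): fit two estimators, here ordinary least squares and a least-absolute-deviations fit, on one half of the data, weight them by their validation error on the other half, and report the convex combination.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data with heavy-tailed noise (hypothetical, for illustration only).
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=2, size=n)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

def lad(X, y, iters=50):
    # Iteratively reweighted least squares approximation to the
    # least-absolute-deviations fit (robust to heavy tails).
    b = ols(X, y)
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(y - X @ b), 1e-6)
        Xw = X * w[:, None]
        b = np.linalg.solve(X.T @ Xw, Xw.T @ y)
    return b

# Adaptive weight by split-sample validation: fit both on one half,
# score squared error on the other, and mix accordingly.
half = n // 2
b1, b2 = ols(X[:half], y[:half]), lad(X[:half], y[:half])
e1 = np.mean((y[half:] - X[half:] @ b1) ** 2)
e2 = np.mean((y[half:] - X[half:] @ b2) ** 2)
alpha = e2 / (e1 + e2)  # more weight to the smaller-error estimator
b_combined = alpha * b1 + (1 - alpha) * b2
print(b_combined)
```

With heavy-tailed errors the weight shifts toward the robust fit, while with clean Gaussian errors it shifts toward least squares, which is the general idea behind adaptively combining two estimators.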
Apart from the underlying theme that all the contributions to this volume pertain to models set in an infinite dimensional space, they differ on many counts. Some were written in the early seventies, while others are reports of ongoing research done especially with this volume in mind. Some are surveys of material that can, at least at this point in time, be deemed to have attained a satisfactory solution of the problem, while others represent initial forays into an original and novel formulation. Some furnish alternative proofs of known and by now classical results, while others can be seen as groping towards and exploring formulations that have not yet reached a definitive form. The subject matter also has a wide leeway, ranging from solution concepts for economies to those for games, and also including representation of preferences and discussion of purely mathematical problems, all within the rubric of choice variables belonging to an infinite dimensional space, interpreted as a commodity space or as a strategy space. Thus, this is a collective enterprise in a fairly wide sense of the term, and one with the diversity of which we have interfered as little as possible.
The Handbook is a definitive reference source and teaching aid for econometricians. It examines models, estimation theory, data analysis and field applications in econometrics. Comprehensive surveys, written by experts, discuss recent developments at a level suitable for professional use by economists, econometricians, statisticians, and in advanced graduate econometrics courses. For more information on the Handbooks in Economics series, please see our home page at http://www.elsevier.nl/locate/hes
This book is devoted to Corrado Gini, father of the Italian statistical school. It celebrates the 50th anniversary of his death by bearing witness to the continuing extraordinary scientific relevance of his interdisciplinary interests. The book comprises a selection of the papers presented at the conference of the Italian Statistical Society, Statistics and Demography - the Legacy of Corrado Gini, held in Treviso in September 2015. The work covers many topics linked to Gini's scientific legacy, ranging from the theory of statistical inference to multivariate statistical analysis, demography and sociology. In this volume, readers will find many interesting contributions on entropy measures, permutation procedures for the heterogeneity test, robust estimation of skew-normal parameters, the S-weighted estimator, measures of multidimensional performance using Gini's delta, small-sample confidence intervals for Gini's gamma index, Bayesian estimation of the Gini-Simpson index, spatial residential patterns of selected foreign groups, minority segregation processes, dynamic time warping to study cruise tourism, and financial stress spillover. This book will appeal to all statisticians, demographers, economists, and sociologists interested in the field.
This book gives a comprehensive review of results for associated sequences and demimartingales developed so far, with special emphasis on demimartingales and related processes. Probabilistic properties of associated sequences, demimartingales and related processes are discussed in the first six chapters. Applications of some of these results to some problems in nonparametric statistical inference for such processes are investigated in the last three chapters.
The finite element method is a numerical method widely used in engineering. This reference text is the first to discuss finite element methods for structures with large stochastic variations. Graduate students, lecturers, and researchers in mathematics, engineering, and scientific computation will find this a very useful reference.
VLSI CAD has greatly benefited from the use of reduced ordered Binary Decision Diagrams (BDDs) and the clausal representation as a problem of Boolean Satisfiability (SAT), e.g. in logic synthesis, verification or design-for-testability. In recent practical applications, BDDs are optimized with respect to new objective functions for design space exploration. The latest trends show a growing number of proposals to fuse the concepts of BDD and SAT. This book gives a modern presentation of the established as well as of recent concepts. Latest results in BDD optimization are given, covering different aspects of paths in BDDs and the use of efficient lower bounds during optimization. The presented algorithms include Branch and Bound and the generic A*-algorithm as efficient techniques to explore large search spaces. The A*-algorithm originates from Artificial Intelligence (AI), and the EDA community has been unaware of this concept for a long time. Recently, the A*-algorithm has been introduced as a new paradigm to explore design spaces in VLSI CAD. Besides AI search techniques, the book also discusses the relation to another field of activity bordering VLSI CAD and BDD optimization: the clausal representation as a SAT problem.
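To make the search paradigm concrete, here is a generic A* sketch on a toy graph (illustrative only; the book applies the algorithm to BDD optimization and design-space exploration, and everything here, including the zero heuristic, is a placeholder):

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Generic A*: expand nodes in order of g + h, where g is the cost
    so far and h is an admissible heuristic (never overestimates)."""
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

# Toy weighted graph; the zero heuristic reduces A* to Dijkstra's search.
graph = {"a": [("b", 1), ("c", 4)], "b": [("c", 1)], "c": []}
print(a_star("a", "c", lambda n: graph[n], lambda n: 0))
```

The appeal in design-space exploration is that a good lower bound h prunes large parts of the space while still guaranteeing an optimal solution.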
This book provides a comprehensive review of environmental benefit transfer methods, issues and challenges, covering topics relevant to researchers and practitioners. Early chapters provide accessible introductory materials suitable for non-economists. These chapters also detail how benefit transfer is used within the policy process. Later chapters cover more advanced topics suited to valuation researchers, graduate students and those with similar knowledge of economic and statistical theory and methods. This book provides the most complete coverage of environmental benefit transfer methods available in a single location. The book targets a wide audience, including undergraduate and graduate students, practitioners in economics and other disciplines looking for a one-stop handbook covering benefit transfer topics, and those who wish to apply or evaluate benefit transfer methods. It is designed for those both with and without training in economics.
Multiparameter processes extend the existing one-parameter theory of random processes in an elegant way, and have found connections to diverse disciplines such as probability theory, real and functional analysis, group theory, analytic number theory, and group renormalization in mathematical physics, to name a few. This book lays the foundation of aspects of the rapidly developing subject of random fields, and is designed for a second graduate course in probability and beyond. Its intended audience is pure, as well as applied, mathematicians.
In earlier forewords to the books in this series on Discrete Event Dynamic Systems (DEDS), we have dwelt on the pervasive nature of DEDS in our human-made world. From manufacturing plants to computer/communication networks, from traffic systems to command-and-control, modern civilization cannot function without the smooth operation of such systems. Yet mathematical tools for the analysis and synthesis of DEDS are nascent when compared to the well developed machinery of the continuous variable dynamic systems characterized by differential equations. The performance evaluation tool of choice for DEDS is discrete event simulation, both on account of its generality and its explicit incorporation of randomness. As is well known to students of simulation, the heart of random event simulation is the uniform random number generator. Not so well known to practitioners are the philosophical and mathematical bases of generating "random" number sequences from deterministic algorithms. This editor can still recall his own painful introduction to the issues during the early 80's, when he attempted to do the first perturbation analysis (PA) experiments on a personal computer which, unbeknownst to him, had a random number generator with a period of only 32,768 numbers. It is no exaggeration to say that the development of PA was derailed for some time due to this ignorance of the fundamentals of random number generation.
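The pitfall described above is easy to reproduce. Here is a minimal sketch, with deliberately weak, hypothetical parameters, of a linear congruential generator whose entire "random" stream repeats after 2**15 = 32,768 draws:

```python
def lcg(seed, a=5, c=1, m=2**15):
    # Toy parameters chosen for illustration; by the Hull-Dobell
    # theorem this generator has full period m = 32768, then repeats.
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

g = lcg(seed=0)
first = [next(g) for _ in range(32768)]
again = [next(g) for _ in range(32768)]
print(first == again)  # True: the stream cycles after 2**15 draws
```

Any simulation consuming more than 32,768 uniforms from such a generator silently reuses the same sequence, which is exactly the failure mode recounted in the anecdote.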
Steady progress in recent years has been made in understanding the special mathematical features of certain exactly solvable models in statistical mechanics and quantum field theory, including the scaling limits of the 2-D Ising (lattice) model, and more generally, a class of 2-D quantum fields known as holonomic fields. New results have made it possible to obtain a detailed nonperturbative analysis of the multi-spin correlations. In particular, the book focuses on deformation analysis of the scaling functions of the Ising model, and will appeal to graduate students, mathematicians, and physicists interested in the mathematics of statistical mechanics and quantum field theory.
Robust statistics is an extension of classical statistics that specifically takes into account the fact that the underlying models used to describe data are only approximate. Its basic philosophy is to produce statistical procedures which remain stable when the data do not exactly match the postulated models, as is the case, for example, with outliers. "Robust Methods in Biostatistics" proposes robust alternatives to common methods used in statistics in general and in biostatistics in particular, and illustrates their use on many biomedical datasets. The methods introduced include robust estimation, testing, model selection, model checking and diagnostics. They are developed for the following general classes of models: linear regression, generalized linear models, linear mixed models, marginal longitudinal data models, and the Cox survival analysis model. The methods are introduced both at a theoretical and an applied level within the framework of each general class of models, with a particular emphasis on practical data analysis. This book is of particular use for research students, applied statisticians and practitioners in the health field interested in more stable statistical techniques. An accompanying website provides R code for computing all of the methods described, as well as for analyzing all the datasets used in the book.
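The website's examples are in R, but the flavor of robust estimation is easy to convey in a short illustrative sketch (Python with made-up data, not code from the book): an M-estimator of location with Huber's weighting stays near the bulk of the data while the sample mean is dragged off by a few gross outliers.

```python
import numpy as np

rng = np.random.default_rng(3)

# Clean measurements plus a few gross outliers (hypothetical data).
x = np.concatenate([rng.normal(10.0, 1.0, 95),
                    [120.0, 150.0, 200.0, -90.0, 500.0]])

def huber_location(x, k=1.345, iters=100):
    # M-estimator of location with Huber's weights; the tuning
    # constant k = 1.345 gives 95% efficiency at the normal model.
    mu = np.median(x)
    s = np.median(np.abs(x - np.median(x))) / 0.6745  # MAD scale
    for _ in range(iters):
        r = (x - mu) / s
        w = np.minimum(1.0, k / np.maximum(np.abs(r), 1e-12))
        mu = np.sum(w * x) / np.sum(w)
    return mu

print("mean: ", x.mean())           # dragged far from 10 by the outliers
print("huber:", huber_location(x))  # stays close to the bulk of the data
```

The same downweighting idea carries over, with more machinery, to the regression and survival model classes listed above.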
The book deals with some of the fundamental issues of risk assessment in grid computing environments. It describes the development of a hybrid probabilistic and possibilistic model for assessing the success of a computing task in a grid environment.
Single Subject Designs in Biomedicine draws upon the rich history of single case research within educational and behavioral research settings and extends the application to the field of biomedicine. Biomedical illustrations are used to demonstrate the processes of designing, implementing, and evaluating a single subject design. Strengths and limitations of various methodologies are presented, along with the specific clinical areas of application in which they would be appropriate. Statistical and visual techniques for data analysis are also discussed. The breadth and depth of information provided is suitable for medical students in research oriented courses, primary care practitioners and medical specialists seeking to apply methods of evidence-based practice to improve patient care, and medical researchers who are expanding their methodological expertise to include single subject designs. Increasing awareness of the utility of the single subject design could enhance treatment approaches and evaluation in both biomedical research and medical care settings.
The contributions in this book focus on a variety of topics related to discrepancy theory, comprising Fourier techniques to analyze discrepancy, low discrepancy point sets for quasi-Monte Carlo integration, probabilistic discrepancy bounds, dispersion of point sets, pair correlation of sequences, integer points in convex bodies, discrepancy with respect to geometric shapes other than rectangular boxes, and also open problems in discrepancy theory.
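For readers unfamiliar with the central notion, the star discrepancy of points $x_1,\dots,x_N \in [0,1)^d$ (a standard definition, included here for orientation) is

$$ D_N^*(x_1,\dots,x_N) \;=\; \sup_{t \in [0,1]^d} \left| \frac{\#\{\,n \le N : x_n \in [0,t)\,\}}{N} \;-\; \prod_{i=1}^{d} t_i \right|, $$

measuring the worst-case deviation of the empirical distribution of the points from the uniform measure over anchored boxes; low discrepancy point sets keep this quantity small, which via the Koksma-Hlawka inequality controls quasi-Monte Carlo integration error.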
The concept of ridges has appeared numerous times in the image processing literature. Sometimes the term is used in an intuitive sense; other times a concrete definition is provided. In almost all cases the concept is used for very specific applications. When analyzing images or data sets, it is very natural for a scientist to measure critical behavior by considering maxima or minima of the data. These critical points are relatively easy to compute. Numerical packages always provide support for root finding or optimization, whether it be through bisection, Newton's method, the conjugate gradient method, or other standard methods. It has not been natural for scientists to consider critical behavior in a higher-order sense. The concept of a ridge as a manifold of critical points is a natural extension of the concept of a local maximum as an isolated critical point. However, almost no attention has been given to formalizing the concept. There is a need for a formal development, and there is a need for understanding the computational issues that arise in the implementations. The purpose of this book is to address both needs by providing a formal mathematical foundation and a computational framework for ridges. The intended audience for this book includes anyone interested in exploring the usefulness of ridges in data analysis.