The purpose of this volume is to provide an overview of Terry Speed's contributions to statistics and beyond. Each of the fifteen chapters concerns a particular area of research and consists of a commentary by a subject-matter expert and a selection of representative papers. The chapters, organized more or less chronologically in terms of Terry's career, encompass a wide variety of mathematical and statistical domains, along with their application to biology and medicine. Accordingly, earlier chapters tend to be more theoretical, covering some algebra and probability theory, while later chapters concern more recent work in genetics and genomics. The chapters also span continents and generations, as they present research done over four decades, while crisscrossing the globe. The commentaries provide insight into Terry's contributions to a particular area of research by summarizing his work and describing its historical and scientific context, motivation, and impact. In addition to shedding light on Terry's scientific achievements, the commentaries reveal endearing aspects of his personality, such as his intellectual curiosity, energy, humor, and generosity.
Since the beginning of the seventies, computer hardware has been available to use programmable computers for various tasks. During the nineties the hardware developed from big mainframes to personal workstations. Nowadays it is not only the hardware that is much more powerful: compared to the seventies, workstations can do much more work than a mainframe could. In parallel we find a specialization in the software. Languages like COBOL for business-oriented programming or Fortran for scientific computing only marked the beginning. The introduction of personal computers in the eighties gave new impulses for even further development; already at the beginning of the seventies, special languages like SAS or SPSS were available for statisticians. Now that personal computers have become very popular, the number of programs has started to explode. Today we find a wide variety of programs for almost any statistical purpose (Koch & Haag 1995).
A comprehensive account of the statistical theory of exponential families of stochastic processes. The book reviews the progress in the field made over the last ten years or so by the authors - two of the leading experts in the field - and several other researchers. The theory is applied to a broad spectrum of examples, covering a large number of frequently applied stochastic process models with discrete as well as continuous time. To make the reading even easier for statisticians with only a basic background in the theory of stochastic processes, the first part of the book is based on classical theory of stochastic processes only, while stochastic calculus is used later. Most of the concepts and tools from stochastic calculus needed when working with inference for stochastic processes are introduced and explained without proof in an appendix. This appendix can also be used independently as an introduction to stochastic calculus for statisticians. Numerous exercises are also included.
Experience gained during a ten-year-long involvement in modelling, programming and application in nonlinear optimization helped me to arrive at the conclusion that, in the interest of having successful applications and efficient software production, knowing the structure of the problem to be solved is indispensable. This is the reason why I have chosen the field in question as the sphere of my research. Since in applications, mainly from among the nonconvex optimization models, the differentiable ones proved to be the most efficient in modelling, especially in solving them with computers, I started to deal with the structure of smooth optimization problems. The book, which is a result of more than a decade of research, can be equally useful for researchers and students showing interest in the domain, since the elementary notions necessary for understanding the book constitute a part of the university curriculum. I intended to deal with the key questions of optimization theory, which endeavour, obviously, cannot bear all the marks of completeness. What I consider the most crucial point is the uniform, differential geometric treatment of various questions, which provides the reader with opportunities for learning the structure of a wide range of optimization problems. I am grateful to my family for affording me tranquil, productive circumstances. I express my gratitude to F.
'Et moi, ..., si j'avais su comment en revenir, je n'y serais point allé.' (Jules Verne)
'One service mathematics has rendered the human race. It has put common sense back where it belongs, on the topmost shelf next to the dusty canister labelled "discarded nonsense".' (Eric T. Bell)
'The series is divergent; therefore we may be able to do something with it.' (O. Heaviside)
Mathematics is a tool for thought. A highly necessary tool in a world where both feedback and non-linearities abound. Similarly, all kinds of parts of mathematics serve as tools for other parts and for other sciences. Applying a simple rewriting rule to the quotes above one finds such statements as: 'One service topology has rendered mathematical physics ...'; 'One service logic has rendered computer science ...'; 'One service category theory has rendered mathematics ...'. All arguably true. And all statements obtainable this way form part of the raison d'être of this series.
Geostatistics Rio 2000 includes fifteen contributions, five of which are on applications in petroleum science and ten on mining geostatistics. These contributions were presented at the 31st International Geological Congress, held in Rio de Janeiro, Brazil, from 6-17 August 2000. Stochastic simulation was the key theme of these case studies. A wide range of methods was used: truncated Gaussian and plurigaussian, SIS and SGS, Boolean methods and multi-point attractors. The five contributions on petroleum science focus on different aspects of reservoir characterisation. All use stochastic simulations to generate 3D numerical models that reproduce the key features of reservoirs. Five of the ten contributions on mining present ore-body simulations; the others address questions like reconciling reserve estimates with production figures. Audience: The volume will be of value to scientists, researchers, and professionals in geology, mining engineering, petroleum engineering, mathematics and statistics, as well as those working for mining and oil companies.
Extending the well-known connection between classical linear potential theory and probability theory (through the interplay between harmonic functions and martingales) to the nonlinear case of tug-of-war games and their related partial differential equations, this unique book collects several results in this direction and puts them in an elementary perspective in a lucid and self-contained fashion.
This volume contains twenty-eight refereed research or review papers presented at the 5th Seminar on Stochastic Processes, Random Fields and Applications, which took place at the Centro Stefano Franscini (Monte Verita) in Ascona, Switzerland, from May 30 to June 3, 2005. The seminar focused mainly on stochastic partial differential equations, random dynamical systems, infinite-dimensional analysis, approximation problems, and financial engineering. The book will be a valuable resource for researchers in stochastic analysis and professionals interested in stochastic methods in finance.
This volume features selected and peer-reviewed articles from the Pan-American Advanced Studies Institute (PASI). The chapters are written by international specialists who participated in the conference. Topics include developments based on breakthroughs in the mathematical understanding of phenomena describing systems in highly inhomogeneous and disordered media, including the KPZ universality class (describing the evolution of interfaces in two dimensions), random walks in random environment and percolative systems. PASI fosters a collaboration between North American and Latin American researchers and students. The conference that inspired this volume took place in January 2012 in both Santiago de Chile and Buenos Aires. Researchers and graduate students will find timely research in probability theory, statistical physics and related disciplines.
It is well known that the normal distribution is the most pleasant, one can even say an exemplary, object in probability theory. It combines almost all conceivable nice properties that a distribution may ever have: symmetry, stability, indecomposability, a regular tail behavior, etc. Gaussian measures (the distributions of Gaussian random functions), as infinite-dimensional analogues of the classical normal distribution, serve as such exemplary objects in the theory of Gaussian random functions. When one switches to the infinite dimension, some "one-dimensional" properties are extended almost literally, while others require profound justification, or even must be reconsidered. What is more, the infinite-dimensional situation reveals important links and structures which either looked trivial or played no independent role in the classical case. The complex of concepts and problems emerging here has become the subject of the theory of Gaussian random functions and their distributions, one of the most advanced fields of probability science. Although the basic elements of this field were formed in the sixties and seventies, until recently a substantial part of the corresponding material existed only in the form of scattered articles in various journals, or served only as background for considering special issues in monographs.
The most comprehensive and applied discussion of stated choice experiment constructions available. The Construction of Optimal Stated Choice Experiments provides an accessible introduction to the construction methods needed to create the best possible designs for use in modeling decision-making. Many aspects of the design of a generic stated choice experiment are independent of its area of application, and until now there has been no single book describing these constructions. This book begins with a brief description of the various areas where stated choice experiments are applicable, including marketing, health economics, transportation, environmental resource economics, and public welfare analysis. The authors focus on recent research results on the construction of optimal and near-optimal choice experiments and conclude with guidelines and insight on how to properly implement these results. Features of the book include:
- Construction of generic stated choice experiments for the estimation of main effects only, as well as experiments for the estimation of main effects plus two-factor interactions
- Constructions for choice sets of any size and for attributes with any number of levels
- A discussion of designs that contain a none option or a common base option
- Practical techniques for the implementation of the constructions
- Class-tested material that presents theoretical discussion of optimal design
- Complete and extensive references to the mathematical and statistical literature for the constructions
- Exercise sets in most chapters, which reinforce the understanding of the presented material
The Construction of Optimal Stated Choice Experiments serves as an invaluable reference guide for applied statisticians and practitioners in the areas of marketing, health economics, transport, and environmental evaluation. It is also ideal as a supplemental text for courses in the design of experiments, decision support systems, and choice models.
A companion web site is available for readers to access web-based software that can be used to implement the constructions described in the book.
An exploration of the use of smoothing methods in testing the fit of parametric regression models. The book reviews many of the existing methods for testing lack-of-fit and also proposes a number of new methods, addressing both applied and theoretical aspects of the model checking problems. As such, the book is of interest to practitioners of statistics and researchers investigating either lack-of-fit tests or nonparametric smoothing ideas. The first four chapters introduce the problem of estimating regression functions by nonparametric smoothers, primarily those of kernel and Fourier series type, and could be used as the foundation for a graduate level course on nonparametric function estimation. The prerequisites for a full appreciation of the book are a modest knowledge of calculus and some familiarity with the basics of mathematical statistics.
Around the world a multitude of surveys are conducted every day, on a variety of subjects, and consequently surveys have become an accepted part of modern life. However, in recent years survey estimates have been increasingly affected by rising trends in nonresponse, with loss of accuracy as an undesirable result. Whilst it is possible to reduce nonresponse to some degree, it cannot be completely eliminated. Estimation techniques that account systematically for nonresponse and at the same time succeed in delivering acceptable accuracy are much needed. "Estimation in Surveys with Nonresponse" provides an overview of these techniques, presenting the view of nonresponse as a normal (albeit undesirable) feature of a sample survey, one whose potentially harmful effects are to be minimised. The book:
- Builds the nonresponse feature of survey data collection into the theory as an integral part, both for point estimation and for variance estimation
- Promotes weighting through calibration as a new and powerful technique for surveys with nonresponse
- Highlights the analysis of nonresponse bias in estimates and methods to minimise this bias
- Includes computational tools to help identify the best variables for calibration
- Discusses the use of imputation as a complement to weighting by calibration
- Contains guidelines for dealing with frame imperfections and coverage errors
- Features worked examples throughout the text, using real data
The accessible style of "Estimation in Surveys with Nonresponse" will make this an invaluable tool for survey methodologists in national statistics agencies and private survey agencies. Researchers, teachers, and students of statistics, social sciences, and economics will benefit from the clear presentation and numerous examples.
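The idea of weighting through calibration can be illustrated with its simplest special case, ratio calibration on a single auxiliary variable with a known population total. This is a generic sketch, not code from the book; all numbers and variable names are invented for illustration.

```python
import numpy as np

# Design weights from the sampling plan (inverse inclusion probabilities):
# each respondent initially "represents" 10 population units.
d = np.array([10.0, 10.0, 10.0, 10.0])
x = np.array([2.0, 4.0, 6.0, 8.0])    # auxiliary variable, observed for respondents
y = np.array([20.0, 35.0, 55.0, 90.0])  # study variable, observed for respondents

X_total = 250.0  # known population total of x (e.g. from a register)

# Ratio calibration: rescale the design weights so that the weighted
# x-total matches the known population total exactly.
g = X_total / np.sum(d * x)  # common calibration factor (here 250 / 200 = 1.25)
w = d * g                    # calibrated weights

print(np.sum(w * x))  # 250.0 - reproduces the known x-total
print(np.sum(w * y))  # 2500.0 - calibration estimator of the y-total
```

When respondents with large x were underrepresented among the answers, the rescaling pulls the estimate back toward the population structure, which is how calibration can reduce nonresponse bias.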
These proceedings emphasize new mathematical problems discussed in line with white noise analysis. Many papers deal with mathematical questions arising from actual phenomena. Various applications to stochastic differential equations, quantum field theory, functional integration such as Feynman integrals and limit theorems in probability are also discussed.
Separation of signal from noise is the most fundamental problem in data analysis, and arises in many fields, for example signal processing, econometrics, actuarial science, and geostatistics. This book introduces the local regression method in univariate and multivariate settings, and extensions to local likelihood and density estimation. Basic theoretical results and diagnostic tools such as cross-validation are introduced along the way. Examples illustrate the implementation of the methods using the LOCFIT software.
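For readers who have not met local regression before, a minimal sketch of the univariate case may help. This is a generic local linear smoother with a Gaussian kernel, not the LOCFIT software itself; the function name and the bandwidth value are illustrative choices.

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear regression estimate at x0 with a Gaussian kernel of bandwidth h.

    Fits a straight line to the data by weighted least squares, with weights
    concentrated near x0; the fitted intercept is the smoothed value at x0.
    """
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)           # kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])   # local design matrix
    XtW = X.T * w
    beta = np.linalg.solve(XtW @ X, XtW @ y)         # weighted normal equations
    return beta[0]

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)  # signal + noise
fit = np.array([local_linear(x0, x, y, h=0.05) for x0 in x])
print(np.mean((fit - np.sin(2 * np.pi * x)) ** 2))  # MSE of the smooth vs. the signal
```

The bandwidth h controls the signal/noise trade-off the blurb alludes to: small h tracks the data (more noise), large h averages it away (more bias); cross-validation is the standard tool for choosing it.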
The area of data analysis has been greatly affected by our computer age. For example, the issue of collecting and storing huge data sets has become quite simple and has greatly affected such areas as finance and telecommunications. Even non-specialists try to analyze data sets and ask basic questions about their structure. One such question is whether one observes some type of invariance with respect to scale, a question that is closely related to the existence of long-range dependence in the data. This important topic of long-range dependence is the focus of this unique work, written by a number of specialists on the subject. The topics selected give a good overview from the probabilistic and statistical perspective. Topics in the first part of the book are covered from both perspectives and include fractional Brownian motion, models, inequalities and limit theorems, periodic long-range dependence, parametric, semiparametric, and non-parametric estimation, long-memory stochastic volatility models, robust estimation, and prediction for long-range dependence sequences. For those graduate students and researchers who want to use the methodology and need to know the "tricks of the trade," there is a special section called "Mathematical Techniques." The reader is referred to more detailed proofs where these are already found in the literature. The last part of the book is devoted to applications in the areas of simulation, estimation and wavelet techniques, traffic in computer networks, econometrics and finance, multifractal models, and hydrology. Diagrams and illustrations enhance the presentation.
Each article begins with introductory background material and is accessible to mathematicians, a variety of practitioners, and graduate students. The work serves as a state-of-the-art reference or graduate seminar text.
..".the text is user friendly to the topics it considers and should be very accessible...Instructors and students of statistical measure theoretic courses will appreciate the numerous informative exercises; helpful hints or solution outlines are given with many of the problems. All in all, the text should make a useful reference for professionals and students."-The Journal of the American Statistical Association
Statistical models and methods for lifetime and other time-to-event data are widely used in many fields, including medicine, the environmental sciences, actuarial science, engineering, economics, management, and the social sciences. For example, closely related statistical methods have been applied to the study of the incubation period of diseases such as AIDS, the remission time of cancers, life tables, the time-to-failure of engineering systems, employment duration, and the length of marriages. This volume contains a selection of papers based on the 1994 International Research Conference on Lifetime Data Models in Reliability and Survival Analysis, held at Harvard University. The conference brought together a varied group of researchers and practitioners to advance and promote statistical science in the many fields that deal with lifetime and other time-to-event-data. The volume illustrates the depth and diversity of the field. A few of the authors have published their conference presentations in the new journal Lifetime Data Analysis (Kluwer Academic Publishers).
In recent years, there has been an upsurge of interest in using techniques drawn from probability to tackle problems in analysis. These applications arise in subjects such as potential theory, harmonic analysis, singular integrals, and the study of analytic functions. This book presents a modern survey of these methods at the level of a beginning Ph.D. student. Highlights of this book include the construction of the Martin boundary, probabilistic proofs of the boundary Harnack principle, Dahlberg's theorem, a probabilistic proof of Riesz' theorem on the Hilbert transform, and Makarov's theorems on the support of harmonic measure. The author assumes that a reader has some background in basic real analysis, but the book includes proofs of all the results from probability theory and advanced analysis required. Each chapter concludes with exercises ranging from the routine to the difficult. In addition, there are discussions of open problems and further avenues of research.
'Et moi, ..., si j'avais su comment en revenir, je n'y serais point allé.' (Jules Verne)
'One service mathematics has rendered the human race. It has put common sense back where it belongs, on the topmost shelf next to the dusty canister labelled "discarded nonsense".' (Eric T. Bell)
'The series is divergent; therefore we may be able to do something with it.' (O. Heaviside)
Mathematics is a tool for thought. A highly necessary tool in a world where both feedback and non-linearities abound. Similarly, all kinds of parts of mathematics serve as tools for other parts and for other sciences. Applying a simple rewriting rule to the quotes above one finds such statements as: 'One service topology has rendered mathematical physics ...'; 'One service logic has rendered computer science ...'; 'One service category theory has rendered mathematics ...'. All arguably true. And all statements obtainable this way form part of the raison d'être of this series. This series, Mathematics and Its Applications, started in 1977. Now that over one hundred volumes have appeared it seems opportune to reexamine its scope. At the time I wrote: "Growing specialization and diversification have brought a host of monographs and textbooks on increasingly specialized topics. However, the 'tree' of knowledge of mathematics and related fields does not grow only by putting forth new branches."
This is the first book on longitudinal categorical data analysis with parametric correlation models developed from dynamic relationships among repeated categorical responses. It is a natural generalization of longitudinal binary data analysis to the multinomial setup with more than two categories. Thus, unlike the existing books on cross-sectional categorical data analysis using log-linear models, this book uses multinomial probability models in both cross-sectional and longitudinal setups. A theoretical foundation is provided for the analysis of univariate multinomial responses, by developing models systematically for the cases with no covariates as well as categorical covariates, in both cross-sectional and longitudinal setups. In the longitudinal setup, both stationary and non-stationary covariates are considered. These models have also been extended to the bivariate multinomial setup along with suitable covariates. For the inferences, the book uses the generalized quasi-likelihood as well as the exact likelihood approaches. The book is technically rigorous, and it also presents illustrations of the statistical analysis of various real-life data involving univariate multinomial responses in both cross-sectional and longitudinal setups. This book is written mainly for graduate students and researchers in statistics and the social sciences, among other applied statistics research areas. However, part of the book, specifically Chapters 1 to 3, may also be used for a senior undergraduate course in statistics.
Mathematical programming has known a spectacular diversification in the last few decades. This process has happened both at the level of mathematical research and at the level of the applications generated by the solution methods that were created. To write a monograph dedicated to a certain domain of mathematical programming is, under such circumstances, especially difficult. In the present monograph we opt for the domain of fractional programming. Interest in this subject was generated by the fact that various optimization problems from engineering and economics consider the minimization of a ratio between physical and/or economical functions, for example cost/time, cost/volume, cost/profit, or other quantities that measure the efficiency of a system. For example, the productivity of industrial systems, defined as the ratio between the realized services in a system within a given period of time and the utilized resources, is used as one of the best indicators of the quality of their operation. Such problems, where the objective function appears as a ratio of functions, constitute fractional programming problems. Due to its importance in modeling various decision processes in management science, operational research, and economics, and also due to its frequent appearance in other problems that are not necessarily economical, such as information theory, numerical analysis, stochastic programming, decomposition algorithms for large linear systems, etc., fractional programming has received particular attention in the last three decades.
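In generic form (the notation here is ours, not taken from the book), the problem class just described reads:

```latex
\min_{x \in S} \; \frac{f(x)}{g(x)}, \qquad S \subseteq \mathbb{R}^n, \quad g(x) > 0 \ \text{for all } x \in S,
```

where, for instance, $f$ measures cost and $g$ measures time, volume, or profit, so the ratio is an efficiency criterion; even when $f$ and $g$ are both convex, the ratio generally is not, which is what makes the structure of these problems worth a monograph.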
This BASS book series publishes selected high-quality papers reflecting recent advances in the design and biostatistical analysis of biopharmaceutical experiments, particularly biopharmaceutical clinical trials. The papers were selected from invited presentations at the Biopharmaceutical Applied Statistics Symposium (BASS), which was founded by the first Editor in 1994 and has since become the premier international conference in biopharmaceutical statistics. The primary aims of BASS are: 1) to raise funding to support graduate students in biostatistics programs, and 2) to provide an opportunity for professionals engaged in pharmaceutical drug research and development to share insights into solving the problems they encounter. The BASS book series is initially divided into three volumes addressing: 1) Design of Clinical Trials; 2) Biostatistical Analysis of Clinical Trials; and 3) Pharmaceutical Applications. This book is the second of the three-volume series. The topics covered include: Statistical Approaches to the Meta-analysis of Randomized Clinical Trials, Collaborative Targeted Maximum Likelihood Estimation to Assess Causal Effects in Observational Studies, Generalized Tests in Clinical Trials, Discrete Time-to-event and Score-based Methods with Application to Composite Endpoint for Assessing Evidence of Disease Activity-Free, Imputing Missing Data Using a Surrogate Biomarker: Analyzing the Incidence of Endometrial Hyperplasia, Selected Statistical Issues in Patient-reported Outcomes, Network Meta-analysis, Detecting Safety Signals Among Adverse Events in Clinical Trials, Applied Meta-analysis Using R, Treatment of Missing Data in Comparative Effectiveness Research, Causal Estimands: A Common Language for Missing Data, Bayesian Subgroup Analysis with Examples, Statistical Methods in Diagnostic Devices, A Question-Based Approach to the Analysis of Safety Data, Analysis of Two-stage Adaptive Seamless Trial Design, and Multiplicity Problems in Clinical Trials - A Regulatory Perspective.
Stochastic geometry is the branch of mathematics that studies geometric structures associated with random configurations, such as random graphs, tilings and mosaics. Due to its close ties with stereology and spatial statistics, the results in this area are relevant for a large number of important applications, e.g. to the mathematical modeling and statistical analysis of telecommunication networks, geostatistics and image analysis. In recent years - due mainly to the impetus of the authors and their collaborators - a powerful connection has been established between stochastic geometry and the Malliavin calculus of variations, which is a collection of probabilistic techniques based on the properties of infinite-dimensional differential operators. This has led in particular to the discovery of a large number of new quantitative limit theorems for high-dimensional geometric objects. This unique book presents an organic collection of authoritative surveys written by the principal actors in this rapidly evolving field, offering a rigorous yet lively presentation of its many facets.