'Approach your problems from the right end and begin with the answers. Then one day, perhaps you will find the final question.' ('The Hermit Clad in Crane Feathers' in R. van Gulik's The Chinese Maze Murders.) 'It isn't that they can't see the solution. It is that they can't see the problem.' (G. K. Chesterton, The Scandal of Father Brown, 'The Point of a Pin'.) Growing specialization and diversification have brought a host of monographs and textbooks on increasingly specialized topics. However, the "tree" of knowledge of mathematics and related fields does not grow only by putting forth new branches. It also happens, quite often in fact, that branches which were thought to be completely disparate are suddenly seen to be related. Further, the kind and level of sophistication of mathematics applied in various sciences has changed drastically in recent years: measure theory is used (non-trivially) in regional and theoretical economics; algebraic geometry interacts with physics; the Minkowsky lemma, coding theory and the structure of water meet one another in packing and covering theory; quantum fields, crystal defects and mathematical programming profit from homotopy theory; Lie algebras are relevant to filtering; and prediction and electrical engineering can use Stein spaces. And in addition to this there are such new emerging subdisciplines as "experimental mathematics", "CFD", "completely integrable systems", and "chaos, synergetics and large-scale order", which are almost impossible to fit into the existing classification schemes. They draw upon widely different sections of mathematics.
This book covers a range of statistical methods useful in the analysis of medical data, from the simple to the sophisticated, and shows how they may be applied using the latest versions of S-PLUS, including S-PLUS 6. In each chapter several sets of medical data are explored and analysed using a mixture of graphical and model-fitting approaches. At the end of each chapter the S-PLUS script files are listed, enabling readers to reproduce all the analyses and graphics in the chapter. These script files can be downloaded from a web site. The aim of the book is to show how to use S-PLUS as a powerful environment for undertaking a variety of statistical analyses, from simple inference to complex model fitting, and for producing informative graphics. All such methods are of increasing importance in handling data from a variety of medical investigations, including epidemiological studies and clinical trials. The mix of real data examples and background theory makes this book useful for students and researchers alike. For the former, exercises are provided at the end of each chapter to increase their fluency in using the command-line language of the S-PLUS software. Professor Brian Everitt is Head of the Department of Biostatistics and Computing at the Institute of Psychiatry in London, and Sophia Rabe-Hesketh is a senior lecturer in the same department. Professor Everitt is the author of over 30 books on statistics, including two previously co-authored with Dr. Rabe-Hesketh.
This book presents the state of the art of biostatistical methods and their applications in clinical oncology. Many methodologies established today in biostatistics have been brought about through their application to the design and analysis of oncology clinical studies. The field of oncology, now in the midst of evolution owing to rapid advances in biotechnologies and cancer genomics, is becoming one of the most promising disease areas in the shift toward personalized medicine. Modern developments in the diagnosis and treatment of cancer have also been continuously fueled by recent progress in establishing the infrastructure for conducting more complex, large-scale clinical trials and observational studies. The field of cancer clinical studies therefore will continue to provide many new statistical challenges that warrant further progress in the methodology and practice of biostatistics. This book provides systematic coverage of the various stages of cancer clinical studies. Topics from modern cancer clinical trials include phase I clinical trials for combination therapies, exploratory phase II trials with multiple endpoints/treatments, and confirmatory biomarker-based phase III trials with interim monitoring and adaptation. It also covers important areas of cancer screening, prognostic analysis, and the analysis of large-scale molecular data in the era of big data.
Classic biostatistics, a branch of statistical science, has as its main focus the applications of statistics in public health, the life sciences, and the pharmaceutical industry. Modern biostatistics, beyond being a simple application of statistics, is a confluence of statistics and knowledge of multiple intertwined fields. The demands of applications, advances in computer technology, and the rapid growth of life science data (e.g., genomics data) have promoted the formation of modern biostatistics. There are at least three characteristics of modern biostatistics: (1) in-depth engagement with the application fields, which requires penetration of knowledge across several fields; (2) high-level complexity of the data, which may be longitudinal, incomplete, or latent, heterogeneous due to a mixture of data or experiment types, high-dimensional in ways that can make meaningful reduction impossible, or of extremely small or large size; and (3) dynamics: the speed of development in methodology and analysis has to match the fast growth of data with a constantly changing face. This book is written for researchers, biostatisticians/statisticians, and scientists who are interested in quantitative analysis. The goal is to introduce modern methods in biostatistics and to help researchers and students quickly grasp key concepts and methods. Many methods can solve the same problem, and many problems can be solved by the same method, which becomes apparent when these topics are discussed in a single volume.
This book moves systematically through the topic of applied probability, from an introductory chapter to such topics as random variables and vectors, stochastic processes, estimation, testing and regression. The topics are well chosen and the presentation is enriched by many examples from real life. Each chapter concludes with many original, solved and unsolved problems and hundreds of multiple-choice questions, enabling those unfamiliar with the topics to master them. Additionally appealing are the historical notes on the mathematicians mentioned throughout, and a useful bibliography. A distinguishing characteristic of the book is its thorough and succinct handling of the varied topics.
Statistical Inference for Ergodic Diffusion Processes encompasses a wealth of results from over ten years of mathematical literature. It provides a comprehensive overview of existing techniques, and presents - for the first time in book form - many new techniques and approaches. An elementary introduction to the field at the start of the book introduces a class of examples - both non-standard and classical - that reappear as the investigation progresses to illustrate the merits and demerits of the procedures. The statements of the problems are in the spirit of classical mathematical statistics, and special attention is paid to asymptotically efficient procedures. Today, diffusion processes are widely used in applied problems in fields such as physics, mechanics and, in particular, financial mathematics. This book provides a state-of-the-art reference that will prove invaluable to researchers, and graduate and postgraduate students, in areas such as financial mathematics, economics, physics, mechanics and the biomedical sciences.
This book provides a thorough development of the powerful methods of heavy traffic analysis and approximations with applications to a wide variety of stochastic (e.g. queueing and communication) networks, for both controlled and uncontrolled systems. The approximating models are reflected stochastic differential equations. The analytical and numerical methods yield considerable simplifications and insights and good approximations to both path properties and optimal controls under broad conditions on the data and structure. The general theory is developed, with possibly state dependent parameters, and specialized to many different cases of practical interest. Control problems in telecommunications and applications to scheduling, admissions control, polling, and elsewhere are treated. The necessary probability background is reviewed, including a detailed survey of reflected stochastic differential equations, weak convergence theory, methods for characterizing limit processes, and ergodic problems.
As information technologies become increasingly distributed and accessible to larger numbers of people, and as commercial and government organizations are challenged to scale their applications and services to larger market shares while reducing costs, there is demand for software methodologies and applications to provide the following features: Richer application end-to-end functionality; Reduction of human involvement in the design and deployment of the software; Flexibility of software behaviour; and Reuse and composition of existing software applications and systems in novel or adaptive ways. When designing new distributed software systems, the above broad requirements and their translation into implementations are typically addressed by partially complementary and overlapping technologies, and this situation gives rise to significant software engineering challenges. Some of the challenges that may arise are: determining the components that the distributed applications should contain, organizing the application components, and determining the assumptions that one needs to make in order to implement distributed, scalable and flexible applications, etc.
This is the first book on the subject since its introduction more than fifty years ago, and it can be used as a graduate text or as a reference work. It features all of the key results, many very useful tables, and a large number of research problems. The book will appeal to those interested in one of the most fascinating areas of discrete mathematics, connected to statistics and coding theory, with applications to computer science and cryptography. It will be useful for anyone who is running experiments, whether in a chemistry lab or a manufacturing plant (trying to make those alloys stronger), or in agricultural or medical research. Sam Hedayat is Professor of Statistics and Senior Scholar in the Department of Mathematics, Statistics, and Computer Science, University of Illinois at Chicago. Neil J. A. Sloane is with AT&T Bell Labs (now AT&T Labs). John Stufken is Professor of Statistics at Iowa State University.
Intended for advanced undergraduates and graduate students, this book is a practical guide to the use of probability and statistics in experimental physics. The emphasis is on applications and understanding, and on theorems and techniques actually used in research. It is not a comprehensive text in probability and statistics; proofs are sometimes omitted if they do not contribute to intuition in understanding the theorem. The problems, some with worked solutions, introduce the student to the use of computers; occasional reference is made to routines available in the CERN library, but other systems, such as Maple, can also be used. Topics covered include: basic concepts; definitions; some simple results independent of specific distributions; discrete distributions; the normal and other continuous distributions; generating and characteristic functions; the Monte Carlo method and computer simulations; multi-dimensional distributions; the central limit theorem; inverse probability and confidence belts; estimation methods; curve fitting and likelihood ratios; interpolating functions; fitting data with constraints; and robust estimation methods. This second edition introduces a new method for dealing with small samples, such as may arise in search experiments when the data are of low probability. It also includes a new chapter on queuing problems (including a simple but useful buffer-length example). In addition, new sections discuss over- and under-coverage using confidence belts, the extended maximum-likelihood method, the use of confidence belts for discrete distributions, estimation of correlation coefficients, and the effective variance method for fitting y = f(x) when both x and y have measurement errors. A complete Solutions Manual is available.
Research in Bayesian analysis and statistical decision theory is rapidly expanding and diversifying, making it increasingly difficult for any single researcher to stay up to date on all current research frontiers. This book provides a review of current research challenges and opportunities. While the book cannot exhaustively cover all current research areas, it does include exemplary discussion of most research frontiers. Topics include objective Bayesian inference, shrinkage estimation and other decision-based estimation, model selection and testing, nonparametric Bayes, the interface of Bayesian and frequentist inference, data mining and machine learning, methods for categorical and spatio-temporal data analysis, and posterior simulation methods. Several major application areas are covered: computer models, Bayesian clinical trial design, epidemiology, phylogenetics, bioinformatics, climate modeling, and applications in political science, finance and marketing. As a review of current research in Bayesian analysis, the book presents a balance between theory and applications. The lack of a clear demarcation between theoretical and applied research reflects the highly interdisciplinary and often applied nature of research in Bayesian statistics. The book is intended as an update for researchers in Bayesian statistics, including non-statisticians who make use of Bayesian inference to address substantive research questions in other fields. It would also be useful for graduate students and research scholars in statistics or biostatistics who wish to acquaint themselves with current research frontiers.
These notes are based on lectures presented during the seminar on "Asymptotic Statistics" held at Schloß Reisensburg, Günzburg, May 29-June 5, 1988. They consist of two parts: the theory of asymptotic expansions in statistics, and probabilistic aspects of the asymptotic distribution theory in nonparametric statistics. Our intention is to provide a comprehensive presentation of these two subjects, leading from elementary facts to the advanced theory and recent results. Prospects for further research are also included. We would like to thank all participants for their stimulating discussions and their interest in the subjects, which made lecturing very pleasant. Special thanks are due to H. Zimmer for her excellent typing. We would also like to take this opportunity to express our thanks to the Gesellschaft für mathematische Forschung and to the Deutsche Mathematiker-Vereinigung, especially to Professor G. Fischer, for the opportunity to present these lectures, and to the Birkhäuser Verlag for the publication of these lecture notes. R. Bhattacharya, M. Denker. Part I: Asymptotic Expansions in Statistics (Rabi Bhattacharya). 1. CRAMÉR-EDGEWORTH EXPANSIONS. Let $Q$ be a probability measure on $(\mathbb{R}^k, \mathcal{B}^k)$, $\mathcal{B}^k$ denoting the Borel sigma-field on $\mathbb{R}^k$. Assume that the $s$-th absolute moment of $Q$ is finite, (1.1) $\rho_s := \int \|x\|^s \, Q(dx) < \infty$ for some integer $s \ge 3$, and that $Q$ is normalized, (1.2) $\int x^{(i)} Q(dx) = 0 \ (1 \le i \le k)$, $\int x^{(i)} x^{(j)} Q(dx) = \delta_{ij} \ (1 \le i, j \le k)$.
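For orientation, the classical one-term version of the expansion these notes develop can be sketched in the univariate case $k = 1$ (a standard formulation under the stated moment and normalization conditions plus Cramér's condition, not an excerpt from the notes). Here $F_n$ denotes the distribution function of $n^{-1/2}(X_1 + \dots + X_n)$ for i.i.d. $X_i$ with law $Q$, and $\Phi$, $\phi$ the standard normal distribution function and density:

```latex
% One-term Cramér-Edgeworth expansion of the distribution function of the
% normalized sum; \mu_3 = \int x^3 Q(dx) is the third moment (skewness) of Q.
F_n(x) = \Phi(x) - \phi(x)\,\frac{\mu_3}{6\sqrt{n}}\,\bigl(x^2 - 1\bigr)
         + o\!\left(n^{-1/2}\right), \qquad \text{uniformly in } x.
```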
After Karl Jöreskog's first presentation in 1970, Structural Equation Modelling, or SEM, has become a main statistical tool in many fields of science. It is the standard approach to factor analytic and causal modelling in such diverse fields as sociology, education, psychology, economics, management and the medical sciences. In addition to an extension of its application area, Structural Equation Modelling also features a continual renewal and extension of its theoretical background. The sixteen contributions to this book, written by experts from many countries, present important new developments and interesting applications in Structural Equation Modelling. The book addresses methodologists and statisticians professionally dealing with Structural Equation Modelling, to enhance their knowledge of the types of models covered and the technical problems involved in their formulation. In addition, the book offers applied researchers new ideas about the use of Structural Equation Modelling in solving their problems. Finally, the book also addresses methodologists, mathematicians and applied researchers alike who simply want to update their knowledge of recent approaches in data analysis and mathematical modelling.
Bayesian Reliability presents modern methods and techniques for analyzing reliability data from a Bayesian perspective. The adoption and application of Bayesian methods in virtually all branches of science and engineering have significantly increased over the past few decades. This increase is largely due to advances in simulation-based computational tools for implementing Bayesian methods. The authors extensively use such tools throughout this book, focusing on assessing the reliability of components and systems with particular attention to hierarchical models and models incorporating explanatory variables. Such models include failure time regression models, accelerated testing models, and degradation models. The authors pay special attention to Bayesian goodness-of-fit testing, model validation, reliability test design, and assurance test planning. Throughout the book, the authors use Markov chain Monte Carlo (MCMC) algorithms for implementing Bayesian analyses -- algorithms that make the Bayesian approach to reliability computationally feasible and conceptually straightforward. This book is primarily a reference collection of modern Bayesian methods in reliability for use by reliability practitioners. There are more than 70 illustrative examples, most of which utilize real-world data. This book can also be used as a textbook for a course in reliability and contains more than 160 exercises. Noteworthy highlights of the book include Bayesian approaches for the following:
This title is now available from Walter de Gruyter. Please see www.degruyter.com for more information. Limit theorems for semimartingales form the basis of the martingale approximation approach. The methods of martingale approximation addressed in this book pertain to estimates of the rate of convergence in the central limit theorem and in the invariance principle. Some applications of martingale approximation are illustrated by the analysis of U-statistics, rank statistics, statistics of exchangeable variables and stochastic exponential statistics. Simplified results of stochastic analysis are given for use in investigations of many applied problems, including mathematical statistics, financial mathematics, mathematical biology, industrial mathematics and engineering.
The book is devoted to new trends in random evolutions and their various applications to stochastic evolutionary systems (SES). Such new developments as analogues of Dynkin's formulae, boundary value problems, stochastic stability and optimal control of random evolutions, and stochastic evolutionary equations driven by martingale measures are considered. The book also contains such new trends in applied probability as stochastic models of financial and insurance mathematics in an incomplete market. In the famous classical Black-Scholes model of a (B, S) market for securities prices, used to describe the evolution of bond and stock prices and also of their derivatives, such as options, futures and forward contracts, it is supposed that the dynamics of bond and stock prices are governed by a linear differential equation and a linear stochastic differential equation, respectively, with the interest rate, appreciation rate and volatility assumed to be predictable processes. Also, in the Arrow-Debreu economy, the securities prices which support a Radner dynamic equilibrium are a combination of an Ito process and a random point process, with all the coefficients and jumps being predictable processes.
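For reference, the (B, S) market dynamics described above are conventionally written as the following pair of equations (a standard textbook formulation, not a formula quoted from this book), where $r$ is the interest rate, $\mu$ the appreciation rate, $\sigma$ the volatility and $W_t$ a standard Wiener process:

```latex
% (B, S) market: riskless bond B_t and risky stock S_t.
% dB_t: linear ordinary differential equation with interest rate r;
% dS_t: linear stochastic differential equation with appreciation rate \mu,
%       volatility \sigma and Wiener process W_t.
dB_t = r\,B_t\,dt, \qquad dS_t = S_t\left(\mu\,dt + \sigma\,dW_t\right)
```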
Proceedings of the 4th Pannonian Symposium on Mathematical Statistics, Bad Tatzmannsdorf, Austria, 4-10 September 1983, Volume A.
In 1978 Edwin T. Jaynes and Myron Tribus initiated a series of workshops to exchange ideas and recent developments in technical aspects and applications of Bayesian probability theory. The first workshop was held at the University of Wyoming in 1981 organized by C.R. Smith and W.T. Grandy. Due to its success, the workshop was held annually during the last 18 years. Over the years, the emphasis of the workshop shifted gradually from fundamental concepts of Bayesian probability theory to increasingly realistic and challenging applications. The 18th international workshop on Maximum Entropy and Bayesian Methods was held in Garching / Munich (Germany) (27-31. July 1998). Opening lectures by G. Larry Bretthorst and by Myron Tribus were dedicated to one of th the pioneers of Bayesian probability theory who died on the 30 of April 1998: Edwin Thompson Jaynes. Jaynes revealed and advocated the correct meaning of 'probability' as the state of knowledge rather than a physical property. This inter pretation allowed him to unravel longstanding mysteries and paradoxes. Bayesian probability theory, "the logic of science" - as E.T. Jaynes called it - provides the framework to make the best possible scientific inference given all available exper imental and theoretical information. We gratefully acknowledge the efforts of Tribus and Bretthorst in commemorating the outstanding contributions of E.T. Jaynes to the development of probability theory."
The place in survival analysis now occupied by proportional hazards models and their generalizations is so large that it is no longer conceivable to offer a course on the subject without devoting at least half of the content to this topic alone. This book focuses on the theory and applications of a very broad class of models - proportional hazards and non-proportional hazards models, the former being viewed as a special case of the latter - which underlie modern survival analysis. Researchers and students alike will find that this text differs from most recent works in that it is mostly concerned with methodological issues rather than the analysis itself.
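As standard background (not a formula taken from the book), the proportional hazards model specifies the hazard rate for a subject with covariate vector $x$ as a baseline hazard multiplied by a time-constant covariate effect, while the non-proportional hazards generalization lets that effect depend on time, so the former is recovered as the special case $\beta(t) \equiv \beta$:

```latex
% Proportional hazards: covariate effect \beta constant in time t.
\lambda(t \mid x) = \lambda_0(t)\,\exp\{\beta^{\top} x\}
% Non-proportional hazards: time-varying regression effect \beta(t).
\lambda(t \mid x) = \lambda_0(t)\,\exp\{\beta(t)^{\top} x\}
```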
Linear regression is an important area of statistics, both theoretical and applied. A large number of estimation methods have been proposed and developed for linear regression. Each has its own competitive edge, but none is good for all purposes. This manuscript focuses on the construction of an adaptive combination of two estimation methods. The purpose of such adaptive methods is to help users make an objective choice and to combine the desirable properties of the two estimators.
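As a purely illustrative sketch of the general idea (the choice of OLS and ridge as the two estimators, the weight grid and the cross-validation criterion are assumptions for illustration, not the construction developed in this book), one simple adaptive combination takes a convex mixture of two fitted estimators, with the mixing weight chosen by cross-validated prediction error:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import KFold

def adaptive_combination(X, y, weights=np.linspace(0.0, 1.0, 21), n_splits=5, seed=0):
    """Combine OLS and ridge coefficients as w*OLS + (1-w)*ridge, choosing the
    weight w that minimizes cross-validated squared prediction error."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    cv_err = np.zeros(len(weights))
    for train, test in kf.split(X):
        ols = LinearRegression().fit(X[train], y[train])
        rdg = Ridge(alpha=1.0).fit(X[train], y[train])
        for i, w in enumerate(weights):
            # Convex combination of the two sets of held-out predictions.
            pred = w * ols.predict(X[test]) + (1.0 - w) * rdg.predict(X[test])
            cv_err[i] += np.mean((y[test] - pred) ** 2)
    w_best = weights[np.argmin(cv_err)]
    # Refit both estimators on the full data and combine with the chosen weight.
    ols_full = LinearRegression().fit(X, y)
    rdg_full = Ridge(alpha=1.0).fit(X, y)
    coef = w_best * ols_full.coef_ + (1.0 - w_best) * rdg_full.coef_
    intercept = w_best * ols_full.intercept_ + (1.0 - w_best) * rdg_full.intercept_
    return coef, intercept, w_best

# Example usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, 0.5, 0.0, -0.5, 2.0]) + rng.normal(scale=1.0, size=200)
coef, intercept, w = adaptive_combination(X, y)
print("chosen weight:", w)
```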
Apart from the underlying theme that all the contributions to this volume pertain to models set in an infinite dimensional space, they differ on many counts. Some were written in the early seventies, while others are reports of ongoing research done especially with this volume in mind. Some are surveys of material that can, at least at this point in time, be deemed to have attained a satisfactory solution of the problem, while others represent initial forays into an original and novel formulation. Some furnish alternative proofs of known, and by now classical, results, while others can be seen as groping towards and exploring formulations that have not yet reached a definitive form. The subject matter also has a wide leeway, ranging from solution concepts for economies to those for games, and also including representation of preferences and discussion of purely mathematical problems, all within the rubric of choice variables belonging to an infinite dimensional space, interpreted as a commodity space or as a strategy space. Thus, this is a collective enterprise in a fairly wide sense of the term, and one with the diversity of which we have interfered as little as possible. Our motivation for bringing all of this work under one set of covers was severalfold.
This book is devoted to Corrado Gini, father of the Italian statistical school. It celebrates the 50th anniversary of his death by bearing witness to the continuing extraordinary scientific relevance of his interdisciplinary interests. The book comprises a selection of the papers presented at the conference of the Italian Statistical Society, Statistics and Demography - the Legacy of Corrado Gini, held in Treviso in September 2015. The work covers many topics linked to Gini's scientific legacy, ranging from the theory of statistical inference to multivariate statistical analysis, demography and sociology. In this volume, readers will find many interesting contributions on entropy measures, permutation procedures for the heterogeneity test, robust estimation of skew-normal parameters, the S-weighted estimator, measures of multidimensional performance using Gini's delta, small-sample confidence intervals for Gini's gamma index, Bayesian estimation of the Gini-Simpson index, spatial residential patterns of selected foreign groups, minority segregation processes, dynamic time warping to study cruise tourism, and financial stress spillover. This book will appeal to all statisticians, demographers, economists, and sociologists interested in the field.
This book gives a comprehensive review of results for associated sequences and demimartingales developed so far, with special emphasis on demimartingales and related processes. Probabilistic properties of associated sequences, demimartingales and related processes are discussed in the first six chapters. Applications of some of these results to some problems in nonparametric statistical inference for such processes are investigated in the last three chapters.
The finite element method is a numerical method widely used in engineering. This reference text is the first to discuss finite element methods for structures with large stochastic variations. Graduate students, lecturers, and researchers in mathematics, engineering, and scientific computation will find this a very useful reference.