This book describes the properties of stochastic point processes and develops their applied mathematics. It is useful to students and research workers in probability and statistics, and also to research workers wishing to apply stochastic point processes.
The first edition of this book (1970) set out a systematic basis for the analysis of binary data and in particular for the study of how the probability of 'success' depends on explanatory variables. The first edition has been widely used and the general level and style have been preserved in the second edition, which contains a substantial amount of new material. This amplifies matters dealt with only cryptically in the first edition and includes many more recent developments. In addition, the whole material has been reorganized, in particular to put more emphasis on maximum likelihood methods.
This monograph brings together many ideas on the analysis of survival data to present a comprehensive account of the field. The value of survival analysis is not confined to medical statistics, where the benefit of analysing data on such factors as life expectancy and duration of periods of freedom from symptoms of a disease, as related to a treatment applied, individual histories and so on, is obvious. The techniques also find important applications in industrial life testing and in a range of subjects from physics to econometrics. In the eleven chapters of the book the methods and applications of survival analysis are discussed and illustrated by examples.
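For orientation, the best-known model in this field, and one closely associated with the book's first author, is the proportional hazards model; in its usual form (standard notation of the survival literature, given here only as background) it expresses the hazard at time $t$ for an individual with covariate vector $x$ as

$$\lambda(t; x) = \lambda_0(t)\,\exp(\beta^{\mathsf{T}} x),$$

where $\lambda_0(t)$ is an unspecified baseline hazard function and $\beta$ a vector of regression coefficients.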
Large observational studies involving research questions that require the measurement of several features on each individual arise in many fields including the social and medical sciences. This book sets out both the general concepts and the more technical statistical issues involved in analysis and interpretation. Numerous illustrative examples are described in outline and four studies are discussed in some detail. The use of graphical representations of dependencies and independencies among the features under study is stressed, both to incorporate available knowledge at the planning stage of an analysis and to summarize aspects important for interpretation after detailed statistical analysis is complete. This book is aimed at research workers using statistical methods as well as statisticians involved in empirical research.
Identifying the sources and measuring the impact of haphazard variations are important in any number of research applications, from clinical trials and genetics to industrial design and psychometric testing. Only in very simple situations can such variations be represented effectively by independent, identically distributed random variables or by random sampling from a hypothetical infinite population. Components of Variance illuminates the complexities of the subject, setting forth its principles with focus on both the development of models for detailed analyses and the statistical techniques themselves. The authors first consider balanced and unbalanced situations, then move to the treatment of non-normal data, beginning with the Poisson and binomial models and followed by extensions to survival data and more general situations. In the final chapter, they discuss ways of extending and assessing various models, including the study of exceedances, the use of nonlinear representations, the study of transformations of the response variable, and the detailed examination of the distributional form of the underlying random variables. Careful signposting and numerous examples from genetic data analysis, clinical trial design, longitudinal data analysis, industrial design, and meta-analysis make this book accessible - and valuable - not only to statisticians but to all applied research scientists who use statistical methods.
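To fix ideas, the simplest balanced situation the book starts from can be illustrated by the one-way random-effects representation (standard notation, given here only for orientation):

$$Y_{ij} = \mu + a_i + \epsilon_{ij}, \qquad a_i \sim N(0, \sigma_a^2), \quad \epsilon_{ij} \sim N(0, \sigma^2),$$

so that the variance of a single observation decomposes as $\mathrm{var}(Y_{ij}) = \sigma_a^2 + \sigma^2$, the two components of variance to be estimated.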
This is a classic book on queues. First published in 1961, it clearly and concisely introduces the theory of queueing systems and is still just as relevant today. The monograph is aimed at both students and operational research workers concerned with the practical investigation of queueing, although almost every statistician will find its contents of interest.
This book should be of interest to senior undergraduate and postgraduate students of applied statistics.
A text that stresses the general concepts of the theory of statistics, Theoretical Statistics provides a systematic statement of the theory of statistics, emphasizing general concepts rather than mathematical rigor. Chapters 1 through 3 provide an overview of statistics and discuss some of the basic philosophical ideas and problems behind statistical procedures. Chapters 4 and 5 cover hypothesis testing with simple and composite null hypotheses, respectively. Subsequent chapters discuss non-parametrics, interval estimation, point estimation, asymptotics, Bayesian procedures, and decision theory. Student familiarity with standard statistical techniques is assumed.
The case-control approach is a powerful method for investigating factors that may explain a particular event. It is extensively used in epidemiology to study disease incidence, one of the best-known examples being Bradford Hill and Doll's investigation of the possible connection between cigarette smoking and lung cancer. More recently, case-control studies have been increasingly used in other fields, including sociology and econometrics. With a particular focus on statistical analysis, this book is ideal for applied and theoretical statisticians wanting an up-to-date introduction to the field. It covers the fundamentals of case-control study design and analysis as well as more recent developments, including two-stage studies, case-only studies and methods for case-control sampling in time. The latter have important applications in large prospective cohorts which require case-control sampling designs to make efficient use of resources. More theoretical background is provided in an appendix for those new to the field.
Applied statistics is more than data analysis, but it is easy to lose sight of the big picture. David Cox and Christl Donnelly distil decades of scientific experience into usable principles for the successful application of statistics, showing how good statistical strategy shapes every stage of an investigation. As you advance from research or policy question, to study design, through modelling and interpretation, and finally to meaningful conclusions, this book will be a valuable guide. Over a hundred illustrations from a wide variety of real applications make the conceptual points concrete, illuminating your path and deepening your understanding. This book is essential reading for anyone who makes extensive use of statistical methods in their work.
The analysis, prediction and interpolation of economic and other time series has a long history and many applications. Major new developments are taking place, driven partly by the need to analyze financial data. The five papers in this book describe those new developments from various viewpoints and are intended to be an introduction accessible to readers from a range of backgrounds. The book arises out of the second Séminaire Européen de Statistique (SEMSTAT), held in Oxford in December 1994. This brought together young statisticians from across Europe, and a series of introductory lectures were given on topics at the forefront of current research activity. The lectures form the basis for the five papers contained in the book. The papers by Shephard and Johansen deal respectively with time series models for volatility, i.e. variance heterogeneity, and with cointegration. Clements and Hendry analyze the nature of prediction errors. A complementary review paper by Laird gives a biometrical view of the analysis of short time series. Finally, Astrup and Nielsen give a mathematical introduction to the study of option pricing. Whilst the book draws its primary motivation from financial series and from multivariate econometric modelling, the applications are potentially much broader.
Why study the theory of experiment design? Although it can be useful to know about special designs for specific purposes, experience suggests that a particular design can rarely be used directly. It needs adaptation to accommodate the circumstances of the experiment. Successful designs depend upon adapting general theoretical principles to the special constraints of individual applications.
Likelihood and its many associated concepts are of central importance in statistical theory and applications. The theory of likelihood and of likelihood-like objects (pseudo-likelihoods) has undergone extensive and important developments over the past 10 to 15 years, in particular as regards higher order asymptotics. This book provides an account of this field, which is still vigorously expanding. Conditioning and ancillarity underlie the p*-formula, a key formula for the conditional density of the maximum likelihood estimator, given an ancillary statistic. Various types of pseudo-likelihood are discussed, including profile and partial likelihoods. Special emphasis is given to modified profile likelihood and modified directed likelihood, and their intimate connection with the p*-formula. Among the other concepts and tools employed are sufficiency, parameter orthogonality, invariance, stochastic expansions and saddlepoint approximations. Brief reviews are given of the most important properties of exponential and transformation models and these types of model are used as test-beds for the general asymptotic theory. A final chapter briefly discusses a number of more general issues, including prediction and randomization theory. The emphasis is on ideas and methods, and detailed mathematical developments are largely omitted. There are numerous notes and exercises, many indicating substantial further results.
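In its usual form (standard notation of the higher-order asymptotics literature, given here only for orientation), the p*-formula approximates the conditional density of the maximum likelihood estimator $\hat{\theta}$, given an ancillary statistic $a$, by

$$p^*(\hat{\theta} \mid a;\, \theta) = c(\theta, a)\, |\hat{\jmath}\,|^{1/2}\, \frac{L(\theta)}{L(\hat{\theta})},$$

where $L$ denotes the likelihood, $\hat{\jmath}$ the observed information evaluated at $\hat{\theta}$, and $c(\theta, a)$ a normalizing constant.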
This book should be of interest to undergraduate and postgraduate students of probability theory.
This book provides an introductory account of the mathematical analysis of stochastic processes. It is helpful for statisticians and applied mathematicians interested in methods for solving particular problems, rather than for pure mathematicians interested in general theorems.
Statistics is a subject with a vast field of application, involving problems which vary widely in their character and complexity. However, in tackling these, we use a relatively small core of central ideas and methods. This book attempts to concentrate attention on these ideas: they are placed in a general setting and illustrated by relatively simple examples, avoiding wherever possible the extraneous difficulties of complicated mathematical manipulation. In order to compress the central body of ideas into a small volume, it is necessary to assume a fair degree of mathematical sophistication on the part of the reader, and the book is intended for students of mathematics who are already accustomed to thinking in rather general terms about spaces and functions.
In this definitive book, D. R. Cox gives a comprehensive and balanced appraisal of statistical inference. He develops the key concepts, describing and comparing the main ideas and controversies over foundational issues that have been keenly argued for more than two hundred years. Continuing a sixty-year career of major contributions to statistical thought, no one is better placed to give this much-needed account of the field. An appendix gives a more personal assessment of the merits of different ideas. The content ranges from the traditional to the contemporary. While specific applications are not treated, the book is strongly motivated by applications across the sciences and associated technologies. The mathematics is kept as elementary as feasible, though previous knowledge of statistics is assumed. The book will be valued by every user or student of statistics who is serious about understanding the uncertainty inherent in conclusions from statistical analyses.
Offers a comprehensive nonmathematical treatment of the design and analysis of experiments, focusing on basic concepts rather than calculation of technical details. Much of the discussion is in terms of examples drawn from numerous fields of application. Subjects include the justification and practical difficulties of randomization, various factors occurring in factorial experiments, selecting the size of an experiment, the different purposes for which observations may be made, and much more.
This book outlines some of the general ideas involved in applying statistical methods. It discusses some special problems, to illustrate both the general principles and important specific techniques of analysis. The book is intended for students interested in statistical methods.