This lively book lays out a methodology of confidence distributions
and puts them through their paces. Among other merits, they lead to
optimal combinations of confidence from different sources of
information, and they can make complex models amenable to objective
and indeed prior-free analysis for less subjectively inclined
statisticians. The generous mixture of theory, illustrations,
applications and exercises is suitable for statisticians at all
levels of experience, as well as for data-oriented scientists. Some
confidence distributions are less dispersed than their competitors.
This concept leads to a theory of risk functions and comparisons
for distributions of confidence. Neyman-Pearson type theorems
leading to optimal confidence are developed and richly illustrated.
Exact and optimal confidence distributions are the gold standard for
inferred epistemic distributions. Confidence distributions and
likelihood functions are intertwined, allowing prior distributions
to be made part of the likelihood. Meta-analysis in likelihood
terms is developed and taken beyond traditional methods, suiting it
in particular to combining information across diverse data sources.
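As a point of reference, here is the standard textbook example of a
confidence distribution (a generic illustration, not a formula quoted
from this book): for X_1, ..., X_n drawn from N(mu, sigma^2) with
sigma known, the confidence distribution for the mean mu is the cdf

    C(\mu) = \Phi\left( \frac{\sqrt{n}\,(\mu - \bar{x})}{\sigma} \right),

whose quantiles C^{-1}(0.025) and C^{-1}(0.975) recover the usual 95%
confidence interval for mu.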
Given a data set, you can fit thousands of models at the push of a
button, but how do you choose the best? With so many candidate
models, overfitting is a real danger. Is the monkey who typed
Hamlet actually a good writer? Choosing a model is central to all
statistical work with data. We have seen rapid advances in model
fitting and in the theoretical understanding of model selection,
yet this book is the first to synthesize research and practice from
this active field. Model choice criteria are explained, discussed
and compared, including the AIC, BIC, DIC and FIC. The
uncertainties involved with model selection are tackled, with
discussions of frequentist and Bayesian methods; model averaging
schemes are presented. Real-data examples are complemented by
derivations providing deeper insight into the methodology, and
instructive exercises build familiarity with the methods. The
companion website features data sets and R code.
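To make the criteria concrete, here is a minimal sketch (in Python
rather than the book's R, and not taken from the companion website)
of comparing polynomial fits by AIC and BIC:

    import numpy as np

    def gaussian_aic_bic(y, yhat, k):
        # AIC = 2k - 2 log L and BIC = k log(n) - 2 log L for a
        # Gaussian fit with k parameters (including the variance).
        n = len(y)
        sigma2 = np.sum((y - yhat) ** 2) / n   # ML variance estimate
        loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
        return 2 * k - 2 * loglik, k * np.log(n) - 2 * loglik

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 50)
    y = 1 + 2 * x + rng.normal(scale=0.3, size=50)  # truth: linear

    for degree in range(1, 6):                      # candidate models
        yhat = np.polyval(np.polyfit(x, y, degree), x)
        aic, bic = gaussian_aic_bic(y, yhat, k=degree + 2)
        print(f"degree {degree}: AIC={aic:7.2f}  BIC={bic:7.2f}")

Both criteria penalize the extra parameters of the higher-degree
fits; BIC's log(n) penalty is the harsher of the two.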
Highly Structured Stochastic Systems (HSSS) is a modern strategy for building statistical models for challenging real-world problems, for computing with them, and for interpreting the resulting inference. The aim of this book is to make recent developments in HSSS accessible to a general statistical audience including graduate students and researchers.
Bayesian nonparametrics works - theoretically, computationally. The
theory provides highly flexible models whose complexity grows
appropriately with the amount of data. Computational issues, though
challenging, are no longer intractable. All that is needed is an
entry point: this intelligent book is the perfect guide to what can
seem a forbidding landscape. Tutorial chapters by Ghosal, Lijoi and
Prünster, Teh and Jordan, and Dunson advance from theory, to basic
models and hierarchical modeling, to applications and
implementation, particularly in computer science and biostatistics.
These are complemented by companion chapters by the editors and
Griffin and Quintana, providing additional models, examining
computational issues, identifying future growth areas, and giving
links to related topics. This coherent text gives ready access both
to underlying principles and to state-of-the-art practice. Specific
examples are drawn from information retrieval, NLP, machine vision,
computational biology, biostatistics, and bioinformatics.
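As one concrete entry point (a generic sketch of Sethuraman's
stick-breaking construction, not code from the book), a truncated
draw from a Dirichlet process with concentration alpha and a N(0, 1)
base measure can be simulated as follows:

    import numpy as np

    def dp_stick_breaking(alpha, K, rng):
        # Stick proportions b_k ~ Beta(1, alpha); the k-th weight is
        # w_k = b_k * prod_{j<k} (1 - b_j).
        betas = rng.beta(1.0, alpha, size=K)
        remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
        weights = betas * remaining
        atoms = rng.normal(size=K)      # draws from the base measure
        return weights, atoms

    rng = np.random.default_rng(1)
    w, theta = dp_stick_breaking(alpha=2.0, K=100, rng=rng)
    # The (truncated) draw is the discrete distribution
    # sum_k w_k * delta_{theta_k}; sample five observations from it:
    print(rng.choice(theta, size=5, p=w / w.sum()))

The weights decay stochastically with k, so the truncation level K
controls how much of the random measure's mass is retained.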