A unique and comprehensive text on the philosophy of model-based
data analysis and on strategy for the analysis of empirical data. The
book introduces information-theoretic approaches and focuses
critical attention on a priori modeling and on selecting an
approximating model that best represents the inference supported by
the data. It contains several new approaches to estimating model
selection uncertainty and to incorporating that uncertainty into
estimates of precision. An array of examples illustrates
various technical issues. The text is written for biologists
and statisticians who use models to make inferences from empirical
data.
The second edition of this book is unique in that it focuses on methods for making formal statistical inference from all the models in an a priori set (multimodel inference). A philosophy is presented for model-based data analysis, and a general strategy is outlined for the analysis of empirical data. The book invites increased attention to a priori science hypotheses and modeling.

Kullback-Leibler information is a fundamental quantity in science and is Hirotugu Akaike's basis for model selection. The maximized log-likelihood can be bias-corrected to give an estimator of expected, relative Kullback-Leibler information. This leads to Akaike's Information Criterion (AIC) and various extensions. These methods are relatively simple and easy to use in practice, yet rest on deep statistical theory. The information-theoretic approaches provide a unified and rigorous theory, an extension of likelihood theory, and an important application of information theory, and they are objective and practical to employ across a very wide class of empirical problems.

The book presents several new ways to incorporate model selection uncertainty into parameter estimates and estimates of precision. An array of challenging examples illustrates various technical issues. This is an applied book written primarily for biologists and statisticians who want to make inferences from multiple models, and it is suitable as a graduate text or as a reference for professional analysts.
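The AIC machinery summarized above can be sketched in a few lines. The formulas here are the standard ones from the information-theoretic literature (AIC = -2 log L + 2K, and Akaike weights from the AIC differences); the model set and its log-likelihood values are hypothetical, purely for illustration.

```python
import math

def aic(log_likelihood, k):
    """Akaike's Information Criterion: AIC = -2*log(L) + 2*K,
    where log(L) is the maximized log-likelihood and K is the
    number of estimated parameters."""
    return -2.0 * log_likelihood + 2.0 * k

def akaike_weights(aic_values):
    """Convert the AIC values of a model set into Akaike weights:
    w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2), where
    delta_i is each model's AIC difference from the best model.
    The weights sum to 1 and quantify relative support for each model."""
    best = min(aic_values)
    rel = [math.exp(-0.5 * (a - best)) for a in aic_values]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical model set: (maximized log-likelihood, number of parameters)
models = [(-120.3, 3), (-118.9, 5), (-119.7, 4)]
aics = [aic(ll, k) for ll, k in models]
weights = akaike_weights(aics)
```

The weights, not a single "best" model, are the starting point for multimodel inference: model-averaged estimates weight each model's estimate by its Akaike weight.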
This book concerns the use of distance sampling to estimate the
density or abundance of biological populations. Line and point
transect sampling are the primary distance methods: lines or
points are surveyed in the field, and the observer records a
distance to each object of interest that is detected. The sample
data are the set of distances of detected objects and any relevant
covariates; many objects, however, may remain undetected during the
course of the survey. Distance sampling provides a way to obtain
reliable estimates of density under fairly mild assumptions. It is
an extension of plot sampling methods, in which all objects within
sample plots are assumed to be counted. The objective of this book
is to provide a comprehensive treatment of distance sampling theory
and application, with emphasis on line and point transects.
Specialized applications, such as trapping webs and cue counts, are
noted briefly, and general consideration is given to the design of
distance sampling surveys.
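The core line-transect calculation can be sketched briefly. This is a minimal illustration, not the book's full treatment: it assumes a half-normal detection function g(x) = exp(-x^2 / (2*sigma^2)), a common simple choice in the distance-sampling literature, and the distances and transect length are hypothetical. Under that model, sigma^2 is estimated from the perpendicular distances, the effective strip half-width is mu = sigma * sqrt(pi/2), and density is D = n / (2 * L * mu).

```python
import math

def halfnormal_density_estimate(distances, line_length):
    """Line-transect density estimate assuming a half-normal
    detection function g(x) = exp(-x^2 / (2*sigma^2)).
    The MLE of sigma^2 from perpendicular distances is sum(x^2)/n;
    the effective strip half-width is mu = sigma * sqrt(pi/2);
    estimated density is D = n / (2 * L * mu)."""
    n = len(distances)
    sigma2 = sum(x * x for x in distances) / n          # MLE of sigma^2
    mu = math.sqrt(sigma2) * math.sqrt(math.pi / 2.0)   # effective half-width
    return n / (2.0 * line_length * mu)

# Hypothetical survey: 8 detections along a 1000 m transect
# (perpendicular distances in metres); result is objects per m^2.
dists = [2.1, 5.0, 0.8, 3.3, 7.2, 1.5, 4.4, 2.9]
density = halfnormal_density_estimate(dists, 1000.0)
```

The factor 2 appears because objects are detected on both sides of the line; in practice the detection function is chosen and checked against the data rather than assumed.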