Assume one has to estimate the mean ∫ x P(dx) (or the median of P,
or any other functional κ(P)) on the basis of i.i.d. observations
from P. If nothing is known about P, then the sample mean is
certainly the best estimator one can think of. If P is known to be
a member of a certain parametric family, say {P_θ : θ ∈ Θ}, one
can usually do better by estimating θ first, say by θ^(n),
and using ∫ x P_{θ^(n)}(dx) as an estimate for ∫ x P_θ(dx). There is
an "intermediate" range, where we know something about the unknown
probability measure P, but less than parametric theory takes for
granted. Practical problems have always led statisticians to invent
estimators for such intermediate models, but it usually remained
open whether these estimators are nearly optimal or not. There was
one exception: the case of "adaptivity", where a "nonparametric"
estimate exists which is asymptotically optimal for any parametric
submodel. The standard (and for a long time only) example of such a
fortunate situation was the estimation of the center of symmetry
for a distribution of unknown shape.
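
To make the contrast concrete, here is a minimal numerical sketch (my own illustration, not taken from the book), assuming for definiteness that the parametric family is Uniform(0, θ): its mean is θ/2, the maximum-likelihood estimate of θ is max(x_1, ..., x_n), and the resulting plug-in estimate max(x_1, ..., x_n)/2 is markedly more concentrated around the true mean than the sample mean.

import numpy as np

# Sketch under the assumed family P_theta = Uniform(0, theta): compare the
# nonparametric sample mean with the parametric plug-in estimate max(x)/2,
# obtained by first estimating theta via its MLE max(x).
rng = np.random.default_rng(0)
theta_true = 2.0                 # true parameter; the mean is theta/2 = 1.0
n, reps = 50, 2000

sample_means, plug_ins = [], []
for _ in range(reps):
    x = rng.uniform(0.0, theta_true, size=n)  # i.i.d. observations from P
    sample_means.append(x.mean())             # "nothing known about P"
    plug_ins.append(x.max() / 2.0)            # estimate theta first, plug in

true_mean = theta_true / 2.0

def mse(estimates):
    return np.mean((np.array(estimates) - true_mean) ** 2)

print(f"sample mean:   MSE = {mse(sample_means):.2e}")
print(f"plug-in (MLE): MSE = {mse(plug_ins):.2e}")

In this simulation the plug-in's mean squared error comes out roughly an order of magnitude smaller, reflecting that max(x) converges to θ at rate 1/n while the sample mean converges at rate 1/√n; the "intermediate" models discussed above sit between these two situations.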
This book presents a detailed description of the development of
statistical theory. In the mid-twentieth century, the development
of mathematical statistics underwent an enduring change due to the
advent of more refined mathematical tools. New concepts such as
sufficiency, superefficiency, and adaptivity motivated scholars to
reflect upon the interpretation of mathematical concepts in terms
of their real-world relevance. Questions concerning the optimality
of estimators, for instance, had remained unanswered for decades,
because a meaningful concept of optimality (based on the regularity
of the estimators, the representation of their limit distribution
and assertions about their concentration by means of Anderson's
Theorem) was not yet available. The rapidly developing asymptotic
theory provided approximate answers to questions for which
non-asymptotic theory had found no satisfying solutions. In four
engaging essays, this book presents a detailed description of how
the use of mathematical methods stimulated the development of a
statistical theory. Primarily focused on methodology, questionable
proofs, and neglected questions of priority, the book offers an
intriguing resource for researchers in theoretical statistics, and
can also serve as a textbook for advanced courses in statistics.