During the last decades, the evolution of theoretical statistics has been marked by a considerable expansion of the number of mathematically and computationally tractable models. Faced with this inflation, applied statisticians feel more and more uncomfortable: they are often hesitant about their traditional (typically parametric) assumptions, such as normal and i.i.d., ARMA forms for time series, etc., but are at the same time afraid of venturing into the jungle of less familiar models. The problem of the justification for taking up one model rather than another one is thus a crucial one, and can take different forms.
(a) Specification: Do observations suggest the use of a different model from the one initially proposed (e.g. one which takes account of outliers), or do they render plausible a choice from among different proposed models (e.g. fixing or not the value of a certain parameter)?

(b) Model approximation: How is it possible to compute a "distance" between a given model and a less (or more) sophisticated one, and what is the technical meaning of such a "distance"?

(c) Robustness: To what extent do the qualities of a procedure, well adapted to a "small" model, deteriorate when this model is replaced by a more general one? This question can be considered not only, as usual, in a parametric framework (contamination) or in the extension from parametric to non-parametric models, but also ...