The Role of the Computer in Statistics. David Cox, Nuffield College, Oxford OX1 1NF, U.K. A classification of statistical problems via their computational demands hinges on four components: (i) the amount and complexity of the data, (ii) the specificity of the objectives of the analysis, (iii) the broad aspects of the approach to analysis, (iv) the conceptual, mathematical and numerical analytic complexity of the methods. Computational requirements may be limiting in (i) and (iv), either through the need for special programming effort, because of the difficulties of initial data management, or because of the load of detailed analysis. The implications of modern computational developments for statistical work can be illustrated in the context of the study of specific probabilistic models, the development of general statistical theory, the design of investigations and the analysis of empirical data. While simulation is usually likely to be the most sensible way of investigating specific complex stochastic models, computerized algebra has an obvious role in the more analytical work. It seems likely that statistics and applied probability have made insufficient use of developments in numerical analysis associated more with classical applied mathematics, in particular in the solution of large systems of ordinary and partial differential equations, integral equations and integro-differential equations, and for the extraction of "useful" information from integral transforms. Increasing emphasis on models incorporating specific subject-matter considerations is one route to bridging the gap between statistical analysis…
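The abstract's remark that simulation is usually the most sensible way to investigate a specific complex stochastic model can be made concrete with a minimal sketch (not taken from the book; the M/M/1 queue and all names here are illustrative assumptions): a Monte Carlo estimate of the time-averaged queue length, obtained by simulating the process directly rather than solving it analytically.

    import random

    def mm1_mean_queue(lam=0.8, mu=1.0, horizon=50_000.0, seed=1):
        # Discrete-event simulation of an M/M/1 queue: estimate the
        # time-averaged number in the system. For this toy model the
        # answer is also known in closed form, so the simulation can
        # be checked against theory.
        rng = random.Random(seed)
        t, n, area = 0.0, 0, 0.0
        while t < horizon:
            rate = lam + (mu if n > 0 else 0.0)
            dt = rng.expovariate(rate)   # time to next event
            area += n * dt               # accumulate integral of n(t)
            t += dt
            if rng.random() < lam / rate:
                n += 1                   # arrival
            else:
                n -= 1                   # departure (only possible when n > 0)
        return area / t

    # Theory gives rho / (1 - rho) = 4.0 for rho = lam/mu = 0.8.
    print(mm1_mean_queue())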
The papers assembled in this book were presented at the biennial symposium of the International Association for Statistical Computing in Neuchâtel, Switzerland, in August of 1992. This congress marked the tenth such meeting from its inception in 1974 in Vienna and maintained the tradition of providing a forum for the open discussion of progress made in computer-oriented statistics and the dissemination of new ideas throughout the statistical community. It was gratifying to see how well the groups of theoretical statisticians, software developers and applied research workers were represented, whose mixing is an event made uniquely possible by this symposium. While maintaining traditions, certain new features have been introduced at this conference: there were a larger number of invited speakers; there was more commercial sponsorship and exhibition space; and a larger body of proceedings has been published. The structure of the proceedings follows a standard format: the papers have been grouped together according to a rough subject-matter classification, and within each topic follow an approximate alphabetical order. The papers are published in two volumes according to the emphasis of the topics: volume I has a slight leaning towards statistics and modelling, while volume II is focussed more on computation; but this is certainly only a crude distinction and the volumes have to be thought of as the result of a single enterprise.
This volume consists of the published proceedings of the GLIM 85 Conference, held at Lancaster University, UK, from 16-19 September 1985. This is the second of such proceedings, the first of which was published as No. 14 of the Springer-Verlag Lecture Notes in Statistics (Gilchrist, ed., 1982). Since the 1982 conference there has been a modest update of the GLIM system, called GLIM 3.77. This incorporates some minor but pleasant enhancements, and these are outlined in these proceedings by Payne and Webb. With the completion of GLIM 3.77, future developments of the GLIM system are again under active review. Aitkin surveys possible directions for GLIM. One somewhat different avenue for analysing generalized linear models is provided by the GENSTAT system; Lane and Payne discuss the new interactive facilities provided by version 5 of GENSTAT. On the theory side, Nelder extends the concept and use of quasi-likelihood, giving useful forms of variance function and a method of introducing a random element into the linear predictor. Longford discusses one approach to the analysis of clustered observations (subjects within groups). Green and Yandell introduce 'semi-parametric modelling', allowing a compromise between parametric and non-parametric modelling. They modify the linear predictor by the addition of a (smooth) curve, and estimate parameters by maximising a penalised log-likelihood. Hastie and Tibshirani introduce generalized additive models, with a linear predictor of the form η = α + Σⱼ fⱼ(xⱼ), where the fⱼ are estimated from the data by a weighted average of neighbouring observations.
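As a rough illustration of the additive predictor η = α + Σⱼ fⱼ(xⱼ) described above (a sketch under stated assumptions, not the algorithm from the proceedings), the backfitting loop below estimates each fⱼ with a k-nearest-neighbour running mean, the simplest "weighted average of neighbouring observations". All function names and the toy data are assumptions made for illustration.

    import numpy as np

    def running_mean(x, r, k=7):
        # Smooth residuals r against covariate x by averaging the k
        # nearest neighbours (in x-order) of each point.
        order = np.argsort(x)
        r_sorted = r[order]
        n = len(x)
        half = k // 2
        smoothed = np.empty_like(r_sorted)
        for i in range(n):
            lo, hi = max(0, i - half), min(n, i + half + 1)
            smoothed[i] = r_sorted[lo:hi].mean()
        out = np.empty_like(smoothed)
        out[order] = smoothed            # undo the sort
        return out

    def backfit(X, y, n_iter=20):
        # Fit y ~ alpha + sum_j f_j(x_j) by backfitting: cycle over
        # covariates, smoothing the partial residuals for each one.
        n, p = X.shape
        alpha = y.mean()
        f = np.zeros((n, p))
        for _ in range(n_iter):
            for j in range(p):
                partial = y - alpha - f.sum(axis=1) + f[:, j]
                f[:, j] = running_mean(X[:, j], partial)
                f[:, j] -= f[:, j].mean()   # keep each f_j centred
        return alpha, f

    # Toy usage: an additive signal in two covariates.
    rng = np.random.default_rng(0)
    X = rng.uniform(-2, 2, size=(200, 2))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.1, 200)
    alpha, f = backfit(X, y)
    eta = alpha + f.sum(axis=1)             # fitted additive predictor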