This book offers a detailed application guide to XploRe - an interactive statistical computing environment. As a guide it contains case studies of real data-analysis situations and helps the beginner in statistical data analysis learn how XploRe works in real-life applications. Many examples from practice are discussed and analysed in full. Great emphasis is put on a graphics-based understanding of the data interrelations. The case studies include survival modelling with Cox's proportional hazards regression, vitamin C data analysis with quantile regression, and many others.
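The book's own case studies are written as XploRe quantlets; purely as a rough, hypothetical sketch of the same two techniques, here they are reproduced in Python (using lifelines and statsmodels on synthetic data, not the book's datasets):

```python
# Hypothetical sketch: Cox proportional hazards and quantile regression
# in Python on synthetic data, standing in for the book's XploRe quantlets.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter           # Cox proportional hazards model
import statsmodels.formula.api as smf       # quantile regression via formulas

rng = np.random.default_rng(0)
n = 200

# Cox proportional hazards on simulated survival times: higher age raises
# the hazard (shorter times), treatment lowers it (illustrative effects only).
age = rng.normal(60, 10, n)
treatment = rng.integers(0, 2, n)
time = rng.exponential(10, n) * np.exp(-0.03 * (age - 60) + 0.5 * treatment)
event = rng.integers(0, 2, n)               # 1 = event observed, 0 = censored
surv = pd.DataFrame({"time": time, "event": event,
                     "age": age, "treatment": treatment})
cph = CoxPHFitter()
cph.fit(surv, duration_col="time", event_col="event")
cph.print_summary()                         # estimated hazard ratios

# Median (0.5-quantile) regression with heavy-tailed noise, in the spirit
# of the vitamin C quantile-regression case study.
x = rng.uniform(0, 10, n)
y = 2.0 + 0.8 * x + rng.standard_t(3, n)
median_fit = smf.quantreg("y ~ x", pd.DataFrame({"x": x, "y": y})).fit(q=0.5)
print(median_fit.params)                    # intercept and slope at the median
```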
It is generally accepted that training in statistics must include some exposure to the mechanics of computational statistics. This learning guide is intended for beginners in computer-aided statistical data analysis. The prerequisite for XploRe - the statistical computing environment - is an introductory course in statistics or mathematics. The reader of this book should be familiar with basic elements of matrix algebra and the use of HTML browsers. This guide is designed to help students to XploRe their data, to learn (via data interaction) about statistical methods and to disseminate their findings via HTML output. The XploRe APSS (Auto Pilot Support System) is a powerful tool for finding the appropriate statistical technique (quantlet) for the data under analysis. Homogeneous quantlets are combined in XploRe into quantlibs. The XploRe language is intuitive, and users with prior experience of other statistical programs will find it easy to reproduce the examples explained in this guide. The quantlets in this guide are available on the CD-ROM as well as on the Internet. The statistical operations that the student is guided through range from basic one-dimensional data analysis to more complicated tasks such as time series analysis, multivariate graphics construction, microeconometrics, panel data analysis, etc. The guide starts with a simple data analysis of pullover sales data, then introduces graphics. The graphics are interactive and cover a wide range of displays of statistical data.
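The guide's actual examples ship as quantlets on the CD-ROM and on the Internet; as a purely hypothetical illustration of the kind of first exercise described (a one-dimensional look at sales data), a few lines of Python on made-up numbers:

```python
# Hypothetical illustration (not the book's pullover data): a basic
# one-dimensional analysis of a small, made-up sales series.
import numpy as np
import matplotlib.pyplot as plt

sales = np.array([230, 181, 165, 150, 97, 192, 181, 189, 172, 170])  # synthetic figures

print("n        :", sales.size)
print("mean     :", sales.mean())
print("std      :", sales.std(ddof=1))
print("quartiles:", np.percentile(sales, [25, 50, 75]))

# A first graphic: histogram of the sales figures.
plt.hist(sales, bins=5, edgecolor="black")
plt.xlabel("items sold")
plt.ylabel("frequency")
plt.title("One-dimensional view of the (synthetic) sales data")
plt.show()
```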
Classical time series methods are based on the assumption that a particular stochastic process model generates the observed data. The most commonly used assumption is that the data are a realization of a stationary Gaussian process. However, since the Gaussian assumption is a fairly stringent one, it is frequently replaced by the weaker assumption that the process is wide-sense stationary and that only the mean and covariance sequence are specified. This approach of specifying the probabilistic behavior only up to "second order" has of course been extremely popular from a theoretical point of view, because it has allowed one to treat a large variety of problems, such as prediction, filtering and smoothing, using the geometry of Hilbert spaces. While the literature abounds with a variety of optimal estimation results based on either the Gaussian assumption or the specification of second-order properties, time series workers have not always believed in the literal truth of either the Gaussian or the second-order specification. They have nonetheless stressed the importance of such optimality results, probably for two main reasons: first, the results come from a rich and very workable theory; second, the researchers often relied on a vague belief in a kind of continuity principle, according to which the results of time series inference would change only a small amount if the actual model deviated only a small amount from the assumed model.
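As a brief aside for readers unfamiliar with the terminology, the "second-order" specification referred to here is wide-sense stationarity: only the first two moments are fixed, and both are invariant under time shifts,

```latex
% Wide-sense (second-order) stationarity: the mean is constant and the
% autocovariance depends only on the lag h, not on the time index t.
\mathbb{E}[X_t] = \mu \quad \text{for all } t,
\qquad
\operatorname{Cov}(X_t,\, X_{t+h}) = \gamma(h) \quad \text{for all } t,\ h .
```

Under this specification, prediction, filtering and smoothing become orthogonal projections in the Hilbert space of finite-variance random variables with inner product \(\langle X, Y\rangle = \mathbb{E}[XY]\), which is the geometry of Hilbert spaces alluded to above.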
You may like...
A Shakespeare Story: Shakespeare Stories…
Andrew Matthews
Paperback