Showing 1 - 4 of 4 matches in All Departments
This monograph brings together my work in mathematical statistics as I have viewed it through the lens of Jordan algebras. Three technical domains are to be seen: applications to random quadratic forms (sums of squares), the investigation of algebraic simplifications of maximum likelihood estimation of patterned covariance matrices, and a more wide-open mathematical exploration of the algebraic arena from which I have drawn the results used in the statistical problems just mentioned. Chapters 1, 2, and 4 present the statistical outcomes I have developed using the algebraic results that appear, for the most part, in Chapter 3. As a less daunting, yet quite efficient, point of entry into this material, one avoiding most of the abstract algebraic issues, the reader may use the first half of Chapter 4. Here I present a streamlined, but still fully rigorous, definition of a Jordan algebra (as it is used in that chapter) and its essential properties. These facts are then immediately applied to simplifying the M-step of the EM algorithm for multivariate normal covariance matrix estimation, in the presence of linear constraints and data missing completely at random. The results presented essentially resolve a practical statistical quest begun by Rubin and Szatrowski [1982], and continued, sometimes implicitly, by many others. After this, one could then return to Chapters 1 and 2 to see how I have attempted to generalize the work of Cochran, Rao, Mitra, and others, on important and useful properties of sums of squares.
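For orientation, the M-step referenced in this blurb can be grounded with the standard unconstrained EM updates for a multivariate normal with data missing completely at random. This is textbook material, not the monograph's Jordan-algebra simplification, and the notation (x_i, theta^(t), x_i^obs) is ours:

```latex
% Standard EM updates for N(mu, Sigma) with MCAR data; the expectations
% are over the missing coordinates of x_i given the observed ones, at
% the current parameter value theta^(t).
\[
\mu^{(t+1)} = \frac{1}{n}\sum_{i=1}^{n}
  \mathbb{E}\!\left[x_i \,\middle|\, x_i^{\mathrm{obs}}, \theta^{(t)}\right],
\qquad
\Sigma^{(t+1)} = \frac{1}{n}\sum_{i=1}^{n}
  \mathbb{E}\!\left[x_i x_i^{\top} \,\middle|\, x_i^{\mathrm{obs}}, \theta^{(t)}\right]
  - \mu^{(t+1)}\,\mu^{(t+1)\top}.
\]
```

The difficulty the blurb alludes to is that, when the covariance matrix is constrained to a linear subspace of symmetric matrices, this update must be maximized over the constraint set; per the blurb, when that subspace has the quadratic (Jordan) structure the update simplifies.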
"The clearest way into the Universe is through a forest wilderness." - John Muir

As recently as 1970 the problem of obtaining optimal estimates for variance components in a mixed linear model with unbalanced data was considered a miasma of competing, generally weakly motivated estimators, with few firm guidelines and many simple, compelling but unanswered questions. Then in 1971 two significant beachheads were secured: the results of Rao [1971a, 1971b] and his MINQUE estimators, and, related to these but not originally derived from them, the results of Seely [1971] obtained as part of his introduction of the notion of quadratic subspace into the literature of variance component estimation. These two approaches were ultimately shown to be intimately related by Pukelsheim [1976], who used a linear model for the components given by Mitra [1970], and in so doing provided a mathematical framework for estimation which permitted the immediate application of many of the familiar Gauss-Markov results, methods which had earlier been so successful in the estimation of the parameters in a linear model with only fixed effects. Moreover, this usually enormous linear model for the components can be displayed as the starting point for many of the popular variance component estimation techniques, thereby unifying the subject in addition to generating answers.
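For orientation only, the "linear model for the components" can be sketched as follows; the notation is ours, and this is the generic dispersion-mean setup rather than Mitra's or Pukelsheim's exact formulation:

```latex
% Mixed model and its dispersion structure; by convention the error
% term is absorbed as one of the b_i, so some V_i = I.
\[
y = X\beta + \sum_{i=1}^{k} U_i b_i,
\qquad
\operatorname{Var}(y) = \sum_{i=1}^{k} \sigma_i^{2} V_i,
\quad V_i = U_i U_i^{\top}.
\]
% Squaring the residuals e = (I - H)y turns the variance components into
% mean parameters of a linear model, to which Gauss-Markov methods apply.
\[
w = \operatorname{vec}\!\left(e e^{\top}\right),
\qquad
\mathbb{E}[w] = \sum_{i=1}^{k} \sigma_i^{2}\,
  \operatorname{vec}\!\left((I-H)\,V_i\,(I-H)^{\top}\right).
\]
```

This also shows why the model is "usually enormous": the response w has n^2 coordinates for n observations.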
This book is for anyone who has biomedical data and needs to identify variables that predict an outcome, for two-group outcomes such as tumor/not-tumor, survival/death, or response to treatment. Statistical learning machines are ideally suited to these types of prediction problems, especially if the variables being studied may not meet the assumptions of traditional techniques. Learning machines come from the world of probability and computer science but are not yet widely used in biomedical research. This introduction brings learning machine techniques to the biomedical world in an accessible way, explaining the underlying principles in nontechnical language and using extensive examples and figures. The authors connect these new methods to familiar techniques by showing how to use the learning machine models to generate smaller, more easily interpretable traditional models. Coverage includes single decision trees, multiple-tree techniques such as Random Forests(TM), neural nets, support vector machines, nearest neighbors and boosting.
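As a concrete illustration of the workflow this description sketches (learning machine first, smaller traditional model second), here is a minimal Python sketch using scikit-learn; it is not code from the book, and the simulated data stand in for a real biomedical matrix:

```python
# Illustrative sketch only: fit a random forest to a two-group outcome,
# then use its variable-importance ranking to build a smaller, more
# interpretable logistic model. All names here are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Stand-in for a biomedical matrix: rows = subjects, columns = markers.
X, y = make_classification(n_samples=500, n_features=50,
                           n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=500, random_state=0)
forest.fit(X_tr, y_tr)

# Keep the ten most important variables and refit a traditional model.
top = np.argsort(forest.feature_importances_)[::-1][:10]
logit = LogisticRegression(max_iter=1000).fit(X_tr[:, top], y_tr)

print("forest AUC:",
      roc_auc_score(y_te, forest.predict_proba(X_te)[:, 1]))
print("reduced logit AUC:",
      roc_auc_score(y_te, logit.predict_proba(X_te[:, top])[:, 1]))
```

The point of the reduced model is interpretability: a handful of coefficients a clinician can read, ideally at little cost in discrimination.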
You may like...
- New Trends in Corpora and Language… by Ana Frankenberg-Garcia, Guy Aston, … (Hardcover), R5,286 (Discovery Miles 52 860)
- Merging Features - Computation… by Jose M. Brucart, Anna Gavarro, … (Hardcover)
- The Language of Early Childhood - Volume… by Jonathan J. Webster (Hardcover), R5,945 (Discovery Miles 59 450)
- Analogy in Grammar - Form and… by James P. Blevins, Juliette Blevins (Hardcover), R3,534 (Discovery Miles 35 340)