Showing 1 - 4 of 4 matches in All Departments

Statistical Applications of Jordan Algebras (Paperback, Softcover reprint of the original 1st ed. 1994)
James D Malley
R1,469 Discovery Miles 14 690. Ships in 10 - 15 working days

This monograph brings together my work in mathematical statistics as I have viewed it through the lens of Jordan algebras. Three technical domains are to be seen: applications to random quadratic forms (sums of squares), the investigation of algebraic simplifications of maximum likelihood estimation of patterned covariance matrices, and a more wide-open mathematical exploration of the algebraic arena from which I have drawn the results used in the statistical problems just mentioned. Chapters 1, 2, and 4 present the statistical outcomes I have developed using the algebraic results that appear, for the most part, in Chapter 3. As a less daunting, yet quite efficient, point of entry into this material, one avoiding most of the abstract algebraic issues, the reader may use the first half of Chapter 4. Here I present a streamlined, but still fully rigorous, definition of a Jordan algebra (as it is used in that chapter) and its essential properties. These facts are then immediately applied to simplifying the M-step of the EM algorithm for multivariate normal covariance matrix estimation, in the presence of linear constraints, and data missing completely at random. The results presented essentially resolve a practical statistical quest begun by Rubin and Szatrowski [1982], and continued, sometimes implicitly, by many others. After this, one could then return to Chapters 1 and 2 to see how I have attempted to generalize the work of Cochran, Rao, Mitra, and others, on important and useful properties of sums of squares.

Optimal Unbiased Estimation of Variance Components (Paperback, Softcover reprint of the original 1st ed. 1986)
James D Malley
R1,489 Discovery Miles 14 890. Ships in 10 - 15 working days

"The clearest way into the Universe is through a forest wilderness." (John Muir) As recently as 1970 the problem of obtaining optimal estimates for variance components in a mixed linear model with unbalanced data was considered a miasma of competing, generally weakly motivated estimators, with few firm guidelines and many simple, compelling but unanswered questions. Then in 1971 two significant beachheads were secured: the results of Rao [1971a, 1971b] and his MINQUE estimators, and related to these but not originally derived from them, the results of Seely [1971] obtained as part of his introduction of the notion of quadratic subspace into the literature of variance component estimation. These two approaches were ultimately shown to be intimately related by Pukelsheim [1976], who used a linear model for the components given by Mitra [1970], and in so doing, provided a mathematical framework for estimation which permitted the immediate application of many of the familiar Gauss-Markov results, methods which had earlier been so successful in the estimation of the parameters in a linear model with only fixed effects. Moreover, this usually enormous linear model for the components can be displayed as the starting point for many of the popular variance component estimation techniques, thereby unifying the subject in addition to generating answers.

Statistical Learning for Biomedical Data (Paperback, New title)
James D Malley, Karen G. Malley, Sinisa Pajevic
R1,217 Discovery Miles 12 170. Ships in 12 - 17 working days

This book is for anyone who has biomedical data and needs to identify variables that predict an outcome, for two-group outcomes such as tumor/not-tumor, survival/death, or response/no response to treatment. Statistical learning machines are ideally suited to these types of prediction problems, especially if the variables being studied may not meet the assumptions of traditional techniques. Learning machines come from the world of probability and computer science but are not yet widely used in biomedical research. This introduction brings learning machine techniques to the biomedical world in an accessible way, explaining the underlying principles in nontechnical language and using extensive examples and figures. The authors connect these new methods to familiar techniques by showing how to use the learning machine models to generate smaller, more easily interpretable traditional models. Coverage includes single decision trees, multiple-tree techniques such as Random Forests (TM), neural nets, support vector machines, nearest neighbors and boosting.

Statistical Learning for Biomedical Data (Hardcover, New title)
James D Malley, Karen G. Malley, Sinisa Pajevic
R3,478 Discovery Miles 34 780. Ships in 10 - 15 working days

Same description as the paperback edition above.
