Showing 1 - 4 of 4 matches in All Departments

Statistical Applications of Jordan Algebras (Paperback, Softcover reprint of the original 1st ed. 1994)
James D Malley
R1,360 Discovery Miles 13 600 Ships in 18 - 22 working days

This monograph brings together my work in mathematical statistics as I have viewed it through the lens of Jordan algebras. Three technical domains are to be seen: applications to random quadratic forms (sums of squares), the investigation of algebraic simplifications of maximum likelihood estimation of patterned covariance matrices, and a more wide-open mathematical exploration of the algebraic arena from which I have drawn the results used in the statistical problems just mentioned. Chapters 1, 2, and 4 present the statistical outcomes I have developed using the algebraic results that appear, for the most part, in Chapter 3. As a less daunting, yet quite efficient, point of entry into this material, one avoiding most of the abstract algebraic issues, the reader may use the first half of Chapter 4. Here I present a streamlined, but still fully rigorous, definition of a Jordan algebra (as it is used in that chapter) and its essential properties. These facts are then immediately applied to simplifying the M-step of the EM algorithm for multivariate normal covariance matrix estimation, in the presence of linear constraints, and data missing completely at random. The results presented essentially resolve a practical statistical quest begun by Rubin and Szatrowski [1982], and continued, sometimes implicitly, by many others. After this, one could then return to Chapters 1 and 2 to see how I have attempted to generalize the work of Cochran, Rao, Mitra, and others, on important and useful properties of sums of squares.
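The constrained M-step the blurb refers to can be pictured with a minimal NumPy sketch. This is not the book's Jordan-algebra machinery, only the generic update it simplifies: for a zero-mean multivariate normal, the M-step forms the covariance from the (E-step completed) sufficient statistics and then enforces the linear constraint on Sigma. The diagonal constraint and all variable names here are illustrative assumptions.

```python
import numpy as np

# Stand-in for data completed by the E-step (assumed, for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))

# Unconstrained M-step: MLE of the covariance of a zero-mean normal.
S = X.T @ X / len(X)

# Illustrative linear constraint: Sigma restricted to be diagonal.
# Projecting S onto the constraint set gives the constrained update.
Sigma_hat = np.diag(np.diag(S))
```

In practice the constraint set is a linear space of patterned covariance matrices, and the book's point is that Jordan-algebra structure lets this projection step be carried out in closed form.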

Optimal Unbiased Estimation of Variance Components (Paperback, Softcover reprint of the original 1st ed. 1986)
James D Malley
R1,378 Discovery Miles 13 780 Ships in 18 - 22 working days

"The clearest way into the Universe is through a forest wilderness." (John Muir) As recently as 1970 the problem of obtaining optimal estimates for variance components in a mixed linear model with unbalanced data was considered a miasma of competing, generally weakly motivated estimators, with few firm guidelines and many simple, compelling but unanswered questions. Then in 1971 two significant beachheads were secured: the results of Rao [1971a, 1971b] and his MINQUE estimators, and related to these but not originally derived from them, the results of Seely [1971] obtained as part of his introduction of the notion of quadratic subspace into the literature of variance component estimation. These two approaches were ultimately shown to be intimately related by Pukelsheim [1976], who used a linear model for the components given by Mitra [1970], and in so doing, provided a mathematical framework for estimation which permitted the immediate application of many of the familiar Gauss-Markov results, methods which had earlier been so successful in the estimation of the parameters in a linear model with only fixed effects. Moreover, this usually enormous linear model for the components can be displayed as the starting point for many of the popular variance component estimation techniques, thereby unifying the subject in addition to generating answers.

Statistical Learning for Biomedical Data (Hardcover, New title)
James D Malley, Karen G. Malley, Sinisa Pajevic
R3,215 Discovery Miles 32 150 Ships in 10 - 15 working days

This book is for anyone who has biomedical data and needs to identify variables that predict an outcome, for two-group outcomes such as tumor/not-tumor, survival/death, or response to treatment. Statistical learning machines are ideally suited to these types of prediction problems, especially if the variables being studied may not meet the assumptions of traditional techniques. Learning machines come from the world of probability and computer science but are not yet widely used in biomedical research. This introduction brings learning machine techniques to the biomedical world in an accessible way, explaining the underlying principles in nontechnical language and using extensive examples and figures. The authors connect these new methods to familiar techniques by showing how to use the learning machine models to generate smaller, more easily interpretable traditional models. Coverage includes single decision trees, multiple-tree techniques such as Random Forests (TM), neural nets, support vector machines, nearest neighbors and boosting.
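Of the methods the blurb lists, nearest neighbors is the simplest to show in a few lines. The following toy sketch of the two-group prediction task uses a plain k-nearest-neighbour vote; the data points, labels, and function name are all invented for illustration and are not drawn from the book.

```python
import math
from collections import Counter

def knn_predict(train, labels, x, k=3):
    """Classify point x by majority vote among its k nearest training points."""
    nearest = sorted(range(len(train)), key=lambda i: math.dist(train[i], x))
    votes = Counter(labels[i] for i in nearest[:k])
    return votes.most_common(1)[0][0]

# Invented two-group data: feature pairs labelled "tumor" / "not-tumor".
train = [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9),
         (3.0, 3.2), (3.1, 2.9), (2.8, 3.0)]
labels = ["not-tumor"] * 3 + ["tumor"] * 3
```

A new patient's feature pair is then classified by `knn_predict(train, labels, (3.0, 3.0))`; the book's interest is in what happens when such rules are applied to real biomedical variables that violate classical modelling assumptions.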

Statistical Learning for Biomedical Data (Paperback, New title)
James D Malley, Karen G. Malley, Sinisa Pajevic
R1,248 Discovery Miles 12 480 Ships in 10 - 15 working days


You may like...
Gotcha Digital-Midsize 30 M-WR Ladies…
R250 R216 Discovery Miles 2 160
PostUCare™ 3-in-1 Ergonomic & Posture…
 (1)
R2,599 R2,099 Discovery Miles 20 990
Xbox One Replacement Case
 (8)
R79 Discovery Miles 790
Russell Hobbs Toaster (2 Slice…
R727 Discovery Miles 7 270
Loot
Nadine Gordimer Paperback  (2)
R367 R340 Discovery Miles 3 400
LocknLock Pet Food Container (560+310ml)
R115 Discovery Miles 1 150
Asus ZenScreen MB16ACV 15.6" FHD IPS…
R5,999 R5,399 Discovery Miles 53 990
Parrot Markerboard Writing Slate Starter…
R256 R199 Discovery Miles 1 990
Die Wonder Van Die Skepping - Nog 100…
Louie Giglio Hardcover R279 R257 Discovery Miles 2 570

 
