Showing 1 - 12 of 12 matches in All Departments
This book introduces several topics related to linear model theory: multivariate linear models, discriminant analysis, principal components, factor analysis, time series in both the frequency and time domains, and spatial data analysis. The second edition adds new material on nonparametric regression, response surface maximization, and longitudinal models. The book provides a unified approach to these disparate subjects and serves as a self-contained companion volume to the author's Plane Answers to Complex Questions: The Theory of Linear Models. Ronald Christensen is Professor of Statistics at the University of New Mexico. He is well known for his work on the theory and application of linear models having linear structure. He is the author of numerous technical articles and several books, and he is a Fellow of the American Statistical Association and the Institute of Mathematical Statistics.
Also Available:
Christensen, Ronald. Plane Answers to Complex Questions: The Theory of Linear Models, Second Edition (1996). New York: Springer-Verlag New York, Inc.
Christensen, Ronald. Log-Linear Models and Logistic Regression, Second Edition (1997). New York: Springer-Verlag New York, Inc.
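As a small illustration of one of the topics listed in this description (not code from the book), a minimal principal components sketch in R on the built-in iris measurements might look like this:

    # Principal components on the four numeric iris measurements (illustrative only).
    x <- iris[, 1:4]
    pc <- prcomp(x, scale. = TRUE)   # centre and scale, then rotate onto principal axes
    summary(pc)                      # proportion of variance explained by each component
    head(pc$x)                       # component scores for the first few observations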
The primary focus here is on log-linear models for contingency tables, but in this second edition, greater emphasis has been placed on logistic regression. The book explores topics such as logistic discrimination and generalized linear models, and builds upon the relationships between these basic models for continuous data and the analogous log-linear and logistic regression models for discrete data. It also carefully examines the differences in model interpretations and evaluations that occur due to the discrete nature of the data. Sample commands are given for analyses in SAS, BMDP, and GLIM, while numerous data sets from fields as diverse as engineering, education, sociology, and medicine are used to illustrate procedures and provide exercises. Throughout the book, the treatment is designed for students with prior knowledge of analysis of variance and regression.
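For readers unfamiliar with such models, a minimal log-linear fit in R (not one of the book's SAS/BMDP/GLIM examples) could be sketched as follows, using hypothetical counts for a 2 x 2 x 2 table:

    # Hypothetical cell counts for a 2x2x2 contingency table (illustrative only).
    counts <- c(44, 40, 112, 67, 129, 145, 12, 23)
    tab <- expand.grid(A = c("a1", "a2"), B = c("b1", "b2"), C = c("c1", "c2"))
    tab$count <- counts
    # Log-linear model with an A-B association and C independent of both.
    fit <- glm(count ~ A * B + C, family = poisson, data = tab)
    summary(fit)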
This textbook provides a wide-ranging introduction to the use and theory of linear models for analyzing data. The author's emphasis is on providing a unified treatment of linear models, including analysis of variance models and regression models, based on projections, orthogonality, and other vector space ideas. Every chapter comes with numerous exercises and examples that make it ideal for a graduate-level course. All of the standard topics are covered in depth: estimation including biased and Bayesian estimation, significance testing, ANOVA, multiple comparisons, regression analysis, and experimental design models. In addition, the book covers topics that are not usually treated at this level, but which are important in their own right: best linear and best linear unbiased prediction, split plot models, balanced incomplete block designs, testing for lack of fit, testing for independence, models with singular covariance matrices, diagnostics, collinearity, and variable selection. This new edition includes new sections on alternatives to least squares estimation and the variance-bias tradeoff, expanded discussion of variable selection, new material on characterizing the interaction space in an unbalanced two-way ANOVA, Freedman's critique of the sandwich estimator, and much more.
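To make the projection idea concrete, here is a minimal sketch in R (not taken from the book) showing that least squares fitted values are the perpendicular projection of y onto the column space of the model matrix:

    # Least squares as a perpendicular projection (illustrative only).
    set.seed(1)
    X <- cbind(1, rnorm(20), rnorm(20))            # model matrix with an intercept
    y <- as.vector(X %*% c(2, 1, -1) + rnorm(20))
    P <- X %*% solve(t(X) %*% X) %*% t(X)          # projection operator onto C(X)
    fit <- lm(y ~ X - 1)
    all.equal(as.vector(P %*% y), unname(fitted(fit)))   # TRUE: P y equals the fitted values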
Analysis of Variance, Design, and Regression: Linear Modeling for Unbalanced Data, Second Edition presents linear structures for modeling data with an emphasis on how to incorporate specific ideas (hypotheses) about the structure of the data into a linear model for the data. The book carefully analyzes small data sets by using tools that are easily scaled to big data. The tools also apply to small relevant data sets that are extracted from big data.
New to the Second Edition:
- Reorganized to focus on unbalanced data
- Reworked balanced analyses using methods for unbalanced data
- Introductions to nonparametric and lasso regression
- Introductions to general additive and generalized additive models
- Examination of homologous factors
- Unbalanced split plot analyses
- Extensions to generalized linear models
- R, Minitab(R), and SAS code on the author's website
The text can be used in a variety of courses, including a yearlong graduate course on regression and ANOVA or a data analysis course for upper-division statistics students and graduate students from other fields. It places a strong emphasis on interpreting the range of computer output encountered when dealing with unbalanced data.
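As a toy illustration of the kind of unbalanced analysis discussed (made-up data, not the code from the author's website), an unbalanced two-way layout can be fitted and summarized in R like this:

    # Unbalanced two-way layout with interaction (hypothetical data, illustrative only).
    d <- data.frame(a = factor(c(1, 1, 1, 2, 2, 1, 2, 2, 2)),
                    b = factor(c(1, 2, 1, 1, 2, 2, 2, 1, 2)))
    set.seed(2)
    d$y <- rnorm(nrow(d), mean = 2 * (d$a == "2") + (d$b == "2"))
    fit <- lm(y ~ a * b, data = d)
    anova(fit)               # sequential (Type I) sums of squares
    drop1(fit, test = "F")   # tests the interaction fitted last (respects marginality)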
This book introduces several topics related to linear model theory, including: multivariate linear models, discriminant analysis, principal components, factor analysis, time series in both the frequency and time domains, and spatial data analysis. This second edition adds new material on nonparametric regression, response surface maximization, and longitudinal models. The book provides a unified approach to these disparate subjects and serves as a self-contained companion volume to the author's Plane Answers to Complex Questions: The Theory of Linear Models. Ronald Christensen is Professor of Statistics at the University of New Mexico. He is well known for his work on the theory and application of linear models having linear structure.
This textbook provides a wide-ranging introduction to the use and theory of linear models for analyzing data. The author's emphasis is on providing a unified treatment of linear models, including analysis of variance models and regression models, based on projections, orthogonality, and other vector space ideas. Every chapter comes with numerous exercises and examples that make it ideal for a graduate-level course. All of the standard topics are covered in depth: ANOVA, estimation including Bayesian estimation, hypothesis testing, multiple comparisons, regression analysis, and experimental design models. In addition, the book covers topics that are not usually treated at this level, but which are important in their own right: balanced incomplete block designs, testing for lack of fit, testing for independence, models with singular covariance matrices, variance component estimation, best linear and best linear unbiased prediction, collinearity, and variable selection. This new edition includes a more extensive discussion of best prediction and associated ideas of R^2, as well as new sections on inner products and perpendicular projections for more general spaces and Milliken and Graybill's generalization of Tukey's one degree of freedom for nonadditivity test.
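For a rough sense of what such a nonadditivity test involves, the sketch below runs the ordinary Tukey one-degree-of-freedom test via the common two-stage regression construction on made-up data; it is illustrative only and does not reproduce the Milliken and Graybill generalization discussed in the book.

    # Tukey's one degree of freedom for nonadditivity (hypothetical unreplicated two-way data).
    d <- expand.grid(A = factor(1:4), B = factor(1:3))
    set.seed(4)
    d$y <- rnorm(nrow(d), mean = as.integer(d$A) + 2 * as.integer(d$B))
    add <- lm(y ~ A + B, data = d)           # additive model
    d$z <- fitted(add)^2                     # squared fitted values as the extra regressor
    anova(add, lm(y ~ A + B + z, data = d))  # one-degree-of-freedom F test for nonadditivity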
This textbook provides a wide-ranging introduction to the use and theory of linear models for analyzing data. The author's emphasis is on providing a unified treatment of linear models, including analysis of variance models and regression models, based on projections, orthogonality, and other vector space ideas. Every chapter comes with numerous exercises and examples that make it ideal for a graduate-level course. All of the standard topics are covered in depth: ANOVA, estimation including Bayesian estimation, hypothesis testing, multiple comparisons, regression analysis, and experimental design models. In addition, the book covers topics that are not usually treated at this level, but which are important in their own right: balanced incomplete block designs, testing for lack of fit, testing for independence, models with singular covariance matrices, variance component estimation, best linear and best linear unbiased prediction, collinearity, and variable selection. This new edition includes discussion of identifiability and its relationship to estimability, different approaches to the theories of testing parametric hypotheses and analysis of covariance, additional discussion of the geometry of least squares estimation and testing, new discussion of models for experiments with factorial treatment structures, and a new appendix on possible causes for getting test statistics that are so small as to be suspicious. Ronald Christensen is a Professor of Statistics at the University of New Mexico. He is a Fellow of the American Statistical Association and the Institute of Mathematical Statistics.
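To give a flavor of the identifiability/estimability distinction mentioned above, here is a small sketch in R (not the book's code) using a one-way layout, whose usual overparameterized design matrix is rank deficient:

    # Identifiability vs estimability in the one-way model y_ij = mu + alpha_i + e_ij
    # (hypothetical data, illustrative only).
    grp <- factor(rep(c("g1", "g2", "g3"), times = c(4, 3, 5)))
    set.seed(3)
    y <- c(1, 2, 4)[as.integer(grp)] + rnorm(length(grp))
    fit <- lm(y ~ grp)               # R imposes one full-rank parameterization (treatment contrasts)
    coef(fit)                        # mu and the individual alpha_i are not separately identifiable
    tapply(fitted(fit), grp, mean)   # but the cell means mu + alpha_i are estimable regardless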
Emphasizing the use of WinBUGS and R to analyze real data, Bayesian Ideas and Data Analysis: An Introduction for Scientists and Statisticians presents statistical tools to address scientific questions. It highlights foundational issues in statistics, the importance of making accurate predictions, and the need for scientists and statisticians to collaborate in analyzing data. The WinBUGS code provided offers a convenient platform to model and analyze a wide range of data. The first five chapters of the book contain core material that spans basic Bayesian ideas, calculations, and inference, including modeling one and two sample data from traditional sampling models. The text then covers Monte Carlo methods, such as Markov chain Monte Carlo (MCMC) simulation. After discussing linear structures in regression, it presents binomial regression, normal regression, analysis of variance, and Poisson regression, before extending these methods to handle correlated data. The authors also examine survival analysis and binary diagnostic testing. A complementary chapter on diagnostic testing for continuous outcomes is available on the book's website. The last chapter on nonparametric inference explores density estimation and flexible regression modeling of mean functions. The appropriate statistical analysis of data involves a collaborative effort between scientists and statisticians. Exemplifying this approach, Bayesian Ideas and Data Analysis focuses on the necessary tools and concepts for modeling and analyzing scientific data. Data sets and code are provided on a supplemental website.
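As a minimal taste of the Bayesian calculations described, the sketch below does a conjugate beta-binomial analysis directly in R with hypothetical data; it is not the book's WinBUGS code.

    # Conjugate Bayesian analysis of binomial data (hypothetical counts, illustrative only).
    y <- 7; n <- 20                       # observed successes and trials
    a0 <- 1; b0 <- 1                      # Beta(1, 1) prior on the success probability
    a1 <- a0 + y; b1 <- b0 + n - y        # Beta posterior parameters
    qbeta(c(0.025, 0.5, 0.975), a1, b1)   # posterior median and 95% credible interval
    draws <- rbeta(10000, a1, b1)         # posterior draws (MCMC supplies these for harder models)
    mean(draws > 0.5)                     # posterior probability the success rate exceeds 0.5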