In this edited book, expert assessors illustrate through case examples how they apply psychoanalytic theory in different clinical settings, including private practice, neuropsychological, medical, forensic, personnel, custody, school, and psychiatric-residential settings. Psychoanalytic Assessment Applications for Different Settings allows the reader to track the assessor's work from start to finish. Each chapter presents a description of the clinical setting in which the assessment occurred; a detailed review of the referral and patient history; test selection and test findings, with supporting data drawn from self-report and from cognitive and personality performance-based measures; psychiatric and psychodynamic diagnoses; implications and recommendations; discussion of the feedback process; and the assessor's self-reflections on the case. Throughout the book, psychodynamic concepts are used to help understand the test data. The authors are experts in the psychodynamic assessment of clients in private practice, educational, medical, neuropsychological, and forensic settings. The findings are derived from methods particular to each setting, with supporting data highlighted and woven throughout the interpretive process. Students, educators, practitioners, and the professionals who collaborate with assessors will benefit from this book's offerings.
Measurement Theory in Action, Third Edition, helps readers apply testing and measurement theories. It features 22 self-contained modules that instructors can match to their courses. Each module opens with an overview of a measurement issue and a step-by-step application of the relevant theory. Best Practices provide recommendations for ensuring the appropriate application of the theory, and Practical Questions help students assess their understanding of the topic. Students can apply the material to real data in the Exercises, some of which require no computer access, while others involve the use of statistical software. Case Studies in each module depict typical dilemmas faced when applying measurement theory, followed by Questions to Ponder that encourage critical examination of the issues raised. The book's website houses the data sets, additional exercises, PowerPoint slides, and more. Other features include suggested readings to further one's understanding of the topics, a glossary, and a comprehensive exercise in Appendix A that incorporates many of the steps in the development of a measure of typical performance. Updated throughout to reflect recent changes in the field, the new edition also features:
- Recent changes in understanding measurement, with over 50 new and updated references
- Explanations of why each chapter, article, or book in each module's Further Readings section is recommended
Instructors will find suggested answers to the book's questions and exercises; detailed solutions to the exercises; a test bank with 10 multiple-choice and 5 short-answer questions for each module; and PowerPoint slides. Students and instructors can access SPSS data sets, additional exercises, the glossary, and additional information helpful in understanding psychometric concepts. The book is ideal as a text for any psychometrics or testing and measurement course taught in psychology, education, marketing, and management. It is also an invaluable reference for professional researchers in need of a quick refresher on applying measurement theory.
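As a hypothetical companion to the kind of psychometric exercise described above (the book itself works with SPSS data sets; the code and simulated data here are our own), Cronbach's alpha, a staple internal-consistency statistic, can be computed directly from an items matrix:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an n_respondents x k_items matrix of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# simulate 4 noisy items measuring one underlying true score
rng = np.random.default_rng(5)
true_score = rng.normal(size=(300, 1))
items = true_score + rng.normal(scale=0.8, size=(300, 4))
alpha = cronbach_alpha(items)
```

With four parallel items of modest error variance, alpha should land near the value the Spearman-Brown formula predicts for this reliability and test length.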
Life Events and Emotional Disorder Revisited explores the variety of events that can occur, their inherent characteristics, how they affect our lives and emotions, and in turn their impact on our mental health and wellbeing. The book focuses on current social problems, nationally and internationally, showing the reach of life events research, including events linked to Covid-19. It also discusses trauma experiences and how they fit into the life events scheme. To underpin the various life event dimensions identified (such as loss, danger and humiliation), the authors have developed an underlying model of human needs jeopardised by the most damaging life events, encompassing attachment, security, identity and achievement. The book brings together classic research findings with new advances in the field of life events research, culminating in a new theoretical framework of life events that includes new discussions of trauma and positive events and an online methodology for measuring them. Additionally, it draws out the clinical implications to apply the research for improved practice. The book will be of interest to researchers, clinicians and students in psychology, psychiatry and psychotherapy, broadening their understanding of how life events impact individuals and how this can be applied to enhance clinical practice and stimulate future research.
Partial least squares structural equation modelling (PLS-SEM) is becoming a popular statistical framework in many fields and disciplines of the social sciences. The main reason for this popularity is that PLS-SEM can be used to estimate models comprising latent variables, observed variables, or a combination of the two. Its popularity is predicted to increase further with the development of new and more robust estimation approaches, such as consistent PLS-SEM. Both traditional and modern estimation methods for PLS-SEM are now readily available in open-source and commercial software packages. This book presents PLS-SEM as a practical statistical toolbox for estimating many different types of research models. In doing so, the authors provide the necessary technical prerequisites and theoretical treatment of various aspects of PLS-SEM before turning to practical applications. What makes the book unique is that it thoroughly explains and extensively uses the comprehensive Stata (plssem) and R (cSEM and plspm) packages for carrying out PLS-SEM analysis. The book aims to help the reader understand the mechanics behind PLS-SEM as well as how to perform it for publication purposes. Features:
- Intuitive and technical explanations of PLS-SEM methods
- Complete explanations of the Stata and R packages
- Many example applications of the methodology
- Detailed interpretation of software output
- Guidance on reporting a PLS-SEM study
- A GitHub repository for supplementary book material
The book is primarily aimed at researchers and graduate students from statistics, social science, psychology, and other disciplines. Technical details have been moved from the main body of the text into appendices, but a solid background in linear regression analysis will be helpful.
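To make the mechanics concrete, here is a minimal, hypothetical sketch of the traditional PLS path algorithm (mode-A outer weights, centroid inner scheme) for a single path between two latent variables. The simulated data and code are our own illustration; this is not the consistent PLS-SEM variant, nor the plssem, cSEM, or plspm implementations:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
xi1 = rng.normal(size=n)                          # latent exogenous variable
xi2 = 0.7 * xi1 + rng.normal(scale=0.5, size=n)   # latent endogenous variable
# three noisy indicators per latent variable
X1 = np.column_stack([xi1 + rng.normal(scale=0.4, size=n) for _ in range(3)])
X2 = np.column_stack([xi2 + rng.normal(scale=0.4, size=n) for _ in range(3)])

def standardize(a):
    return (a - a.mean(axis=0)) / a.std(axis=0)

X1, X2 = standardize(X1), standardize(X2)
w1, w2 = np.ones(3), np.ones(3)                   # initial outer weights
for _ in range(100):
    y1, y2 = standardize(X1 @ w1), standardize(X2 @ w2)  # composite scores
    r = np.corrcoef(y1, y2)[0, 1]
    z1, z2 = np.sign(r) * y2, np.sign(r) * y1     # centroid inner estimates
    w1_new, w2_new = X1.T @ z1 / n, X2.T @ z2 / n # mode-A outer weights
    if np.allclose(w1_new, w1) and np.allclose(w2_new, w2):
        break
    w1, w2 = w1_new, w2_new
y1, y2 = standardize(X1 @ w1), standardize(X2 @ w2)
beta = np.corrcoef(y1, y2)[0, 1]                  # path coefficient xi1 -> xi2
```

With a true structural effect of 0.7 and mildly noisy indicators, the estimated path coefficient comes out clearly positive and below one.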
This book introduces a new data analysis technique that addresses long-standing criticisms of current standard statistics. Observation Oriented Modelling presents the mathematics and techniques underlying the new method, discussing causality, modelling, and logical hypothesis testing. Examples of how to approach and interpret data using OOM are presented throughout the book, including analyses of several classic studies in psychology. These analyses are conducted using comprehensive software for the Windows operating system, written to accompany the book and provided free to book buyers on an accompanying website. The software has a user-friendly interface similar to SPSS and SAS, the two most commonly used analysis packages, and its analysis options are flexible enough to replace numerous traditional techniques such as t-tests, ANOVA, correlation, multiple regression, mediation analysis, chi-square tests, factor analysis, and inter-rater reliability. The output and graphs generated by the software are easy to interpret, and all effect sizes are presented in a common metric; namely, the number of observations correctly classified by the algorithm. The software is designed so that undergraduate students in psychology will have no difficulty learning to use it and interpreting the results of the analyses.
- Describes the problems that statistics are meant to answer, why popularly used statistics often fail to fully answer the question, and how OOM overcomes these obstacles
- Chapters include examples of statistical analysis using OOM
- Software for OOM comes free with the book
- Accompanying website includes video instruction on OOM use
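The "observations correctly classified" metric mentioned above can be illustrated with a generic, hypothetical two-group example (our own sketch, not the OOM software's actual algorithm): classify each observation against a midpoint cutoff and count the hits.

```python
import numpy as np

rng = np.random.default_rng(11)
group = np.repeat([0, 1], 50)                     # two known groups
scores = rng.normal(loc=1.2 * group, scale=1.0)   # group 1 scores run higher
# midpoint between the two group means as a classification cutoff
cutoff = (scores[group == 0].mean() + scores[group == 1].mean()) / 2
pred = (scores > cutoff).astype(int)
pcc = (pred == group).mean()                      # percent correctly classified
```

Unlike variance-based effect sizes, this metric answers a question stated at the level of individual observations: how many cases does the model get right?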
This book provides an overview of the innovative, arts-based research method of body mapping and offers a snapshot of the field. The review of body mapping projects by Boydell et al. confirms the potential research and therapeutic benefits associated with body mapping. The book describes a series of body mapping research projects that focus on populations marginalised by disability, mental health status, and other vulnerable identities. Chapters focus on summarising the current state of the art and its application with marginalised groups; analytic strategies for body mapping; highlighting body mapping as a creation and a dissemination process; emerging body mapping techniques including web-based, virtual reality, and wearable technology applications; and measuring the impact of body maps on planning, practice, and behaviour. Contributors and editors include interdisciplinary experts from the fields of psychology, sociology, anthropology, and beyond. Offering innovative ways of engaging in body mapping research, which result in real-world impact, this book is an essential resource for postgraduate students and researchers.
Praise for previous editions: "... a classic with a long history." - Statistical Papers "The fact that the first edition of this book was published in 1971 ... [is] testimony to the book's success over a long period." - ISI Short Book Reviews "... one of the best books available for a theory course on nonparametric statistics. ... very well written and organized ... recommended for teachers and graduate students." - Biometrics "... There is no competitor for this book and its comprehensive development and application of nonparametric methods. Users of one of the earlier editions should certainly consider upgrading to this new edition." - Technometrics "... Useful to students and research workers ... a good textbook for a beginning graduate-level course in nonparametric statistics." - Journal of the American Statistical Association Since its first publication in 1971, Nonparametric Statistical Inference has been widely regarded as the source for learning about nonparametrics. The Sixth Edition carries on this tradition and incorporates computer solutions based on R. Features:
- Covers the most commonly used nonparametric procedures
- States the assumptions, develops the theory behind the procedures, and illustrates the techniques using realistic examples from the social, behavioral, and life sciences
- Presents tests of hypotheses, confidence-interval estimation, sample size determination, power, and comparisons of competing procedures
- Includes an appendix of user-friendly tables needed for solutions to all data-oriented examples
- Gives examples of computer applications based on R, MINITAB, STATXACT, and SAS
- Lists over 100 new references
Nonparametric Statistical Inference, Sixth Edition, has been thoroughly revised and rewritten to make it more readable and reader-friendly. All of the R solutions are new, making the book far more useful for modern applications, and it has been updated throughout with over 100 new citations, including some of the most recent, to keep it current and useful for researchers.
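For readers who reach for Python rather than the R, MINITAB, STATXACT, or SAS examples in the book, a common nonparametric procedure such as the Mann-Whitney U test is a one-liner in SciPy (the small data set here is made up purely for illustration):

```python
from scipy import stats

a = [12, 15, 9, 20, 17]
b = [8, 11, 13, 7, 10]
u, p = stats.mannwhitneyu(a, b, alternative="two-sided")

# U has a direct counting interpretation: the number of pairs (a_i, b_j)
# with a_i > b_j, which is what makes the test distribution-free
u_manual = sum(ai > bj for ai in a for bj in b)
```

For samples this small and without ties, SciPy computes the exact permutation p-value rather than a normal approximation.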
Mathematical Theory of Bayesian Statistics introduces the mathematical foundation of Bayesian inference, which is well known to be more accurate in many real-world problems than the maximum likelihood method. Recent research has uncovered several mathematical laws in Bayesian statistics by which both the generalization loss and the marginal likelihood can be estimated even if the posterior distribution cannot be approximated by any normal distribution. Features:
- Explains Bayesian inference objectively rather than subjectively
- Provides a mathematical framework for conventional Bayesian theorems
- Introduces and proves new theorems
- Studies cross-validation and information criteria of Bayesian statistics from a mathematical point of view
- Illustrates applications to several statistical problems, for example model selection, hyperparameter optimization, and hypothesis tests
This book provides a basic introduction for students, researchers, and users of Bayesian statistics, as well as applied mathematicians. The author, Sumio Watanabe, is a professor in the Department of Mathematical and Computing Science at Tokyo Institute of Technology. He studies the relationship between algebraic geometry and mathematical statistics.
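As a hypothetical numerical illustration of the kind of estimator the book studies, Watanabe's widely applicable information criterion (WAIC) estimates the generalization loss from posterior draws alone: the training loss plus the mean functional variance. The conjugate normal model below is our own toy example, not one from the text:

```python
import numpy as np

rng = np.random.default_rng(42)
n, S = 200, 4000
x = rng.normal(0.0, 1.0, n)              # data from N(0, 1), variance known
# conjugate posterior for the mean under a N(0, 10^2) prior
post_prec = n + 1 / 100
post_mean = x.sum() / post_prec
mu = rng.normal(post_mean, np.sqrt(1 / post_prec), S)   # posterior draws

# log-likelihood of each observation under each draw, shape (S, n)
ll = -0.5 * np.log(2 * np.pi) - 0.5 * (x[None, :] - mu[:, None]) ** 2
T = -np.mean(np.log(np.mean(np.exp(ll), axis=0)))  # training loss
V = np.mean(np.var(ll, axis=0))                    # mean functional variance
waic = T + V                                       # generalization loss estimate
```

Here the posterior happens to be normal, but the WAIC formula itself makes no such assumption, which is the point of the theory.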
A Single Cohesive Framework of Tools and Procedures for Psychometrics and Assessment Bayesian Psychometric Modeling presents a unified Bayesian approach across traditionally separate families of psychometric models. It shows that Bayesian techniques, as alternatives to conventional approaches, offer distinct and profound advantages in achieving many goals of psychometrics. Adopting a Bayesian approach can aid in unifying seemingly disparate-and sometimes conflicting-ideas and activities in psychometrics. This book explains both how to perform psychometrics using Bayesian methods and why many of the activities in psychometrics align with Bayesian thinking. The first part of the book introduces foundational principles and statistical models, including conceptual issues, normal distribution models, Markov chain Monte Carlo estimation, and regression. Focusing more directly on psychometrics, the second part covers popular psychometric models, including classical test theory, factor analysis, item response theory, latent class analysis, and Bayesian networks. Throughout the book, procedures are illustrated using examples primarily from educational assessments. A supplementary website provides the datasets, WinBUGS code, R code, and Netica files used in the examples.
A New Way of Analyzing Object Data from a Nonparametric Viewpoint Nonparametric Statistics on Manifolds and Their Applications to Object Data Analysis provides one of the first thorough treatments of the theory and methodology for analyzing data on manifolds. It also presents in-depth applications to practical problems arising in a variety of fields, including statistics, medical imaging, computer vision, pattern recognition, and bioinformatics. The book begins with a survey of illustrative examples of object data before moving to a review of concepts from mathematical statistics, differential geometry, and topology. The authors next describe theory and methods for working on various manifolds, giving a historical perspective of concepts from mathematics and statistics. They then present problems from a wide variety of areas, including diffusion tensor imaging, similarity shape analysis, directional data analysis, and projective shape analysis for machine vision. The book concludes with a discussion of current related research and graduate-level teaching topics as well as considerations related to computational statistics. Researchers in diverse fields must combine statistical methodology with concepts from projective geometry, differential geometry, and topology to analyze data objects arising from non-Euclidean object spaces. An expert-driven guide to this approach, this book covers the general nonparametric theory for analyzing data on manifolds, methods for working with specific spaces, and extensive applications to practical research problems. These problems show how object data analysis opens a formidable door to the realm of big data analysis.
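A small, self-contained example of why ordinary statistics fail on non-Euclidean object spaces (our own illustration, not one from the book): for angles on the circle, the arithmetic mean can point in exactly the wrong direction, while the extrinsic mean, obtained by averaging in the embedding plane and projecting back, behaves correctly.

```python
import numpy as np

# angles clustered near the 0 / 2*pi wrap-around point
angles = np.array([0.1, 0.2, 2 * np.pi - 0.1, 2 * np.pi - 0.2])
naive_mean = angles.mean()          # ~pi: points in the opposite direction

# extrinsic mean: embed each angle as a point on the unit circle,
# average the embedded points, then project back to the circle
mean_dir = np.arctan2(np.sin(angles).mean(),
                      np.cos(angles).mean()) % (2 * np.pi)
```

The extrinsic mean lands at the wrap-around point where the data actually cluster, while the naive average lands on the far side of the circle.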
This accessible guide offers a concise introduction to the science behind worry in children, summarising research from across psychology to explore the role of worry in a range of circumstances, from everyday worries to those that can seriously impact children's lives. Wilson draws on theories from clinical, developmental and cognitive psychology to explain how children's worry is influenced by both developmental and systemic factors, examining the processes involved in pathological worry in a range of childhood anxiety disorders. Covering topics including different definitions of worry, the influence of children's development on worry, Generalised Anxiety Disorder (GAD) in children, and the role parents play in children's worry, this book offers a new model of worry in children with important implications for prevention and intervention strategies. Understanding Children's Worry is valuable reading for students in clinical, educational and developmental psychology, and professionals in child mental health.
This is the first textbook for psychologists which combines the model comparison method in statistics with a hands-on guide to computer-based analysis and clear explanations of the links between models, hypotheses and experimental designs. Statistics is often seen as a set of cookbook recipes which must be learned by heart. Model comparison, by contrast, provides a mental roadmap that not only gives a deeper level of understanding, but can be used as a general procedure to tackle those problems which can be solved using orthodox statistical methods. Statistics and Experimental Design for Psychologists focuses on the role of Occam's principle, and explains significance testing as a means by which the null and experimental hypotheses are compared using the twin criteria of parsimony and accuracy. This approach is backed up with a strong visual element, including for the first time a clear illustration of what the F-ratio actually does, and why it is so ubiquitous in statistical testing. The book covers the main statistical methods up to multifactorial and repeated-measures ANOVA and the basic experimental designs associated with them. The associated online supplementary material extends this coverage to multiple regression, exploratory factor analysis, power calculations and other more advanced topics, and provides screencasts demonstrating the use of programs in a standard statistical package, SPSS. Of particular value to third-year undergraduate as well as graduate students, this book will also have a broad appeal to anyone wanting a deeper understanding of the scientific method.
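The model-comparison reading of the F-ratio can be sketched in a few lines (a hypothetical simulation, not the book's own example): fit the null (intercept-only) and experimental (intercept plus slope) models, then weigh the reduction in residual sum of squares bought by the extra parameter against the remaining noise.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 60
x = rng.normal(size=n)
y = 0.6 * x + rng.normal(size=n)

# null model: intercept only (predict the mean)
rss0 = np.sum((y - y.mean()) ** 2)

# experimental model: intercept + slope, fit by ordinary least squares
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
rss1 = np.sum((y - X @ beta) ** 2)

# F weighs accuracy gained per extra parameter against residual noise
df_num, df_den = 1, n - 2
F = ((rss0 - rss1) / df_num) / (rss1 / df_den)
```

A large F says the less parsimonious model earns its extra parameter; an F near 1 says the gain in accuracy is no better than noise.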
Reviewing the use of natural light by architects in the era of electricity, this book aims to show that natural light not only remains a potential source of order in architecture, but that natural lighting strategies impose a usefully creative discipline on design. Considering an approach to environmental context that sees light as a critical aspect of place, this book explores current attitudes to natural light by offering a series of in-depth studies of recent projects and the particular lighting issues they have addressed. It gives a more nuanced appraisal of these lighting strategies by setting them within their broader topographic, climatic and cultural contexts.
With recent advances in computing power and the widespread availability of preference, perception and choice data, such as public opinion surveys and legislative voting, the empirical estimation of spatial models using scaling and ideal point estimation methods has never been more accessible. The second edition of Analyzing Spatial Models of Choice and Judgment demonstrates how to estimate and interpret spatial models with a variety of methods using the open-source programming language R. Requiring only basic knowledge of R, the book enables social science researchers to apply the methods to their own data. Also suitable for experienced methodologists, it presents the latest methods for modeling the distances between points. The authors explain the basic theory behind empirical spatial models, then illustrate the estimation technique behind implementing each method, exploring the advantages and limitations while providing visualizations to understand the results. This second edition updates and expands the methods and software discussed in the first edition, including new coverage of methods for ordinal data and anchoring vignettes in surveys, as well as an entire chapter dedicated to Bayesian methods. The second edition is made easier to use by the inclusion of an R package, which provides all data and functions used in the book. David A. Armstrong II is Canada Research Chair in Political Methodology and Associate Professor of Political Science at Western University. His research interests include measurement, democracy, and state repressive action. Ryan Bakker is Reader in Comparative Politics at the University of Essex. His research interests include applied Bayesian modeling, measurement, Western European politics, and EU politics. Royce Carroll is Professor in Comparative Politics at the University of Essex. His research focuses on measurement of ideology and the comparative politics of legislatures and political parties.
Christopher Hare is Assistant Professor in Political Science at the University of California, Davis. His research focuses on ideology and voting behavior in US politics, political polarization, and measurement. Keith T. Poole is Philip H. Alston Jr. Distinguished Professor of Political Science at the University of Georgia. His research interests include methodology, US political-economic history, economic growth and entrepreneurship. Howard Rosenthal is Professor of Politics at NYU and Roger Williams Straus Professor of Social Sciences, Emeritus, at Princeton. Rosenthal's research focuses on political economy, American politics and methodology.
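As a minimal, hypothetical illustration of the scaling idea behind these methods (recovering point locations from the distances between them), classical metric multidimensional scaling does this exactly for Euclidean distances. The example below is our own sketch, not one of the book's estimators:

```python
import numpy as np

# pairwise squared distances among 4 points that truly lie on a line
pts = np.array([0.0, 1.0, 3.0, 6.0])
D2 = (pts[:, None] - pts[None, :]) ** 2

n = len(pts)
J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
B = -0.5 * J @ D2 @ J                     # double-centered Gram matrix
vals, vecs = np.linalg.eigh(B)            # eigenvalues in ascending order
coords = vecs[:, -1] * np.sqrt(vals[-1])  # 1-D configuration

# the recovered configuration reproduces the original distances
dist = np.abs(coords[:, None] - coords[None, :])
```

The recovered coordinates are unique only up to shift and reflection, which is exactly the identification issue ideal-point methods must also confront.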
R for Political Data Science: A Practical Guide is a handbook for political scientists new to R who want to learn the most useful and common ways to interpret and analyze political data. It was written by political scientists with the many real-world problems faced in their work in mind. The book has 16 chapters and is organized in three sections. The first, on the use of R, is for users who are learning R or migrating from another software package. The second section, on econometric models, covers OLS, binary and survival models, panel data, and causal inference. The third section is a data science toolbox of some of the most useful tools in the discipline: data imputation, fuzzy merging of large datasets, web mining, quantitative text analysis, network analysis, mapping, spatial cluster analysis, and principal component analysis. Key features:
- Each chapter presents the most up-to-date and simplest option available for each task, assuming minimal prerequisites and no previous experience in R
- Makes extensive use of the Tidyverse, the group of packages that has revolutionized the use of R
- Provides a step-by-step guide that you can replicate using your own data
- Includes exercises in every chapter for course use or self-study
- Focuses on practical approaches to statistical inference rather than mathematical formulae
- Supplemented by an R package that includes all data
As the title suggests, this book is highly applied in nature and is designed as a toolbox for the reader. It can be used in methods and data science courses at both the undergraduate and graduate levels. It will be equally useful for a university student pursuing a PhD, political consultants, or a public official, all of whom need to transform their datasets into substantive and easily interpretable conclusions.
Scientometrics for the Humanities and Social Sciences is the first book on scientometrics to deal with the historical development of both quantitative and qualitative data analysis in scientometric studies, focusing on the method's applicability in new and emerging areas of inquiry. This important book presents the inherent potential for data mining and analysis of qualitative data in scientometrics. The author provides selected cases of scientometric studies in the humanities and social sciences, explaining their research objectives, sources of data and methodologies. It illustrates how data can be gathered not only from prominent online databases and repositories, but also from journals that are not stored in these databases. With the support of specific examples, the book shows how data on demographic variables can be collected to supplement scientometric data. The book deals with a research methodology that has increasing applicability not only to the study of science, but also to the study of the disciplines in the humanities and social sciences.
Recognised as the most influential publication in the field, ARM facilitates deep understanding of the Rasch model and its practical applications. The authors review the crucial properties of the model and demonstrate its use with examples across the human sciences. Readers will be able to understand and critically evaluate Rasch measurement research, perform their own Rasch analyses and interpret their results. The glossary and illustrations support that understanding, and the accessible approach means that it is ideal for readers without a mathematical background. Highlights of the new edition include:
Intended as a text for graduate courses in measurement, item response theory, (advanced) research methods or quantitative analysis taught in psychology, education, human development, business, and other social and health sciences. Professionals in these areas will also appreciate the book’s accessible introduction.
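At the heart of the model the book teaches is a single equation: the probability of a correct response depends only on the difference between person ability and item difficulty. A short sketch (our own, with made-up values, not an example from the text):

```python
import numpy as np

def rasch_prob(theta, b):
    """Rasch model: probability of a correct response for a person of
    ability theta on an item of difficulty b (both on the logit scale)."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# ability equal to difficulty gives exactly a 50% chance of success
p_equal = rasch_prob(0.0, 0.0)

# ability two logits above difficulty raises the probability sharply
p_high = rasch_prob(2.0, 0.0)
```

Because only the difference theta - b matters, persons and items are measured on one common interval scale, which is the property Rasch analysts prize.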
This volume explores the abiding intellectual inertia in scientific psychology in relation to the discipline's engagement with problematic beliefs and assumptions underlying mainstream research practices, despite repeated critical analyses which reveal the weaknesses, and in some cases complete inappropriateness, of these methods. Such paradigmatic inertia is especially troublesome for a scholarly discipline claiming status as a science. The book offers penetrating analyses of many (albeit not all) of the most important areas where mainstream practices require either compelling justifications for their continuation or adjustments - possibly including abandonment - toward more apposite alternatives. Specific areas of concern addressed in this book include the systemic misinterpretation of statistical knowledge; the prevalence of a conception of measurement at odds with yet purporting to mimic the natural sciences; the continuing widespread reliance on null hypothesis testing; and the continuing resistance within psychology to the explicit incorporation of qualitative methods into its methodological toolbox. Broader level chapters examine mainstream psychology's systemic disregard for critical analysis of its tenets, and the epistemic and ethical problems this has created. This is a vital and engaging resource for researchers across psychology, and those in the wider behavioural and social sciences who have an interest in, or who use, psychological research methods.
This book critically examines the work of a number of pioneers of social psychology, including legendary figures such as Kurt Lewin, Leon Festinger, Muzafer Sherif, Solomon Asch, Stanley Milgram, and Philip Zimbardo. Augustine Brannigan argues that these psychologists' reliance on experimentation has led to questions about the validity and replicability of their studies. The author explores new research and archival work relating to these studies and outlines a new approach to experimentation that repudiates the use of deception in human experiments, providing clues to how social psychology can re-articulate its premises and future lines of research. Building on the author's 2004 work The Rise and Fall of Social Psychology, which critiques the experimental methods used, the book advocates a return to qualitative methods to redeem the essential social dimensions of social psychology. Covering famous studies such as the Stanford Prison Experiment, Milgram's studies of obedience, Sherif's Robbers Cave, and Rosenhan's exposé of psychiatric institutions, this is essential and fascinating reading for students of social psychology and the social sciences. It is also of interest to academics and researchers who wish to engage with a critical approach to classical social psychology, with a view to changing the future of this important discipline.
Age, Period and Cohort Effects: Statistical Analysis and the Identification Problem gives a number of perspectives from top methodologists and applied researchers on the best ways to attempt to answer Age-Period-Cohort related questions about society. Age-Period-Cohort (APC) analysis is a fundamental topic for any quantitative social scientist studying individuals over time. At the same time, it is also one of the most misunderstood and underestimated topics in quantitative methods. As such, this book is key reference material for researchers wanting to know how to deal with APC issues appropriately in their statistical modelling. It deals with the identification problem caused by the co-linearity of the three variables, considers why some currently used methods are problematic and suggests ideas for what applied researchers interested in APC analysis should do. Whilst the perspectives are varied, the book provides a unified view of the subject in a reader-friendly way that will be accessible to social scientists with a moderate level of quantitative understanding, across the social and health sciences.
"Yoder and Symons bring decades of work to bear and it shows.... [The book is] presented with broad scholarship and conceptual depth." -Roger Bakeman, PhD "This outstanding volume transcends the typical treatment of behavior observation methods in introductory research texts. Yoder and Symons articulate a set of measurement principles that serve as the foundation for behavior observation as a scientific tool." -William E. MacLean Jr., PhD This comprehensive textbook introduces graduate students to the competent conduct of observational research methods and measurement. The unique approach of this book is that the chapters delineate not only the techniques and mechanics of observational methods but also the theoretical and conceptual underpinnings of these methods. The observational methods presented can be used from both single-subject and group-design perspectives, showing students how and when to use both methodologies. In addition, the authors provide many practical exercises within chapters, as well as electronic media files of a sample observation session to code with multiple behavior sampling methods. Key topics:
1. The author is at the forefront of the application of IRT (classic and innovative methodologies). 2. Covers all of IRT in broad brushstrokes in an accessible manner. 3. Includes an abundance of original and secondary sources to facilitate learning, including further reading, simulated datasets, and graphics.