This comprehensive Handbook focuses on the most used polytomous item response theory (IRT) models. These models help us understand the interaction between examinees and test questions where the questions have various response categories. The book reviews all of the major models and includes discussions about how and where the models originated, conceptually and in practical terms. Diverse perspectives on how these models can best be evaluated are also provided. Practical applications provide a realistic account of the issues practitioners face using these models. Disparate elements of the book are linked through editorial sidebars that connect common ideas across chapters, compare and reconcile differences in terminology, and explain variations in mathematical notation. These sidebars help to demonstrate the commonalities that exist across the field. By assembling this critical information, the editors hope to inspire others to use polytomous IRT models in their own research so they too can achieve the type of improved measurement that such models can provide. Part 1 examines the most commonly used polytomous IRT models, major issues that cut across these models, and a common notation for calculating functions for each model. An introduction to IRT software is also provided. Part 2 features distinct approaches to evaluating the effectiveness of polytomous IRT models in various measurement contexts. These chapters appraise evaluation procedures and fit tests and demonstrate how to implement these procedures using IRT software. The final section features groundbreaking applications. Here the goal is to provide solutions to technical problems to allow for the most effective use of these models in measuring educational, psychological, and social science abilities and traits. This section also addresses the major issues encountered when using polytomous IRT models in computerized adaptive testing. Equating test scores across different testing contexts is the focus of the last chapter. The various contexts include personality research, motor performance, health and quality of life indicators, attitudes, and educational achievement. Featuring contributions from the leading authorities, this handbook will appeal to measurement researchers, practitioners, and students who want to apply polytomous IRT models to their own research. It will be of particular interest to education and psychology assessment specialists who develop and use tests and measures in their work, especially researchers in clinical, educational, personality, social, and health psychology. This book also serves as a supplementary text in graduate courses on educational measurement, psychometrics, or item response theory.
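For readers who want a concrete feel for the kind of models the handbook surveys, the following R sketch (not taken from the book) fits a graded response model, one common polytomous IRT model, using the mirt package; the simulated data, item names, and category cut points are invented purely for illustration.

```r
## Minimal sketch, assuming the 'mirt' package is installed; data are simulated.
library(mirt)

set.seed(1)
n <- 500
theta <- rnorm(n)                                # latent trait for each examinee
items <- sapply(1:5, function(j) {
  signal <- theta + rnorm(n)                     # item-specific noisy signal
  cut(signal, breaks = c(-Inf, -1, 0, 1, Inf), labels = FALSE)  # 4 ordered categories
})
colnames(items) <- paste0("item", 1:5)           # hypothetical item names

fit <- mirt(data.frame(items), model = 1, itemtype = "graded")  # graded response model
coef(fit, simplify = TRUE)                       # slopes and category intercepts
```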
Although most people believe that there is little we can do to improve the intelligence we were born with, the brain can be exercised just like any other part of the body. Thought processes and intelligence scoring can be improved by practising different types of testing. This title from IQ expert Philip Carter is a companion volume to the bestselling IQ and Psychometric Tests, and it includes not only hundreds of practice questions, but also answers and explanations. The broader format allows space for writing answers and making notes, and readers are provided with feedback so that they can assess their own strengths and weaknesses. Topics covered include: verbal aptitude tests, numerical aptitude tests, visual aptitude tests, problem solving tests, personality questionnaires, and advice on adopting the right approach to psychometric testing. The IQ and Psychometric Test Workbook provides an ideal opportunity for anyone to improve their IQ rating, or individual performance at psychometric tests, through continual practice and self-assessment.
Intended to help improve measurement and data collection methods in the behavioral, social, and medical sciences, this book demonstrates an expanded and accessible use of Generalizability Theory (G theory). G theory conceptually models the way in which the reliability of measurement is ascertained. Sources of score variation are identified as potential contributors to measurement error and taken into account accordingly. The authors demonstrate the powerful potential of G theory by showing how to improve the quality of any kind of measurement, regardless of the discipline. Brief overviews of analysis of variance, estimation, and the statistical error model are provided for review. The procedures involved in carrying out a generalizability study using EduG follow, as well as guidance in the interpretation of results. Real-world applications of G theory to the assessment of depression, managerial ability, attitudes, and writing and mathematical skills are then presented. Next, annotated exercises provide an opportunity for readers to use EduG and interpret its results. The book concludes with a review of the development of G theory and possible new directions of application. Finally, for those with a strong statistical background, the appendixes provide the formulas used by EduG.
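As a rough illustration of the variance-component reasoning G theory builds on (this sketch uses the lme4 R package rather than the book's EduG software, and the crossed person-by-rater data are simulated), one might estimate the score variance attributable to persons, raters, and residual error and form a simple generalizability coefficient:

```r
## Minimal sketch, not from the book: a simple person x rater G-study via lme4.
library(lme4)

set.seed(2)
d <- expand.grid(person = factor(1:50), rater = factor(1:4))
person_eff <- rnorm(50, sd = 1)                  # true person differences
rater_eff  <- rnorm(4,  sd = 0.5)                # rater severity differences
d$score <- 5 + person_eff[d$person] + rater_eff[d$rater] + rnorm(nrow(d), sd = 1)

m  <- lmer(score ~ 1 + (1 | person) + (1 | rater), data = d)
vc <- as.data.frame(VarCorr(m))                  # person, rater, and residual variances
vc

## Generalizability coefficient for a mean over 4 raters (relative decisions)
sig2_p <- vc$vcov[vc$grp == "person"]
sig2_e <- vc$vcov[vc$grp == "Residual"]
sig2_p / (sig2_p + sig2_e / 4)
```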
This fully updated new edition not only provides an introduction to a range of advanced statistical techniques that are used in psychology, but has been expanded to include new chapters describing methods and examples of particular interest to medical researchers. It takes a very practical approach, aimed at enabling readers to begin using the methods to tackle their own problems. This book provides a non-mathematical introduction to multivariate methods, with an emphasis on helping the reader gain an intuitive understanding of what each method is for, what it does and how it does it. The first chapter briefly reviews the main concepts of univariate and bivariate methods and provides an overview of the multivariate methods that will be discussed, bringing out the relationships among them, and summarising how to recognise what types of problem each of them may be appropriate for tackling. In the remaining chapters, introductions to the methods and important conceptual points are followed by the presentation of typical applications from psychology and medicine, using examples with fabricated data. Instructions on how to do the analyses and how to make sense of the results are fully illustrated with dialogue boxes and output tables from SPSS, as well as details of how to interpret and report the output, and extracts of SPSS syntax and code from relevant SAS procedures. This book gets students started, and prepares them to approach more comprehensive treatments with confidence. This makes it an ideal text for psychology students, medical students and students or academics in any discipline that uses multivariate methods.
Feminist research is informed by a history of breaking silences, of demanding that women's voices be heard, recorded and included in wider intellectual genealogies and histories. This has led to an emphasis on voice and speaking out in the research endeavour. Moments of secrecy and silence are less often addressed. This gives rise to a number of questions. What are the silences, secrets and omissions, and what are the political consequences of such moments? What particular dilemmas and constraints do they represent or entail? What are their implications for research praxis? Are such moments always indicative of voicelessness or powerlessness? Or may they also constitute a productive moment in the research encounter? Contributors to this volume were invited to reflect on these questions. The resulting chapters are a fascinating collection of insights into the research process, making an important contribution to theoretical and empirical debates about epistemology, subjectivity and identity in research. Researchers often face difficult dilemmas about who to represent and how, what to omit and what to include. This book explores such questions in an important and timely collection of essays from international scholars.
Age-Period-Cohort (APC) analysis has a wide range of applications, from chronic disease incidence and mortality data in public health and epidemiology, to many social events (birth, death, marriage, etc.) in social sciences and demography, and most recently investment, healthcare and pension contributions in economics and finance. Although APC analysis has been studied for the past 40 years and many methods have been developed, the identification problem has been a major hurdle in analyzing APC data: the regression model has multiple estimators, leading to indeterminate parameters and temporal trends. A Practical Guide to Age-Period-Cohort Analysis: The Identification Problem and Beyond provides practitioners with a guide to using APC models, and offers graduate students and researchers an overview of current methods for APC analysis, clarifying the confusion surrounding the identification problem by explaining why some methods address it well while others do not. Features:
* Gives a comprehensive and in-depth review of models and methods in APC analysis.
* Provides an in-depth explanation of the identification problem and the statistical approaches to addressing it.
* Utilizes real data sets to illustrate data issues that have not been addressed in the literature, including unequal intervals in age and period groups.
* Contains step-by-step modeling instructions and R programs to demonstrate how to conduct APC analysis and prediction for the future.
* Reflects the most recent developments in APC modeling and analysis, including the intrinsic estimator.
Wenjiang Fu is a professor of statistics at the University of Houston. Professor Fu's research interests include modeling big data, applied statistics research in health and human genome studies, and analysis of complex economic and social science data.
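To see the identification problem the book addresses, a minimal R sketch (not from the book's own programs) shows that because cohort equals period minus age exactly, an ordinary regression cannot estimate all three effects; the variable names and simulated rates below are purely illustrative:

```r
## Minimal sketch, not from the book: the APC identification problem in one regression.
set.seed(3)
age    <- rep(20:60, times = 10)                 # ages 20-60
period <- rep(2001:2010, each = 41)              # ten calendar years
cohort <- period - age                           # exact linear dependency
rate   <- 0.02 * age - 0.01 * (period - 2000) + rnorm(length(age), sd = 0.5)

fit <- lm(rate ~ age + period + cohort)
coef(fit)                                        # the cohort coefficient is NA: not identifiable
```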
Requiring no prior training, Modern Statistics for the Social and Behavioral Sciences provides a two-semester, graduate-level introduction to basic statistical techniques that takes into account recent advances and insights that are typically ignored in an introductory course. Hundreds of journal articles make it clear that basic techniques, routinely taught and used, can perform poorly when dealing with skewed distributions, outliers, heteroscedasticity (unequal variances) and curvature. Methods for dealing with these concerns have been derived and can provide a deeper, more accurate and more nuanced understanding of data. A conceptual basis is provided for understanding when and why standard methods can have poor power and yield misleading measures of effect size. Modern techniques for dealing with known concerns are described and illustrated. Features:
* Presents an in-depth description of both classic and modern methods.
* Explains and illustrates why recent advances can provide more power and a deeper understanding of data.
* Provides numerous illustrations using the software R.
* Includes an R package with over 1300 functions.
* Includes a solution manual giving detailed answers to all of the exercises.
This second edition describes many recent advances relevant to basic techniques. For example, a vast array of new and improved methods is now available for dealing with regression, including substantially improved ANCOVA techniques. The coverage of multiple comparison procedures has been expanded and new ANOVA techniques are described. Rand Wilcox is a professor of psychology at the University of Southern California. He is the author of 13 other statistics books and the creator of the R package WRS. He currently serves as an associate editor for five statistics journals. He is a fellow of the Association for Psychological Science and an elected member of the International Statistical Institute.
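A small base-R sketch (not drawn from the book or its WRS package) illustrates the kind of concern the text addresses: a handful of outliers distorts the ordinary mean, while a 20% trimmed mean and the median remain close to the bulk of the data.

```r
## Minimal sketch, not from the book: why robust location estimators matter.
set.seed(4)
x <- c(rnorm(95), rnorm(5, mean = 15))           # 95 typical values plus 5 gross outliers

mean(x)                                          # ordinary mean, pulled toward the outliers
mean(x, trim = 0.2)                              # 20% trimmed mean, close to zero
median(x)                                        # another robust location estimate
```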
"This is a great overview of the field of model-based clustering and classification by one of its leading developers. McNicholas provides a resource that I am certain will be used by researchers in statistics and related disciplines for quite some time. The discussion of mixtures with heavy tails and asymmetric distributions will place this text as the authoritative, modern reference in the mixture modeling literature." (Douglas Steinley, University of Missouri) Mixture Model-Based Classification is the first monograph devoted to mixture model-based approaches to clustering and classification. This is both a book for established researchers and newcomers to the field. A history of mixture models as a tool for classification is provided and Gaussian mixtures are considered extensively, including mixtures of factor analyzers and other approaches for high-dimensional data. Non-Gaussian mixtures are considered, from mixtures with components that parameterize skewness and/or concentration, right up to mixtures of multiple scaled distributions. Several other important topics are considered, including mixture approaches for clustering and classification of longitudinal data as well as discussion about how to define a cluster Paul D. McNicholas is the Canada Research Chair in Computational Statistics at McMaster University, where he is a Professor in the Department of Mathematics and Statistics. His research focuses on the use of mixture model-based approaches for classification, with particular attention to clustering applications, and he has published extensively within the field. He is an associate editor for several journals and has served as a guest editor for a number of special issues on mixture models.
Winner of the 2015 Sugiyama Meiko Award (Publication Award) of the Behaviormetric Society of Japan Developed by the authors, generalized structured component analysis is an alternative to two longstanding approaches to structural equation modeling: covariance structure analysis and partial least squares path modeling. Generalized structured component analysis allows researchers to evaluate the adequacy of a model as a whole, compare a model to alternative specifications, and conduct complex analyses in a straightforward manner. Generalized Structured Component Analysis: A Component-Based Approach to Structural Equation Modeling provides a detailed account of this novel statistical methodology and its various extensions. The authors present the theoretical underpinnings of generalized structured component analysis and demonstrate how it can be applied to various empirical examples. The book enables quantitative methodologists, applied researchers, and practitioners to grasp the basic concepts behind this new approach and apply it to their own research. The book emphasizes conceptual discussions throughout while relegating more technical intricacies to the chapter appendices. Most chapters compare generalized structured component analysis to partial least squares path modeling to show how the two component-based approaches differ when addressing an identical issue. The authors also offer a free, online software program (GeSCA) and an Excel-based software program (XLSTAT) for implementing the basic features of generalized structured component analysis.
Quantitative Data Analysis for Language Assessment Volume I: Fundamental Techniques is a resource book that presents the most fundamental techniques of quantitative data analysis in the field of language assessment. Each chapter provides an accessible explanation of the selected technique, a review of language assessment studies that have used the technique, and finally, an example of an authentic study that uses the technique. Readers also get a taste of how to apply each technique through the help of supplementary online resources that include sample data sets and guided instructions. Language assessment students, test designers, and researchers should find this a unique reference as it consolidates theory and application of quantitative data analysis in language assessment.
Statistical power analysis has revolutionized the ways in which we conduct and evaluate research. Similar developments in the statistical analysis of incomplete (missing) data are gaining more widespread applications. This volume brings statistical power and incomplete data together under a common framework, in a way that is readily accessible to those with only an introductory familiarity with structural equation modeling. It answers many practical questions such as:
Points of Reflection encourage readers to stop and test their understanding of the material. Try Me sections test one's ability to apply the material. Troubleshooting Tips help to prevent commonly encountered problems. Exercises reinforce content and Additional Readings provide sources for delving more deeply into selected topics. Numerous examples demonstrate the book's application to a variety of disciplines. Each issue is accompanied by its potential strengths and shortcomings and examples using a variety of software packages (SAS, SPSS, Stata, LISREL, AMOS, and MPlus). Syntax is provided using a single software program to promote continuity, but in each case parallel syntax using the other packages is presented in appendixes. Routines, data sets, syntax files, and links to student versions of software packages are found at www.psypress.com/davey. The worked examples in Part 2 also provide results from a wider set of estimated models. These tables, and accompanying syntax, can be used to estimate statistical power or required sample size for similar problems under a wide range of conditions. Class-tested at Temple, Virginia Tech, and Miami University of Ohio, this brief text is an ideal supplement for graduate courses in applied statistics, statistics II, intermediate or advanced statistics, experimental design, structural equation modeling, power analysis, and research methods taught in departments of psychology, human development, education, sociology, nursing, social work, gerontology and other social and health sciences. The book's applied approach will also appeal to researchers in these areas. Sections covering Fundamentals, Applications, and Extensions are designed to take readers from first steps to mastery.
Multiple Imputation in Practice: With Examples Using IVEware provides practical guidance on multiple imputation analysis, from simple to complex problems, using real and simulated data sets. Data sets from cross-sectional, retrospective, prospective and longitudinal studies, randomized clinical trials, and complex sample surveys are used to illustrate both simple and complex analyses. Version 0.3 of IVEware, the software developed by the University of Michigan, is used to illustrate analyses. IVEware can multiply impute missing values, analyze multiply imputed data sets, incorporate complex sample design features, and be used for other statistical analyses framed as missing data problems. IVEware can be used under Windows, Linux, and Mac, and with software packages like SAS, SPSS, Stata, and R, or as a stand-alone tool. This book will be helpful to researchers looking for guidance on the use of multiple imputation to address missing data problems, along with examples of correct analysis techniques.
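Since the book's examples use IVEware, the sketch below instead uses the widely available mice R package purely to illustrate the impute-analyze-pool workflow described above; the data set and model are chosen only for demonstration:

```r
## Minimal sketch, not from the book (uses 'mice' rather than IVEware).
library(mice)

imp <- mice(nhanes, m = 5, printFlag = FALSE)    # five multiply imputed data sets
fit <- with(imp, lm(bmi ~ age + chl))            # repeat the analysis in each completed set
pool(fit)                                        # combine estimates with Rubin's rules
```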
Estimate and Interpret Results from Ordered Regression Models Ordered Regression Models: Parallel, Partial, and Non-Parallel Alternatives presents regression models for ordinal outcomes, which are variables that have ordered categories but unknown spacing between the categories. The book provides comprehensive coverage of the three major classes of ordered regression models (cumulative, stage, and adjacent) as well as variations based on the application of the parallel regression assumption. The authors first introduce the three "parallel" ordered regression models before covering unconstrained partial, constrained partial, and nonparallel models. They then review existing tests for the parallel regression assumption, propose new variations of several tests, and discuss important practical concerns related to tests of the parallel regression assumption. The book also describes extensions of ordered regression models, including heterogeneous choice models, multilevel ordered models, and the Bayesian approach to ordered regression models. Some chapters include brief examples using Stata and R. This book offers a conceptual framework for understanding ordered regression models based on the probability of interest and the application of the parallel regression assumption. It demonstrates the usefulness of numerous modeling alternatives, showing you how to select the most appropriate model given the type of ordinal outcome and restrictiveness of the parallel assumption for each variable. Web Resource: More detailed examples are available on a supplementary website. The site also contains JAGS, R, and Stata codes to estimate the models along with syntax to reproduce the results.
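As the book notes, some chapters include brief R examples; the sketch below (not taken from the book) fits the simplest of the models it covers, a parallel (proportional-odds) cumulative model, using MASS::polr on R's built-in housing data:

```r
## Minimal sketch, not from the book: a parallel cumulative (proportional-odds) model.
library(MASS)

fit <- polr(Sat ~ Infl + Type + Cont, weights = Freq, data = housing)
summary(fit)                                     # one set of slopes shared across all thresholds
```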
Missing Data Analysis in Practice provides practical methods for analyzing missing data along with the heuristic reasoning for understanding the theoretical underpinnings. Drawing on his 25 years of experience researching, teaching, and consulting in quantitative areas, the author presents both frequentist and Bayesian perspectives. He describes easy-to-implement approaches, the underlying assumptions, and practical means for assessing these assumptions. Actual and simulated data sets illustrate important concepts, with the data sets and codes available online. The book underscores the development of missing data methods and their adaptation to practical problems. It mainly focuses on the traditional missing data problem. The author also shows how to use the missing data framework in many other statistical problems, such as measurement error, finite population inference, disclosure limitation, combining information from multiple data sources, and causal inference.
The development of communication as a discipline has resulted in an explosion of scales tapping various aspects of interpersonal, mass, organizational, and instructional communication. This sourcebook brings together scales that measure a variety of important communication constructs. The scales presented are drawn from areas of interpersonal, mass, organizational, and instructional communication--areas in which the use of formal, quantitative scales is particularly well developed. Communication Research Measures reflects the recent important emphasis on developing and improving the measurement base of the communication discipline. It results in an equal amount of labor saved on the part of the scholars, students, and practitioners who find this book useful, and it contributes in a significant way to research efforts.
In recent years, a psychological perspective has gained increasing acceptance in the education provided to musicians: teachers, performers, and "creatives" alike. Research in music psychology has revealed how musicians acquire the ability to convey emotional intentions as sounded music, how listeners perceive it as feelings and moods, and how this powerful process relates to social and cultural dynamics. Of course, people who identify as musicians have special interest in these matters. A well-cited volume ever since its initial publication in 2007, Psychology for Musicians is now brought up-to-date in a second edition, particularly in expanding outside the exclusive context of Western formal/academic settings. This new edition draws on insights from recent research in music psychology, combining academic rigor with accessibility to offer readers research-supported ideas that they can readily apply in their musical activities.
This volume contains contributions from 24 internationally known scholars covering a broad spectrum of interests in cross-cultural theory and research. This breadth is reflected in the diversity of the topics covered in the volume, which include theoretical approaches to cross-cultural research, the dimensions of national cultures and their measurement, ecological and economic foundations of culture, cognitive, perceptual and emotional manifestations of culture, and bicultural and intercultural processes. In addition to the individual chapters, the volume contains a dialog among 14 experts in the field on a number of issues of concern in cross-cultural research, including the relation of psychological studies of culture to national development and national policies, the relationship between macro structures of a society and shared cognitions, the integration of structural and process models into a coherent theory of culture, how personal experiences and cultural traditions give rise to intra-cultural variation, whether culture can be validly measured by self-reports, the new challenges that confront cultural psychology, and whether psychology should strive to eliminate culture as an explanatory variable.
Brief and inexpensive, this engaging book helps readers identify and then discard 52 misconceptions about data and statistical summaries. The focus is on major concepts contained in typical undergraduate and graduate courses in statistics, research methods, or quantitative analysis. Fun interactive Internet exercises that further promote undoing the misconceptions are found on the book's website. The author's accessible discussion of each misconception has five parts:
The book's statistical misconceptions are grouped into 12 chapters that match the topics typically taught in introductory/intermediate courses. However, each of the 52 discussions is self-contained, thus allowing the misconceptions to be covered in any order without confusing the reader. Organized and presented in this manner, the book is an ideal supplement for any standard textbook. Statistical Misconceptions is appropriate for courses taught in a variety of disciplines including psychology, medicine, education, nursing, business, and the social sciences. The book also will benefit independent researchers interested in undoing their statistical misconceptions.
Individual Differences and Personality provides a student-friendly introduction to both classic and cutting-edge research into personality, mood, motivation and intelligence, and their applications in psychology and in fields such as health, education and sporting achievement. Including a new chapter on 'toxic' personality traits, and an additional chapter on applications in real-life settings, this fourth edition has been thoroughly updated and uniquely covers the necessary psychometric methodology needed to understand modern theories. It also develops deep processing and effective learning by encouraging a critical evaluation of both older and modern theories and methodologies, including the Dark Triad, emotional intelligence and psychopathy. Gardner's and hierarchical theories of intelligence, and modern theories of mood and motivation are discussed and evaluated, and the processes which cause people to differ in personality and intelligence are explored in detail. Six chapters provide a non-mathematical grounding in psychometric principles, such as factor analysis, reliability, validity, bias, test-construction and test-use. With self-assessment questions, further reading and a companion website including student and instructor resources, this is the ideal resource for anyone taking modules on personality and individual differences.
This book provides an up-to-date review of commonly undertaken methodological and statistical practices that are sustained, in part, upon sound rationale and justification and, in part, upon unfounded lore. Some examples of these "methodological urban legends," as we refer to them in this book, are characterized by manuscript critiques such as: (a) "your self-report measures suffer from common method bias"; (b) "your item-to-subject ratios are too low"; (c) "you can't generalize these findings to the real world"; or (d) "your effect sizes are too low." Historically, there is a kernel of truth to most of these legends, but in many cases that truth has been long forgotten, ignored or embellished beyond recognition. This book examines several such legends. Each chapter is organized to address: (a) what the legend is that "we (almost) all know to be true"; (b) what the "kernel of truth" is to each legend; (c) what the myths are that have developed around this kernel of truth; and (d) what the state of the practice should be. This book meets an important need for the accumulation and integration of these methodological and statistical practices.
Drawing on the latest research into memory, information processing and learning, this book helps students to tailor their study techniques to their own particular learning style and psychological make-up.
* An exploration of the tools and techniques essential to success in studying and passing examinations.
* Suitable for classroom, distance learning, online, or blended learning environments.
* Includes questionnaires, activities, key learning points, illustrations, diagrams, flow charts, and mindmaps.
Offering an historical perspective on the development of mental health consultation and community mental health, this book's intent is twofold. First, it describes and evaluates Harvard psychiatrist Gerald Caplan's innovative approach to consultation and related activities with respect to the current and future practice of clinical, community, school and organizational psychology. Second, it pays tribute to Caplan, whose ideas on prevention, crisis theory, support systems, community mental health, mental health consultation and collaboration, and population-orientated psychiatry have influenced the practice of professional psychology and allied fields. The text is divided into three sections: the first provides background information for the remainder of the volume; the second documents Caplan's influence on the way psychology has been applied in various settings; and the last considers the past and present influence of his contributions. The text is aimed at consultant and practising psychologists, community and school psychology graduates and professionals involved with community mental health services.
This book reviews methods of conceptualizing, measuring, and analyzing interdependent data in developmental and behavioral sciences. Quantitative and developmental experts describe best practices for modeling interdependent data that stem from interactions within families, relationships, and peer groups, for example. Complex models for analyzing longitudinal data, such as growth curves and time series, are also presented. Many contributors are innovators of the techniques and all are able to clearly explain the methodologies and their practical problems including issues of measurement, missing data, power and sample size, and the specific limitations of each method. Featuring a balance between analytic strategies and applications, the book addresses:
This book is intended for graduate students and researchers across the developmental, social, behavioral, and educational sciences. It is an excellent research guide and a valuable resource for advanced methods courses.
You may like...
Game Theory and International Relations… by Pierre Allan, Christian Schmidt (Hardcover, R3,502)
Stochastic Differential Equations and… by Mounir Zili, Darya V. Filatova (Hardcover, R3,047)
Numerical Analysis of Heat and Mass… by J.M.P.Q. Delgado, Antonio Gilson Barbosa Lima, … (Hardcover, R4,614)
Deformation and Fracture of Solid-State… by Sanichiro Yoshida (Hardcover)
Hiking Beyond Cape Town - 40 Inspiring… by Nina du Plessis, Willie Olivier (Paperback)