This book reviews the latest techniques in exploratory data mining (EDM) for the analysis of data in the social and behavioral sciences to help researchers assess the predictive value of different combinations of variables in large data sets. Methodological findings and conceptual models that explain reliable EDM techniques for predicting and understanding various risk mechanisms are integrated throughout. Numerous examples illustrate the use of these techniques in practice. Contributors provide insight through hands-on experiences with their own use of EDM techniques in various settings. Readers are also introduced to the most popular EDM software programs. A related website at http://mephisto.unige.ch/pub/edm-book-supplement/ offers color versions of the book's figures, a supplemental paper to chapter 3, and R commands for some chapters. The results of EDM analyses can be perilous - they are often taken as predictions with little regard for cross-validating the results. This carelessness can be catastrophic in terms of money lost or patients misdiagnosed. This book addresses these concerns and advocates for the development of checks and balances for EDM analyses. Both the promises and the perils of EDM are addressed. Editors McArdle and Ritschard taught the "Exploratory Data Mining" Advanced Training Institute of the American Psychological Association (APA). All contributors are top researchers from the US and Europe. Organized into two parts--methodology and applications--the book covers techniques including decision, regression, and SEM tree models; growth mixture modeling; and time-based categorical sequential analysis.
Some of the applications of EDM (and the corresponding data) explored include: selection to college based on risky prior academic profiles; the decline of cognitive abilities in older persons; global perceptions of stress in adulthood; predicting mortality from demographics and cognitive abilities; and risk factors during pregnancy and their impact on neonatal development. The book is intended as a reference for researchers, methodologists, and advanced students in the social and behavioral sciences--including psychology, sociology, business, econometrics, and medicine--who are interested in learning to apply the latest exploratory data mining techniques. Prerequisites include a basic class in statistics.
This edited volume features cutting-edge topics from the leading researchers in the areas of latent variable modeling. Content highlights include coverage of approaches dealing with missing values, semi-parametric estimation, robust analysis, hierarchical data, factor scores, multi-group analysis, and model testing. New methodological topics are illustrated with real applications. The material presented brings together two traditions: psychometrics and structural equation modeling. Latent Variable and Latent Structure Models' thought-provoking chapters from the leading researchers in the area will help to stimulate ideas for further research for many years to come. This volume will be of interest to researchers and practitioners from a wide variety of disciplines, including biology, business, economics, education, medicine, psychology, sociology, and other social and behavioral sciences. A working knowledge of basic multivariate statistics and measurement theory is assumed.
- Includes helpful forms and templates that readers can access online
- Includes coverage of psychological testing in new areas of outpatient practice
- Provides detailed case examples
Single Case Research in Schools examines a variety of cutting-edge issues in single case research (SCR) in educational settings. Featuring simple and practical techniques for aggregating data for evidence-based practices, the book delves into methods of selecting behaviors of interest and measuring them reliably. The latter part of Single Case Research in Schools is devoted to a step-by-step model of using SCR to evaluate practices in schools. This includes considerations such as measurement, data collection, length of phases, design considerations, calculating effect sizes, and the reliability of measures.
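The blurb above mentions calculating effect sizes for single-case designs. One widely used SCR effect size is the percentage of non-overlapping data (PND): the share of intervention-phase points that exceed the highest baseline point. The sketch below, with invented session data, is illustrative only and is not the book's own procedure.

```python
# Illustrative sketch of PND (percentage of non-overlapping data),
# a common single-case research effect size. Data are made up.

def pnd(baseline, intervention, higher_is_better=True):
    """Percentage of intervention points non-overlapping with baseline."""
    if higher_is_better:
        cutoff = max(baseline)
        non_overlap = sum(1 for x in intervention if x > cutoff)
    else:
        cutoff = min(baseline)
        non_overlap = sum(1 for x in intervention if x < cutoff)
    return 100.0 * non_overlap / len(intervention)

baseline = [2, 3, 3, 4]          # e.g., on-task behaviors per session
intervention = [5, 6, 4, 7, 8]   # after the practice is introduced

# 4 of 5 intervention points exceed max(baseline) = 4
print(pnd(baseline, intervention))  # -> 80.0
```

PND is only one of several overlap-based indices; which effect size is appropriate depends on the design and the presence of trends in the baseline phase.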
In recent years, scholars in the fields of refugee studies and forced migration have extended their areas of interest and research into the phenomenon of displacement, human response to it, and ways to intervene to assist those affected, increasingly focusing on the emotional and social impact of displacement on refugees and their adjustment to traumatic experiences. In the process, the positive concept of "psychosocial wellness" was developed, as discussed in this volume. In it, noted scholars address the strengths and limitations of their investigations, citing examples from their work with refugees from Afghanistan, Cambodia, Vietnam, Palestine, Cuba, Nicaragua, Haiti, Eastern Europe, Bosnia, and Chile. The authors discuss how they define "psychosocial wellness," as well as the issues of sample selection, measurement, reliability and validity, refugee narratives and "voices," and the ability to generalize findings and apply these to other populations. The key question that has guided many of these investigations and underlies the premise of this book is "what happens to an ordinary person who has experienced an extraordinary event?" This volume also highlights the fact that those involved in such research must also deal with their own emotional responses as they hear victims tell of killing, torture, humiliation, and dispossession. The volume will therefore appeal to practitioners of psychology, psychiatry, social work, nursing, and anthropology. However, its breadth and the evaluation of the strengths and disadvantages of both qualitative and quantitative methods also make it an excellent text for students.
Contemporary Psychometrics features cutting edge chapters organized in four sections: test theory, factor analysis, structural equation modeling, and multivariate analysis. The section on test theory includes topics such as multidimensional item response theory (IRT), the relationship between IRT and factor analysis, estimation and testing of these models, and basic measurement issues that are often neglected. The factor analysis section reviews the history and development of the model, factorial invariance and factor analysis indeterminacy, and Bayesian inference for factor scores and parameter estimates. The section on structural equation modeling (SEM) includes the general algebraic-graphic rules for latent variable SEM, a survey of goodness of fit assessment, SEM resampling methods, a discussion of how to compare correlations between and within independent samples, dynamic factor models based on ARMA time series models, and multi-level factor analysis models for continuous and discrete data. The final section on multivariate analysis includes topics such as dual scaling of ordinal data, model specification and missing data problems in time series models, and a discussion of the themes that run through all multivariate methods. This tour de force through contemporary psychometrics will appeal to advanced students and researchers in the social and behavioral sciences and education, as well as methodologists from other disciplines.
Built around a problem-solving theme, this book extends the intermediate and advanced student's expertise to more challenging situations that involve applying statistical methods to real-world problems. Data relevant to these problems are collected and analyzed to provide useful answers. In keeping with this theme, a large number of data sets arising from real problems are contained in the text and in the exercises provided at the end of each chapter. Answers, or hints for arriving at answers, are provided in an appendix. Concentrating largely on the established SPSS and the newer S-Plus statistical packages, the author provides a short end-of-chapter section, Computer Hints, that helps the student undertake the analyses reported in the chapter using these packages.
This volume presents the first wide-ranging critical review of validity generalization (VG)--a method that has dominated the field since the publication of Schmidt and Hunter's (1977) paper "Development of a General Solution to the Problem of Validity Generalization." This paper and the work that followed had a profound impact on the science and practice of applied psychology. The research suggests that fundamental relationships among tests and criteria, and the constructs they represent, are simpler and more regular than they appear. Looking at the history of the VG model and its impact on personnel psychology, top scholars and leading researchers of the field review the accomplishments of the model, as well as the continuing controversies. Several chapters significantly extend maximum likelihood estimation for existing meta-analysis and VG models. Reviewing 25 years of progress in the field, this volume shows how the model can be extended and applied to new problems and domains. This book will be important to researchers and graduate students in the areas of industrial-organizational psychology and statistics.
In this volume, a diverse group of world experts in personality assessment showcase a range of different viewpoints on response distortion. Contributors consider what it means to "fake" a personality assessment, why and how people try to obtain particular scores on personality tests, and what types of tests people can successfully manipulate. The authors present and discuss the usefulness of a range of traditional and cutting-edge methods for detecting and controlling the practice of faking. These methods include social desirability (lie) scales, warnings, affective neutralization, unidimensional and multidimensional pairwise preferences, decision trees, linguistic analysis, situational measures, and methods based on item response theory. The wide range of viewpoints presented in this book are then summarized, synthesized, and evaluated. The authors make practical recommendations and suggest areas for future research. Anyone who wonders whether people exaggerate or lie outright on personality tests -- or questions what psychologists can and should do about it -- will find in this book stimulating questions and useful answers.
Notwithstanding the mythical demise of "introspection," self-observation has always been an integral aspect of the social sciences. In the century following the "behavioral revolution," psychology has seen a reduction not so much in the frequency as in the rigor with which self-observation is practiced. A great deal of self-observation has been renamed or obscured (as, for example, "self-report"), but this has served only to defer and impoverish important theoretical and technical work. This volume, which contributes to the development of a rigorous theory of self-observation, is organized around three general objectives: to re-animate a discourse on self-observation through a historical analysis of various self-observation traditions; to outline and begin to address some of the unique theoretical challenges of self-observation; and to elaborate some of the technical and practical details necessary for realizing a program of research dedicated to self-observation. In the first section of the book, three historians of psychology trace the evolution of self-observation. In the second, three scholars who are currently working in contemporary traditions of self-observation discuss the basic theoretical and practical challenges involved in conducting self-observation research. In the final two sections of the book, scholars from the phenomenological and narrative traditions trace the history, theory, and practice of self-observation in their respective traditions. Self-Observation in the Social Sciences continues the fine tradition set by Transaction's History and Theory of Psychology series edited by Jaan Valsiner. It is of interest to psychologists and to those who study methodology within the social sciences.
The study of intuition and its relation to thoughtful reasoning is a burgeoning research topic in psychology and beyond. While the area has the potential to radically transform our conception of the mind and decision making, the procedures used for establishing empirical conclusions have often been vaguely formulated and obscure. This book fills a gap in the field by providing a range of methods for exploring intuition experimentally and thereby enhancing the collection of new data. The book begins by summarizing current challenges in the study of intuition and gives a new foundation for intuition research. Going beyond classical dual-process models, a new scheme is introduced to classify the different types of processes usually collected under the label of intuition. These new classifications range from learning approaches to complex cue integration models. The book then goes on to describe the wide variety of behavioural methods available to investigate these processes, including information search tracing, think aloud protocols, maximum likelihood methods, eye-tracking, and physiological and non-physiological measures of affective responses. It also discusses paradigms to investigate implicit associations and causal intuitions, video-based approaches to expert research, methods to induce specific decision modes as well as questionnaires to assess individual preferences for intuition or deliberation. By uniquely providing the basis for exploring intuition by introducing the different methods and their applications in a step-by-step manner, this text is an invaluable reference for individual research projects. It is also very useful as a course book for advanced decision making courses, and could inspire experimental explorations of intuition in psychology, behavioural economics, empirical legal studies and clinical decision making.
Age-Period-Cohort Analysis: New Models, Methods, and Empirical Applications is based on a decade of the authors' collaborative work in age-period-cohort (APC) analysis. Within a single, consistent HAPC-GLMM statistical modeling framework, the authors synthesize APC models and methods for three research designs: age-by-time period tables of population rates or proportions, repeated cross-section sample surveys, and accelerated longitudinal panel studies. The authors show how the empirical application of the models to various problems leads to many fascinating findings on how outcome variables develop along the age, period, and cohort dimensions. The book makes two essential contributions to quantitative studies of time-related change. Through the introduction of the GLMM framework, it shows how innovative estimation methods and new model specifications can be used to tackle the "model identification problem" that has hampered the development and empirical application of APC analysis. The book also addresses the major criticism against APC analysis by explaining the use of new models within the GLMM framework to uncover mechanisms underlying age patterns and temporal trends. Encompassing both methodological expositions and empirical studies, this book explores the ways in which statistical models, methods, and research designs can be used to open new possibilities for APC analysis. It compares new and existing models and methods and provides useful guidelines on how to conduct APC analysis. For empirical illustrations, the text incorporates examples from a variety of disciplines, such as sociology, demography, and epidemiology. Along with details on empirical analyses, software and programs to estimate the models are available on the book's web page.
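The "model identification problem" named in the blurb above stems from the exact linear dependency age = period - cohort: the three predictors are perfectly collinear, so a linear model cannot separate their effects without extra constraints. A minimal sketch (with made-up birth cohorts and survey years, not data from the book) makes the dependency concrete.

```python
# Sketch of the APC identification problem: for any respondent,
# age = period - cohort (survey year minus birth year), so the
# identity  age - period + cohort == 0  holds exactly for every row.
# A design matrix with all three columns is therefore rank-deficient.

rows = [
    # (birth cohort, survey period/year)
    (1950, 2000),
    (1960, 2000),
    (1950, 2010),
    (1980, 2010),
]

for cohort, period in rows:
    age = period - cohort
    assert age - period + cohort == 0  # exact collinearity, every row

# Consequence: adding a constant c to the age effect while subtracting
# it appropriately from the period and cohort effects yields identical
# fitted values. HAPC-type models address this by changing the model
# specification, e.g. treating cohort and period as random effects.
```

This is why the blurb stresses that new model specifications within the GLMM framework, rather than clever estimation alone, are what make APC effects estimable.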
Psychologists are under increasing pressure to demonstrate the ecological validity of their assessment procedures--to show that the recommendations concluding their evaluations are relevant to urgent concerns in the legal and social policy arenas, such as predicting dangerousness, awarding compensation, and choosing a custodial parent. How much damage does a referred patient have? Who or what "caused" the damage? What impact will it have on his or her future life, work, and family? And what can be done to remediate the damage? The purpose of this book is to provide sound objective methods for answering these questions. It integrates the knowledge of experienced practitioners who offer state-of-the-art summaries of the best current approaches to evaluating difficult cases with that of basic theorists who describe emerging methods in both predictive and inferential statistics, such as Bayesian networks, that have proven their value in other scientific fields. Arguably, the enterprise of psychological assessment is so interdependent with that of data analysis that attempts to make inferences without consideration of statistical implications is malpractice. Prediction in Forensic and Neuropsychology: Sound Statistical Practices clarifies the process of hypothesis testing and helps to push the clinical interpretation of psychological data into the 21st century. It constitutes a vital resource for all the stakeholders in the assessment process--practitioners, researchers, attorneys, and policymakers.
This book describes a series of ground-breaking residential workshops in therapeutic counselling in the 1960s, for people working in mental health and social care disciplines seeking to expand and deepen their reach. The work is unique in the scope of its research into the process and outcomes of such active immersive enquiry in this area. Besides a wealth of more systematic features, the author invites us into the initial conversations in the meeting room, and then follows the group members back into their lives, allowing us to see both early outcomes and the impact of participation up to ten years later. Finally, Barrett-Lennard reflects on the extended history of the intensive workshops and the related group work in other contexts they led into. He makes a compelling argument that such an intensive participatory process is as powerful today as it was in the 1960s. The blend of rich qualitative and empirical data and theory is a unique strength. It will be a great resource for students and scholars in applied psychology and psychotherapy, as well as for practicing therapists and trainees committed to meaningful work with their client groups.
This new edited volume features contributions from many of the leading scientists in probability and statistics from the latter part of the 20th century. It is the only book to assemble the views of these leading scientists--the pioneers in their respective fields. Stochastic Musings features contributions by:
*Sir David Cox on statistics and econometrics;
*C.R. Rao, M.B. Rao, and D.N. Shanbhag on convex sets of multivariate distributions and their extreme points;
*Bradley Efron on the future of statistics;
*David Freedman on regression association and causation;
*Vic Barnett on sample ordering for effective statistical inference with particular reference to environmental issues;
*David Bartholomew on a unified statistical approach to some measurement problems in the social sciences;
*Joe Gani on scanning a lattice for a particular pattern;
*Leslie Kish on new paradigms for probability sampling (his last paper);
*Samuel Kotz and Norman L. Johnson on limit distributions of uncorrelated but dependent distributions on the unit square;
*Samuel Kotz and Saralees Nadarajah on some new elliptical distributions;
*Jef Teugels on the life span of a renewal;
*Wolfgang Urfer and Katharina Emrich on maximum likelihood estimates of genetic effects; and
*Vladimir M. Zolotarev on convergence rate estimates in functional limit theorems.
The volume also includes the following contributions by faculty members of the Department of Statistics, Athens University of Economics and Business:
*J. Panaretos, E. Xekalaki, and S. Psarakis on a predictive model evaluation and selection approach--the correlated gamma ratio distribution;
*J. Panaretos and Z. Tsourti on extreme value index estimators and smoothing alternatives;
*E. Xekalaki and D. Karlis on mixtures everywhere; and
*Ir. Moustaki on latent variable models with covariates.
Stochastic Musings will appeal to researchers, professionals, and students interested in the history and development of statistics and probability as well as in related areas, such as physics, biometry, economics, and mathematics. Academic and professional statisticians will benefit from the book's coverage of the latest developments in the field, as well as reflections on the future directions of the discipline.
Research today demands the application of sophisticated and powerful research tools. Fulfilling this need, The Oxford Handbook of Quantitative Methods in Psychology is the complete toolbox to deliver the most valid and generalizable answers to today's complex research questions. It is a one-stop source for learning and reviewing current best practices in quantitative methods as practiced in the social, behavioral, and educational sciences. Comprising two volumes, this handbook covers a wealth of topics related to quantitative research methods. It begins with essential philosophical and ethical issues related to science and quantitative research. It then addresses core measurement topics before delving into the design of studies. Principal issues related to modern estimation and mathematical modeling are also detailed. Topics in the handbook then segue into the realm of statistical inference and modeling, with chapters dedicated to classical approaches as well as modern latent variable approaches. Numerous chapters associated with longitudinal data and more specialized techniques round out this broad selection of topics. Comprehensive, authoritative, and user-friendly, this two-volume set will be an indispensable resource for serious researchers across the social, behavioral, and educational sciences.
Horrified by the Holocaust, social psychologist Stanley Milgram wondered if he could recreate the Holocaust in a laboratory setting. Unabated for more than half a century, his (in)famous results have continued to intrigue scholars. Based on unpublished archival data from Milgram's personal collection, volume one of this two-volume set offers readers a behind-the-scenes account showing how, during his unpublished pilot studies, Milgram step-by-step invented his official experimental procedure: how he gradually learnt to transform most ordinary people into willing inflictors of harm. Volume two then illustrates how certain innovators within the Nazi regime used the very same Milgram-like learning techniques that, with increasing effectiveness, gradually enabled them to also transform most ordinary people into increasingly capable executioners of other men, women, and children. Volume two effectively attempts to capture how, step-by-step, these Nazi innovators attempted to transform the Fuhrer's wish of a Jewish-free Europe into a frightening reality. By the book's end the reader will gain insight into how the seemingly undoable can become increasingly doable.
* It is a straightforward, conversational introduction to statistics that delivers exactly what its title promises.
* Each chapter begins with a brief overview of a statistic that describes what the statistic does and when to use it, followed by a detailed step-by-step explanation of how the statistic works and exactly what information it provides.
* Chapters also include an example of the statistic (or statistics) in use in real-world research, "Worked Examples," "Writing It Up" sections that demonstrate how to write about each statistic, "Wrapping Up and Looking Forward" sections, and practice work problems.
* A new chapter on person-centered analyses, including cluster analysis and latent class analysis (LCA), has been added (Chapter 16).
* Person-centered analysis is an important alternative to the more commonly used variable-centered analyses (e.g., t tests, ANOVA, regression) and is gaining popularity in social-science research.
* The chapter on non-parametric statistics (Chapter 14) was enhanced significantly with in-depth descriptions of Mann-Whitney U, Kruskal-Wallis, and Wilcoxon Signed-Rank analyses.
* These non-parametric statistics are important alternatives to statistics that rely on normally distributed data.
* This new edition also includes more information about the assumptions of various statistics, including a detailed explanation of the assumptions and consequences of violating the assumptions of regression (Chapter 13).
* There is more information provided about the importance of the normal distribution in statistics (Chapters 4 and 7).
* Each of the last nine chapters includes an example from the real world of research that employs the statistic, or statistics, covered in the chapter.
* Altogether, these improvements provide important foundational information about how inferential statistics work and additional statistical tools that are commonly used by researchers in the social sciences.
* The text works as a standalone or as a supplement and covers a range of statistical concepts from descriptive statistics to factor analysis and person-centered analyses.
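The Mann-Whitney U test named in the non-parametric chapter above can be illustrated with a short sketch: rank all observations from both groups jointly (using midranks for ties), sum the ranks of the first group, and compute U1 = R1 - n1(n1 + 1)/2. The helper function and data below are invented for illustration and are not from the book.

```python
# Illustrative computation of the Mann-Whitney U statistic.
# Joint ranking with midranks for ties; U1 = R1 - n1*(n1+1)/2.

def mann_whitney_u(a, b):
    """Return U1 for group a versus group b (no continuity correction)."""
    combined = sorted(a + b)

    def midrank(x):
        # average rank position of value x in the pooled sample
        positions = [i for i, v in enumerate(combined, start=1) if v == x]
        return sum(positions) / len(positions)

    r1 = sum(midrank(x) for x in a)          # rank sum of group a
    return r1 - len(a) * (len(a) + 1) / 2    # U1

a = [1, 3, 5]
b = [2, 4, 6]
print(mann_whitney_u(a, b))  # -> 3.0
```

Equivalently, U1 counts the pairs (x in a, y in b) with x > y, counting ties as half; here 3 of the 9 pairs favor group a, matching the result above.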
First published in 1985, Ethical Issues in Psychosurgery examines the continuing debate surrounding the treatment of psychiatric disorder by psychosurgery and its ethical implications. Psychosurgery represents a radical treatment, and it therefore raises, in a particularly acute and challenging fashion, questions which are implicit in most therapy. The book offers a focussed study in bioethics and a model for bioethical inquiry, as well as an introduction to some of the major problems in bioethics. These range from detailed discussions of informed consent, the sanctity of the brain, and the use of experimental therapies, to wider questions of social contract and professionalization. John Kleinig's balanced and informed treatment of the questions will make this book invaluable not only to those concerned with the philosophy of legal and medical ethics, but also to those in the fields of psychiatric practice and research.
This research volume serves as a comprehensive resource for psychophysiological research on media responses. It addresses the theoretical underpinnings, methodological techniques, and most recent research in this area. It goes beyond current volumes by placing the research techniques within a context of communication processes and effects as a field, and demonstrating how the real-time measurement of physiological responses enhances and complements more traditional measures of psychological effects from media. This volume introduces readers to the theoretical assumptions of psychophysiology as well as the operational details of collecting psychophysiological data. In addition to discussing specific measures, it includes brief reviews of recent experiments that have used psychophysiological measures to study how the brain processes media. It will serve as a valuable reference for media researchers utilizing these methodologies, or for other researchers needing to understand the theories, history, and methods of psychophysiological research.
Modeled after Barbara Byrne's other best-selling structural equation modeling (SEM) books, this practical guide reviews the basic concepts and applications of SEM using Mplus Version 6. The author reviews SEM applications based on actual data taken from her own research. Using non-mathematical language, it is written for the novice SEM user. In each application chapter, the author "walks" the reader through all steps involved in testing the SEM model.
The first two chapters introduce the fundamental concepts of SEM and important basics of the Mplus program. The remaining chapters focus on SEM applications and include a variety of SEM models presented within the context of three sections: Single-group analyses, Multiple-group analyses, and other important topics, the latter of which includes the multitrait-multimethod, latent growth curve, and multilevel models. Intended for researchers, practitioners, and students who use SEM and Mplus, this book is an ideal resource for graduate level courses on SEM taught in psychology, education, business, and other social and health sciences and/or as a supplement for courses on applied statistics, multivariate statistics, intermediate or advanced statistics, and/or research design. Appropriate for those with limited exposure to SEM or Mplus, a prerequisite of basic statistics through regression analysis is recommended.
This book provides accessible treatment to state-of-the-art approaches to analyzing longitudinal studies. Comprehensive coverage of the most popular analysis tools allows readers to pick and choose the techniques that best fit their research. The analyses are illustrated with examples from major longitudinal data sets including practical information about their content and design. Illustrations from popular software packages offer tips on how to interpret the results. Each chapter features suggested readings for additional study and a list of articles that further illustrate how to implement the analysis and report the results. Syntax examples for several software packages for each of the chapter examples are provided at www.psypress.com/longitudinal-data-analysis. Although many of the examples address health or social science questions related to aging, readers from other disciplines will find the analyses relevant to their work. In addition to demonstrating statistical analysis of longitudinal data, the book shows how to interpret and analyze the results within the context of the research design. The methods covered in this book are applicable to a range of applied problems including short- to long-term longitudinal studies using a range of sample sizes. The book provides non-technical, practical introductions to the concepts and issues relevant to longitudinal analysis. Topics include use of publicly available data sets, weighting and adjusting for complex sampling designs with longitudinal studies, missing data and attrition, measurement issues related to longitudinal research, the use of ANOVA and regression for average change over time, mediation analysis, growth curve models, basic and advanced structural equation models, and survival analysis.
An ideal supplement for graduate level courses on data analysis and/or longitudinal modeling taught in psychology, gerontology, public health, human development, family studies, medicine, sociology, social work, and other behavioral, social, and health sciences, this multidisciplinary book will also appeal to researchers in these fields.
WISC-V: Clinical Use and Interpretation, Second Edition provides practical information for clinicians on the selection of subtest measures, along with their proper administration and interpretation. Full Scale IQ is identified as important for predicting relevant behaviors and primary index scores for characterizing the child's strengths and weaknesses. Classroom indicators of low scores on each of these abilities are identified, with suggested interventions, accommodations and instructional strategies for low scorers. Coverage includes ethnic differences for the Full Scale IQ and each primary index score, along with evidence of the profound influence of parental attitudes and expectations. Several other societal and contextual factors relevant to understanding racial/ethnic differences are presented. Two chapters review use of the WISC-V for identifying learning disabilities, testing of individuals with dyslexia, and best-practice recommendations to ensure accurate diagnosis and intervention. Concluding chapters describe advances in the Q-interactive system platform allowing administration of the WISC-V on iPads and other tablets, and how clinicians can tailor assessment using select WISC-V subtests and features.
Haptics technology is being used more and more in different applications, such as in computer games for increased immersion, in surgical simulators to create a realistic environment for training of surgeons, in surgical robotics due to safety issues and in mobile phones to provide feedback from user action. The existence of these applications highlights a clear need to understand performance metrics for haptic interfaces and their implications on device design, use and application. Performance Metrics for Haptic Interfaces aims at meeting this need by establishing standard practices for the evaluation of haptic interfaces and by identifying significant performance metrics. Towards this end, a combined physical and psychophysical experimental methodology is presented. Firstly, existing physical performance measures and device characterization techniques are investigated and described in an illustrative way. Secondly, a wide range of human psychophysical experiments are reviewed and the appropriate ones are applied to haptic interactions. The psychophysical experiments are unified as a systematic and complete evaluation method for haptic interfaces. Finally, synthesis of both evaluation methods is discussed. The metrics provided in this state-of-the-art volume will guide readers in evaluating the performance of any haptic interface. The generic methodology will enable researchers to experimentally assess the suitability of a haptic interface for a specific purpose, to characterize and compare devices quantitatively and to identify possible improvement strategies in the design of a system.