Psychiatric clinicians should use rating scales and questionnaires often, for they not only facilitate targeted diagnosis and treatment but also provide links to the empirical literature and systematize the entire process of management. Clinically oriented and highly practical, the Handbook of Clinical Rating Scales and Assessment in Psychiatry and Mental Health is an ideal tool for the busy psychiatrist, clinical psychologist, family physician, or social worker. In this ground-breaking text, leading researchers review the most commonly used outcome and screening measures for the major psychiatric diagnoses and treatment scenarios. The full range of psychiatric disorders is covered in brief but thorough chapters, each of which provides a concise review of measurement issues related to the relevant condition, along with recommendations on which dimensions to measure - and when. The Handbook also includes ready-to-photocopy versions of the most popular, valid, and reliable scales and checklists, along with scoring keys and links to websites containing online versions. Moreover, the Handbook describes well-known structured diagnostic interviews and the specialized training requirements for each. It also includes details of popular psychological tests (such as neuropsychological, personality, and projective tests), along with practical guidelines on when to request psychological testing, how to discuss the case with the assessment consultant, and how to integrate information from the final testing report into treatment. Focused and immensely useful, the Handbook of Clinical Rating Scales and Assessment in Psychiatry and Mental Health is an invaluable resource for all clinicians who care for patients with psychiatric disorders.
Ivan P. Pavlov was a pioneering Russian physiologist whose influence on Russian psychology was politically emphasized from the 1930s to the 1950s. He was a brilliant experimenter who received the 1904 Nobel Prize in Physiology or Medicine for his work on the digestive system. Less is known about his epistemology of generalization, which made it possible to study one individual for the sake of obtaining generalized knowledge. In this volume we analyze the major contributions of Pavlov from the standpoint of idiographic science, and demonstrate how generalizations in science are possible from single specimens.
Featuring contributions from some of the leading researchers in the field of SEM, most chapters are written by the author(s) who originally proposed the technique and/or contributed substantially to its development. Content highlights include latent variable mixture modeling, multilevel modeling, interaction modeling, models for dealing with nonstandard and noncompliance samples, the latest on the analysis of growth curve and longitudinal data, specification searches, item parceling, and equivalent models. This volume will appeal to educators, psychologists, biologists, business professionals, medical researchers, and other social and health scientists. It is assumed that the reader has mastered the equivalent of a graduate-level multivariate statistics course that included coverage of introductory SEM techniques.
Understanding how our brains and bodies actually work is a powerful tool in mitigating the anxiety generated by unpleasant physical and emotional symptoms that we all may experience from time to time. Here, Robert Scaer unravels the complexities of the brain-body connection, equipping all those who are in distress with a plausible explanation for how they feel. Making the science accessible, he outlines the core neurobiological concepts underlying the brain-body interface and explains why physical and emotional symptoms of stress and trauma occur. He explains why "feelings" represent physical sensations that inform us about the nature of our brain-body conflicts. He also offers practical, easy-to-implement strategies for strengthening motor skills, learning to listen to our gut to gauge our feelings, attuning to the present, and restoring personal boundaries to relieve symptoms and navigate a path to recovery.
Understanding Statistics in Psychology with SPSS, eighth edition, offers students a trusted, straightforward, and engaging way of learning to do statistical analyses confidently using SPSS. Comprehensive and practical, the text is organised into short, accessible chapters, making it the ideal text for undergraduate psychology students needing to get to grips with statistics in class or independently. Clear diagrams and full-colour screenshots from SPSS make the text suitable for beginners, while the broad coverage of topics ensures that students can continue to use it as they progress to more advanced techniques. Key features:
* Combines coverage of statistics with full guidance on how to use SPSS to analyse data.
* Suitable for use with all versions of SPSS.
* Examples from a wide range of real psychological studies illustrate how statistical techniques are used in practice.
* Includes clear and detailed guidance on choosing tests, interpreting findings, and reporting and writing up research.
* Student-focused pedagogical approach, including:
  o Key concept boxes detailing important terms.
  o Focus on sections exploring complex topics in greater depth.
  o Explaining statistics sections clarifying important statistical concepts.
Dennis Howitt and Duncan Cramer are with Loughborough University.
In this book, experts in statistics and psychometrics describe classes of linkages, the history of score linkings, data collection designs, and methods used to achieve sound score linkages. They describe and critically discuss applications to a variety of domains. They define what linking is, distinguish among the varieties of linking, and describe different procedures for linking. Furthermore, they convey the complexity and diversity of linking by covering different areas of linking and providing diverse perspectives.
"Models of Psychological Space" begins the reformulation of the construct of psychological space by bringing together in one volume a sampling of theoretical models from the psychometric, developmental, and experimental approaches. The author also discusses five general issues which cut across these three approaches: namely, age-related differences, sex-related differences, trainability, imagery processing solutions, and the effect of stimulus dimensionality upon spatial performance. "Models of Psychological Space" provides an overview of a significant construct which has many researchable ideas, and which should be of interest to scholars from a wide range of disciplines.
This book examines extensions of the Rasch model, one of the most researched and applied models in educational research and social science. This collection contains 22 chapters by some of the most renowned international experts in the field. They cover topics ranging from general model extensions to applications in fields as diverse as cognition, personality, organizational and sports psychology, and health sciences and education.
This is the second edition of the comprehensive treatment of statistical inference using permutation techniques. It makes available to practitioners a variety of useful and powerful data analytic tools that rely on very few distributional assumptions. Although many of these procedures have appeared in journal articles, they are not readily available to practitioners. This new and updated edition places increased emphasis on the use of alternative permutation statistical tests based on metric Euclidean distance functions that have excellent robustness characteristics. These alternative permutation techniques provide many powerful multivariate tests including multivariate multiple regression analyses.
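The permutation approach the blurb describes can be illustrated in miniature. The sketch below (not an example from the book; the data are hypothetical) runs an exact two-sample permutation test on the absolute difference of means, enumerating every regrouping of the pooled observations:

```python
import itertools
import statistics

def permutation_test(x, y):
    """Exact two-sample permutation test on |difference of means|.

    Enumerates every split of the pooled observations into groups of
    the original sizes; the p-value is the fraction of splits whose
    statistic is at least as extreme as the one actually observed.
    """
    pooled = x + y
    observed = abs(statistics.mean(x) - statistics.mean(y))
    count = total = 0
    for idx in itertools.combinations(range(len(pooled)), len(x)):
        chosen = set(idx)
        gx = [pooled[i] for i in idx]
        gy = [pooled[i] for i in range(len(pooled)) if i not in chosen]
        # Small tolerance guards against float round-off in the means.
        if abs(statistics.mean(gx) - statistics.mean(gy)) >= observed - 1e-12:
            count += 1
        total += 1
    return count / total

# Tiny hypothetical samples: every value in x exceeds every value in y,
# so only the observed split and its mirror are as extreme (p = 2/20).
p = permutation_test([12.1, 11.8, 12.4], [10.2, 10.5, 10.1])
```

The only distributional assumption is exchangeability under the null hypothesis, which is why such tests need so few assumptions; for larger samples one would sample random permutations rather than enumerate all of them.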
The study of brain function is one of the most fascinating pursuits of modern science. Functional neuroimaging is an important component of much of the current research in cognitive, clinical, and social psychology. The excitement of studying the brain is recognized in both the popular press and the scientific community. In the pages of mainstream publications, including The New York Times and Wired, readers can learn about cutting-edge research into topics such as understanding how customers react to products and advertisements ("If your brain has a 'buy button,' what pushes it?," The New York Times, October 19, 2004), how viewers respond to campaign ads ("Using M.R.I.'s to see politics on the brain," The New York Times, April 20, 2004; "This is your brain on Hillary: Political neuroscience hits new low," Wired, November 12, 2007), how men and women react to sexual stimulation ("Brain scans arouse researchers," Wired, April 19, 2004), distinguishing lies from the truth ("Duped," The New Yorker, July 2, 2007; "Woman convicted of child abuse hopes fMRI can prove her innocence," Wired, November 5, 2007), and even what separates "cool" people from "nerds" ("If you secretly like Michael Bolton, we'll know," Wired, October 2004). Reports on pathologies such as autism, in which neuroimaging plays a large role, are also common (for instance, a Time magazine cover story from May 6, 2002, entitled "Inside the world of autism").
WINNER OF THE 2007 DEGROOT PRIZE. The prominence of finite mixture modelling is greater than ever. Many important statistical topics, such as clustering data, outlier treatment, and dealing with unobserved heterogeneity, involve finite mixture models in some way or other. The area of potential applications goes beyond simple data analysis and extends to regression analysis and to non-linear time series analysis using Markov switching models. In the more than one hundred years since Karl Pearson showed in 1894 how to estimate the five parameters of a mixture of two normal distributions using the method of moments, statistical inference for finite mixture models has been a challenge to everybody who deals with them. In the past ten years, very powerful computational tools have emerged for dealing with these models, combining a Bayesian approach with Monte Carlo simulation techniques based on Markov chains. This book reviews these techniques and covers the most recent advances in the field, among them bridge sampling techniques and reversible jump Markov chain Monte Carlo methods. It is the first time that the Bayesian perspective on finite mixture modelling has been systematically presented in book form. It is argued that the Bayesian approach provides much insight in this context and is easily implemented in practice. Although the main focus is on Bayesian inference, the author reviews several frequentist techniques, especially for selecting the number of components of a finite mixture model, and discusses some of their shortcomings compared to the Bayesian approach. The aim of this book is to impart the finite mixture and Markov switching approach to statistical modelling to a wide-ranging community. This includes not only statisticians but also biologists, economists, engineers, financial agents, market researchers, medical researchers, and any other frequent users of statistical models.
This book should help newcomers to the field understand how finite mixture and Markov switching models are formulated, what structures they imply on the data, what they can be used for, and how they are estimated. Researchers familiar with the subject will also profit from reading this book. The presentation is rather informal without abandoning mathematical correctness. Prior familiarity with Bayesian inference and Monte Carlo simulation is useful but not required.
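Pearson's 1894 problem - estimating the five parameters of a two-component normal mixture (two means, two variances, and a mixing weight) - can be sketched in a few lines. The code below is not from the book (whose focus is Bayesian MCMC rather than this frequentist route); it fits the same model by the EM algorithm on simulated data, with all names and data hypothetical:

```python
import math
import random

def em_two_normal_mixture(data, iters=200):
    """Fit a two-component normal mixture by EM (illustrative sketch)."""
    xs = sorted(data)
    half = len(xs) // 2
    # Crude initialisation: means of the lower and upper halves.
    mu = [sum(xs[:half]) / half, sum(xs[half:]) / (len(xs) - half)]
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in data:
            dens = [w[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                    / math.sqrt(2 * math.pi * var[k]) for k in (0, 1)]
            s = dens[0] + dens[1]
            resp.append([d / s for d in dens])
        # M-step: re-estimate weights, means, and variances.
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk, 1e-6)
    return w, mu, var

# Simulated data from two well-separated normals (illustrative only).
random.seed(0)
data = ([random.gauss(0.0, 1.0) for _ in range(150)]
        + [random.gauss(8.0, 1.0) for _ in range(150)])
w, mu, var = em_two_normal_mixture(data)
```

With well-separated components the recovered means land near the true values of 0 and 8; the Bayesian MCMC machinery the book covers addresses the harder cases (overlapping components, label switching, unknown component counts) where this simple sketch struggles.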
"Polygraphy," "lie detection," and the "detection of deception" are all terms that refer to an application of the science of psychophysiology, which itself employs physiological measures to study and differentiate between psychological processes. The issues raised by polygraphy are controversial. One such issue is whether the polygraph is a genuinely scientifically based application, or merely a purported application, of psychophysiology. Such concerns are of interest not only to polygraph practitioners and to specialists in psychophysiology, but also to other specialists, such as those in the legal and forensic professions. Moreover, there are two sorts of nonspecialists who should also be concerned. On the one hand, there are the potential "users" of the polygraph - for example, a manager who employs a polygrapher to check on subordinates; on the other hand, there are those "used by" the polygraph - the employee who is subjected to the polygraphic examination. To begin with the user of the polygraph, this person should know not only about its overall accuracy, but also about the rationales of the various detection methods and their validity for different purposes in different sorts of situations. This information is important, because even for the potential user there are costs as well as benefits. Aside from the lack of trust generated by the polygraph, there have also been successful suits by employees against employers, so there are traps in polygraph usage that employers (and managers) need to keep in mind.
Sample surveys provide data used by researchers in a large range of disciplines to analyze important relationships using well-established and widely used likelihood methods. The methods used to select samples often result in the sample differing in important ways from the target population and standard application of likelihood methods can lead to biased and inefficient estimates. Maximum Likelihood Estimation for Sample Surveys presents an overview of likelihood methods for the analysis of sample survey data that account for the selection methods used, and includes all necessary background material on likelihood inference. It covers a range of data types, including multilevel data, and is illustrated by many worked examples using tractable and widely used models. It also discusses more advanced topics, such as combining data, non-response, and informative sampling. The book presents and develops a likelihood approach for fitting models to sample survey data. It explores and explains how the approach works in tractable though widely used models for which we can make considerable analytic progress. For less tractable models numerical methods are ultimately needed to compute the score and information functions and to compute the maximum likelihood estimates of the model parameters. For these models, the book shows what has to be done conceptually to develop analyses to the point that numerical methods can be applied. Designed for statisticians who are interested in the general theory of statistics, Maximum Likelihood Estimation for Sample Surveys is also aimed at statisticians focused on fitting models to sample survey data, as well as researchers who study relationships among variables and whose sources of data include surveys.
Wim van der Linden was recently given a lifetime achievement award by the National Council on Measurement in Education; no one is more prominent in the area of educational testing. There are hundreds of computer-based credentialing exams in areas such as accounting, real estate, nursing, and securities, as well as the well-known admissions exams for college, graduate school, medical school, and law school, so there is great need for a theory of testing. This book presents the statistical theory and practice behind constructing good tests: for example, how is the first test item selected, how are the next items selected, and when do you have enough items?
This textbook is for graduate students and research workers in social statistics and related subject areas. It follows a novel curriculum developed around the basic statistical activities: sampling, measurement, and inference. The monograph aims to prepare the reader for the career of an independent social statistician and to serve as a reference for methods, ideas, and ways of studying human populations. Elementary linear algebra and calculus are prerequisites, although the exposition is quite forgiving. Familiarity with statistical software at the outset is an advantage, but it can be developed while reading the first few chapters.
This book presents the state of the art in multilevel analysis, with an emphasis on more advanced topics. These topics are discussed conceptually, analyzed mathematically, and illustrated by empirical examples. Multilevel analysis is the statistical analysis of hierarchically and non-hierarchically nested data. The simplest example is clustered data, such as a sample of students clustered within schools. Multilevel data are especially prevalent in the social and behavioral sciences and in the biomedical sciences. The chapter authors are all leading experts in the field. Given the omnipresence of multilevel data in the social, behavioral, and biomedical sciences, this book is essential for empirical researchers in these fields.
Psychoanalytic infant observation is frequently used in training psychoanalytic psychotherapists and allied professionals, but increasingly its value as a research method is being recognised, particularly in understanding developmental processes in vulnerable individuals and groups. This book explores the scope of this approach and discusses its strengths and limitations from a methodological and philosophical point of view. Infant Observation and Research uses detailed case studies to demonstrate the research potential of the infant observation method, and is divided into three sections.
Throughout the book, Cathy Urwin, Janine Sternberg and their contributors introduce the reader to the nature and value of psychoanalytic infant observation and its range of application. This book will therefore interest a range of mental health practitioners concerned with early development and infants' emotional relationships, as well as academics and researchers in the social sciences and humanities.
This is the first workbook that introduces the multilevel approach to modeling with categorical outcomes using IBM SPSS Version 20. Readers learn how to develop, estimate, and interpret multilevel models with categorical outcomes. The authors walk readers through data management, diagnostic tools, model conceptualization, and model specification issues related to single-level and multilevel models with categorical outcomes. Screen shots clearly demonstrate techniques and navigation of the program. Modeling syntax is provided in the appendix. Examples of various types of categorical outcomes demonstrate how to set up each model and interpret the output. Extended examples illustrate the logic of model development, interpretation of output, the context of the research questions, and the steps around which the analyses are structured. Readers can replicate examples in each chapter by using the corresponding data and syntax files available at www.psypress.com/9781848729568. The book opens with a review of multilevel modeling with categorical outcomes, followed by a chapter on IBM SPSS data management techniques to facilitate working with multilevel and longitudinal data sets. Chapters 3 and 4 detail the basics of the single-level and multilevel generalized linear model for various types of categorical outcomes. These chapters review underlying concepts to assist with troubleshooting common programming and modeling problems. Next, population-average and unit-specific longitudinal models for investigating individual or organizational developmental processes are developed. Chapter 6 focuses on single- and multilevel models using multinomial and ordinal data, followed by a chapter on models for count data. The book concludes with additional troubleshooting techniques and tips for expanding on the modeling techniques introduced. Ideal as a supplement for graduate-level courses and/or professional workshops on multilevel, longitudinal, latent variable modeling, multivariate statistics, and/or advanced quantitative techniques taught in psychology, business, education, health, and sociology, this practical workbook also appeals to researchers in these fields. An excellent follow-up to the authors' highly successful Multilevel and Longitudinal Modeling with IBM SPSS and Introduction to Multilevel Modeling Techniques, 2nd Edition, this book can also be used with any multilevel and/or longitudinal book or as a stand-alone text introducing multilevel modeling with categorical outcomes.
Behavioral scientists - including those in psychology, infant and child development, education, animal behavior, marketing and usability studies - use many methods to measure behavior. Systematic observation is used to study relatively natural, spontaneous behavior as it unfolds sequentially in time. This book emphasizes digital means to record and code such behavior; while observational methods do not require them, they work better with them. Key topics include devising coding schemes, training observers and assessing reliability, as well as recording, representing and analyzing observational data. In clear and straightforward language, this book provides a thorough grounding in observational methods along with considerable practical advice. It describes standard conventions for sequential data and details how to perform sequential analysis with a computer program developed by the authors. The book is rich with examples of coding schemes and different approaches to sequential analysis, including both statistical and graphical means.
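The core of the sequential analysis the blurb describes - tallying how often one behavior code follows another - can be sketched in a few lines. This is not the authors' program; the codes and session below are hypothetical:

```python
from collections import Counter

def transition_counts(codes):
    """Tally lag-1 transitions (current code, next code) in a sequence."""
    return Counter(zip(codes, codes[1:]))

def transition_probs(codes):
    """Row-normalise the tallies into P(next code | current code)."""
    counts = transition_counts(codes)
    row_totals = Counter(codes[:-1])  # how often each code precedes another
    return {pair: n / row_totals[pair[0]] for pair, n in counts.items()}

# Hypothetical coded observation session.
codes = ["talk", "play", "talk", "play", "cry"]
probs = transition_probs(codes)
```

Here "talk" is always followed by "play", while "play" is followed by "talk" or "cry" half the time each; real sequential analysis goes on to test whether such conditional probabilities differ from chance.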
Item response theory has become an essential component in the toolkit of every researcher in the behavioral sciences. It provides a powerful means to study individual responses to a variety of stimuli, and the methodology has been extended and developed to cover many different models of interaction. This volume presents a wide-ranging handbook to item response theory - and its applications to educational and psychological testing. It will serve as both an introduction to the subject and also as a comprehensive reference volume for practitioners and researchers. It is organized into six major sections: the nominal categories model, models for response time or multiple attempts on items, models for multiple abilities or cognitive components, nonparametric models, models for nonmonotone items, and models with special assumptions. Each chapter in the book has been written by an expert of that particular topic, and the chapters have been carefully edited to ensure that a uniform style of notation and presentation is used throughout. As a result, all researchers whose work uses item response theory will find this an indispensable companion to their work and it will be the subject's reference volume for many years to come.
Presenting an innovative take on researching early childhood, this book provides an international comparison of the cultural and familial influences that shape the growth of young children. The book presents a unique methodology, and includes chapters on musicality, security, humour and eating.
The Analytic Network Process (ANP) developed by Thomas Saaty in his work on multicriteria decision making applies network structures with dependence and feedback to complex decision making. This book is a selection of applications of ANP to economic, social and political decisions, and also to technological design. The chapters comprise contributions of scholars, consultants and people concerned about the outcome of certain important decisions who applied the Analytic Network Process to determine the best outcome for each decision from among several potential outcomes. The ANP is a methodological tool that is helpful to organize knowledge and thinking, elicit judgments registered both in memory and in feelings, quantify the judgments and derive priorities from them, and finally synthesize these diverse priorities into a single mathematically and logically justifiable overall outcome. In the process of deriving this outcome, the ANP also allows for the representation and synthesis of diverse opinions in the midst of discussion and debate. The ANP offers economists a considerably different approach for dealing with economic problems than the usual quantitative models. The ANP approach is based on absolute scales used to represent pairwise comparison judgments in the context of dominance with respect to a property shared by the homogeneous elements being compared: how much, or how many times more, does A dominate B with respect to property P? People are able to answer this question by using words to indicate intensity of dominance, as all of us are equipped biologically to do all the time (equal, moderate, strong, very strong, and extreme); the conversion of these words to numbers, their validation, and their extension to inhomogeneous elements form the foundation of the AHP/ANP.
Numerous applications of the ANP have been made to economic problems, among them predictions of the turnaround dates for the US economy in the early 1990s and again in 2001, whose accuracy and validity were both later confirmed in the news. They were based on comparisons of mostly intangible factors rather than on financial, employment, and other data and statistics.
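The pairwise-comparison machinery described above can be illustrated with a small sketch (not Saaty's software; the matrix and alternatives are hypothetical). In AHP/ANP-style analysis, priorities are derived as the principal eigenvector of a reciprocal judgment matrix, which power iteration finds:

```python
def priority_vector(matrix, iters=100):
    """Derive a priority vector from a reciprocal pairwise-comparison
    matrix as its principal eigenvector, via power iteration."""
    n = len(matrix)
    v = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        v = [x / s for x in w]  # renormalise so priorities sum to 1
    return v

# Hypothetical, perfectly consistent judgments on three alternatives:
# A dominates B moderately (3), B dominates C moderately (3),
# so A dominates C extremely (9); reciprocals fill the lower triangle.
pairwise = [
    [1.0,     3.0, 9.0],
    [1 / 3,   1.0, 3.0],
    [1 / 9, 1 / 3, 1.0],
]
weights = priority_vector(pairwise)
```

For a perfectly consistent matrix like this one, the priorities are simply any column renormalised (9/13, 3/13, 1/13); the eigenvector computation earns its keep when real judgments are mildly inconsistent.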
The Handbook of Psychodiagnostic Testing is an invaluable aid to students and professionals performing psychological assessments. It takes the reader from client referral to finished report, demonstrating how to synthesize details of personality and pathology into a document that is focused, coherent, and clinically meaningful. This new edition covers emerging areas in borderline and narcissistic pathologies, psychological testing of preschool children, and bilingual populations. It also discusses the most current clinical issues and the evaluation of populations on which standard psychological tests have not been normed.
In the 1970s, item response theory became the dominant topic for study by measurement specialists. But the genesis of item response theory (IRT) can be traced back to the mid-1930s and early 1940s. In fact, the term "item characteristic curve," which is one of the main IRT concepts, can be attributed to Ledyard Tucker in 1946. Despite these early research efforts, interest in item response theory lay dormant until the late 1960s and took a backseat to the emerging development of strong true score theory. While true score theory developed rapidly and drew the attention of leading psychometricians, the problems and weaknesses inherent in its formulation began to raise concerns. Such problems as the lack of invariance of item parameters across examinee groups, and the inadequacy of classical test procedures to detect item bias or to provide a sound basis for measurement in "tailored testing," gave rise to a resurgence of interest in item response theory. Impetus for the development of item response theory as we now know it was provided by Frederic M. Lord through his pioneering works (Lord, 1952, 1953a, 1953b). Progress in the 1950s was painstakingly slow due to the mathematical complexity of the topic and the nonexistence of computer programs.