Sample data alone never suffice to draw conclusions about populations. Inference always requires assumptions about the population and sampling process. Statistical theory has revealed much about how the strength of assumptions affects the precision of point estimates, but has had much less to say about how it affects the identification of population parameters. Indeed, it has been commonplace to think of identification as a binary event - a parameter is either identified or not - and to view point identification as a precondition for inference. Yet there is enormous scope for fruitful inference using data and assumptions that partially identify population parameters. This book explains why and shows how. The book presents in a rigorous and thorough manner the main elements of Charles Manski's research on partial identification of probability distributions. One focus is prediction with missing outcome or covariate data. Another is decomposition of finite mixtures, with application to the analysis of contaminated sampling and ecological inference. A third major focus is the analysis of treatment response. Whatever the particular subject under study, the presentation follows a common path. The author first specifies the sampling process generating the available data and asks what may be learned about population parameters using the empirical evidence alone. He then asks how the (typically) set-valued identification regions for these parameters shrink if various assumptions are imposed. The approach to inference that runs throughout the book is deliberately conservative and thoroughly nonparametric. Conservative nonparametric analysis enables researchers to learn from the available data without imposing untenable assumptions. It enables the establishment of a domain of consensus among researchers who may hold disparate beliefs about what assumptions are appropriate. Charles F. Manski is Board of Trustees Professor at Northwestern University. He is the author of Identification Problems in the Social Sciences and Analog Estimation Methods in Econometrics. He is a Fellow of the American Academy of Arts and Sciences, the American Association for the Advancement of Science, and the Econometric Society.
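To give a feel for the kind of identification region the book develops, here is a minimal worked example of the standard worst-case bound on a mean with missing outcome data; the notation is illustrative and not taken from the book. Suppose an outcome y is known to lie in [0, 1] and is observed only when an indicator z equals 1. The law of total expectation gives

    E[y] = E[y | z = 1] * P(z = 1) + E[y | z = 0] * P(z = 0),

and since the unobservable conditional mean E[y | z = 0] can lie anywhere in [0, 1], the empirical evidence alone bounds E[y] by

    E[y | z = 1] * P(z = 1)  <=  E[y]  <=  E[y | z = 1] * P(z = 1) + P(z = 0).

The width of this identification region equals P(z = 0), the probability of missing data; assumptions about the missingness process can only shrink it.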
The most crucial choice a high school graduate makes is whether to attend college or to go to work. Here is the most sophisticated study of the complexities behind that decision. Based on a unique data set of nearly 23,000 seniors from more than 1,300 high schools who were tracked over several years, the book treats the following questions in detail: Who goes to college? Does low family income prevent some young people from enrolling, or does scholarship aid offset financial need? How important are scholastic aptitude scores, high school class rank, race, and socioeconomic background in determining college applications and admissions? Do test scores predict success in higher education? Using the data from the National Longitudinal Study of the Class of 1972, the authors present a set of interrelated analyses of student and institutional behavior, each focused on a particular aspect of the process of choosing and being chosen by a college. Among their interesting findings: most high school graduates would be admitted to some four-year college of average quality, were they to apply; applicants do not necessarily prefer the highest-quality school; high school class rank and SAT scores are equally important in college admissions; federal scholarship aid has had only a small effect on enrollments at four-year colleges but a much stronger effect on attendance at two-year colleges; the attention paid to SAT scores in admissions is commensurate with the power of the scores in predicting persistence to a degree. This clearly written book is an important source of information on a perpetually interesting topic.
Economists and psychologists have, on the whole, exhibited sharply different perspectives on the elicitation of preferences. Economists, who have made preference the central primitive in their thinking about human behavior, have for the most part rejected elicitation and have instead sought to infer preferences from observations of choice behavior. Psychologists, who have tended to think of preference as a context-determined subjective construct, have embraced elicitation as their dominant approach to measurement. This volume, based on a symposium organized by Daniel McFadden at the University of California at Berkeley, provides a provocative and constructive engagement between economists and psychologists on the elicitation of preferences.
How cutting-edge economics can improve decision-making methods for doctors. Although uncertainty is a common element of patient care, it has largely been overlooked in research on evidence-based medicine. Patient Care under Uncertainty strives to correct this glaring omission. Applying the tools of economics to medical decision making, Charles Manski shows how uncertainty influences every stage, from risk analysis to treatment, and how this can be reasonably confronted. In the language of econometrics, uncertainty refers to the inadequacy of available evidence and knowledge to yield accurate information on outcomes. In the context of health care, a common example is a choice between periodic surveillance or aggressive treatment of patients at risk for a potential disease, such as women prone to breast cancer. While these choices make use of data analysis, Manski demonstrates how statistical imprecision and identification problems often undermine clinical research and practice. Reviewing prevailing practices in contemporary medicine, he discusses the controversy regarding whether clinicians should adhere to evidence-based guidelines or exercise their own judgment. He also critiques the wishful extrapolation of research findings from randomized trials to clinical practice. Exploring ways to make more sensible judgments with available data, to credibly use evidence, and to better train clinicians, Manski helps practitioners and patients face uncertainties honestly. He concludes by examining patient care from a public health perspective and the management of uncertainty in drug approvals. Rigorously interrogating current practices in medicine, Patient Care under Uncertainty explains why predictability in the field has been limited and furnishes criteria for more cogent steps forward.
Public policy advocates routinely assert that research has shown a particular policy to be desirable. But how reliable is the analysis in the research they invoke? And how does that analysis affect the way policy is made, on issues ranging from vaccination to the minimum wage to FDA drug approval? Charles Manski argues here that current policy is based on untrustworthy analysis. By failing to account for uncertainty in an unpredictable world, policy analysis misleads policy makers with expressions of certitude. "Public Policy in an Uncertain World" critiques the status quo and offers an innovation to improve how policy research is conducted and how policy makers use research. Consumers of policy analysis, whether civil servants, journalists, or concerned citizens, need to understand research methodology well enough to properly assess reported findings. In the current model, policy researchers base their predictions on strong assumptions. But as Manski demonstrates, strong assumptions lead to less credible predictions than weaker ones. His alternative approach takes account of uncertainty and thereby moves policy analysis away from incredible certitude and toward honest portrayal of partial knowledge. Manski describes analysis of research on such topics as the effect of the death penalty on homicide, of unemployment insurance on job-seeking, and of preschooling on high school graduation. And he uses other real-world scenarios to illustrate the course he recommends, in which policy makers form reasonable decisions based on partial knowledge of outcomes, and journalists evaluate research claims more closely, with a skeptical eye toward expressions of certitude.
Economists have long sought to learn the effect of a "treatment" on some outcome of interest, just as doctors do with their patients. A central practical objective of research on treatment response is to provide decision makers with information useful in choosing treatments. Often the decision maker is a social planner who must choose treatments for a heterogeneous population--for example, a physician choosing medical treatments for diverse patients or a judge choosing sentences for convicted offenders. But research on treatment response rarely provides all the information that planners would like to have. How then should planners use the available evidence to choose treatments? This book addresses key aspects of this broad question, exploring and partially resolving pervasive problems of identification and statistical inference that arise when studying treatment response and making treatment choices. Charles Manski addresses the treatment-choice problem directly using Abraham Wald's statistical decision theory, taking into account the ambiguity that arises from identification problems under weak but justifiable assumptions. The book unifies and further develops the influential line of research the author began in the late 1990s. It will be a valuable resource to researchers and upper-level graduate students in economics as well as other social sciences, statistics, epidemiology and related areas of public health, and operations research.
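One well-known result in this line of work is the fractional minimax-regret allocation between a status quo treatment and an innovation whose mean outcome is only partially identified. The sketch below is a hypothetical illustration of that idea (the function name and numbers are invented for this example, not code from the book):

    # Hypothetical illustration: minimax-regret allocation between a status quo
    # treatment with known mean outcome m0 and an innovation whose mean outcome
    # is only partially identified, lying somewhere in the interval [lo, hi].
    def minimax_regret_allocation(m0, lo, hi):
        """Return the fraction of the population assigned to the innovation."""
        if hi <= m0:   # the innovation cannot outperform the status quo
            return 0.0
        if lo >= m0:   # the innovation cannot underperform the status quo
            return 1.0
        # Assigning a fraction d to the innovation has worst-case regret
        # max((1 - d) * (hi - m0), d * (m0 - lo)); equalizing the two branches
        # gives the minimax-regret allocation.
        return (hi - m0) / (hi - lo)

    # Example: status quo mean 0.50, innovation mean known only to lie in [0.30, 0.80].
    d = minimax_regret_allocation(0.50, 0.30, 0.80)
    print(f"Assign {d:.2f} of the population to the innovation")  # prints 0.60

The notable feature, relative to rules that act on a single best guess, is that the allocation is interior whenever the identification region straddles the status quo.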
Causal Inferences in Capital Markets Research is an attempt to promote a broad interdisciplinary debate about the notion of causality and the role of causal inference in the social sciences. At the risk of oversimplifying, the issue of causality divides the accounting research community into two polar views: the view that causality is an unattainable ideal for the social sciences and must be given up as a standard, and the view that, on one hand, causality should be the ultimate goal of all scientific endeavors and, on the other hand, theory and causal inference are inextricable. Reflecting and discussing these views was the main motivation for this volume. This volume contains eight articles on three topics: I) Econometrics, II) Accounting, and III) Finance. First, Nancy Cartwright addresses the problem of external validity and the reliability of scientific claims that generalize individual cases. Then, John Rust discusses the role of assumptions in empirical research and the possibility of assumption-free inference. Peter Reiss considers how sensitive instrumental variables are to functional form transformations. Finally, Charles Manski studies the logical issues that affect the interpretation of point predictions. Second, Jeremy Bertomeu, Anne Beyer and Daniel Taylor provide a critical overview of empirical accounting research focusing on the benefits of theory-based estimation, while Qi Chen and Katherine Schipper consider whether all research should be causal and assess the existing gap between theory and empirical research in accounting. Third, R. Jay Kahn and Toni Whited clarify and contrast the notions of identification and causality, whereas Ivo Welch adopts a sociology of science approach to understand the consequences of the researchers' race for discovering novel and surprising results. This volume allows researchers and Ph.D. students in accounting, and in the social sciences in general, to acquire a deeper understanding of the notion of causality and the nature, limits, and scope of empirical research in the social sciences.
This book provides a language and a set of tools for finding bounds on the predictions that social and behavioral scientists can logically make from nonexperimental and experimental data. The economist Charles Manski draws on examples from criminology, demography, epidemiology, social psychology, and sociology as well as economics to illustrate this language and to demonstrate the broad usefulness of the tools. There are many traditional ways to present identification problems in econometrics, sociology, and psychometrics. Some of these are primarily statistical in nature, using concepts such as flat likelihood functions and nondistinct parameter estimates. Manski's strategy is to divorce identification from purely statistical concepts and to present the logic of identification analysis in ways that are accessible to a wide audience in the social and behavioral sciences. In each case, problems are motivated by real examples with real policy importance, the mathematics is kept to a minimum, and the deductions on identifiability are derived in ways that give fresh insights. Manski begins with the conceptual problem of extrapolating predictions from one population to some new population or to the future. He then analyzes in depth the fundamental selection problem that arises whenever a scientist tries to predict the effects of treatments on outcomes. He carefully specifies assumptions and develops his nonparametric methods of bounding predictions. Manski shows how these tools should be used to investigate common problems such as predicting the effect of family structure on children's outcomes and the effect of policing on crime rates. Successive chapters deal with topics ranging from the use of experiments to evaluate social programs, to the use of case-control sampling by epidemiologists studying the association of risk factors and disease, to the use of intentions data by demographers seeking to predict future fertility. The book closes by examining two central identification problems in the analysis of social interactions: the classical simultaneity problem of econometrics and the reflection problem faced in analyses of neighborhood and contextual effects.
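A minimal numerical sketch of the no-assumptions bounds that arise from this selection problem (the data and function below are hypothetical illustrations, not the book's own notation): with a bounded outcome and no assumptions about how treatments were selected, each potential-outcome mean can only be bounded by filling in the unobserved counterfactuals with the extreme values, and bounds on the average treatment effect follow by differencing.

    # Hypothetical illustration of worst-case bounds under the selection problem.
    # Outcomes are assumed to lie in [y_min, y_max]; p is the treated share, and
    # mean_t, mean_c are the observed mean outcomes of the treated and untreated.
    def ate_bounds(mean_t, mean_c, p, y_min=0.0, y_max=1.0):
        # Bounds on E[y(1)]: treated outcomes are observed, untreated counterfactuals are not.
        e1_lo = mean_t * p + y_min * (1 - p)
        e1_hi = mean_t * p + y_max * (1 - p)
        # Bounds on E[y(0)]: untreated outcomes are observed, treated counterfactuals are not.
        e0_lo = mean_c * (1 - p) + y_min * p
        e0_hi = mean_c * (1 - p) + y_max * p
        return e1_lo - e0_hi, e1_hi - e0_lo

    # Example: 40% treated, observed means 0.7 (treated) and 0.5 (untreated).
    lo, hi = ate_bounds(0.7, 0.5, 0.4)
    print(f"The average treatment effect lies in [{lo:.2f}, {hi:.2f}]")  # [-0.42, 0.58]

Note that this bound always has width y_max - y_min and so necessarily covers zero; additional assumptions of the kind the book examines are what allow the region to shrink and, sometimes, to sign the effect.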
This book is a full-scale exposition of Charles Manski's new methodology for analyzing empirical questions in the social sciences. He recommends that researchers first ask what can be learned from data alone, and then ask what can be learned when data are combined with credible weak assumptions. Inferences predicated on weak assumptions, he argues, can achieve wide consensus, while ones that require strong assumptions almost inevitably are subject to sharp disagreements. Building on the foundation laid in the author's "Identification Problems in the Social Sciences" (Harvard, 1995), the book's fifteen chapters are organized in three parts. Part I studies prediction with missing or otherwise incomplete data. Part II concerns the analysis of treatment response, which aims to predict outcomes when alternative treatment rules are applied to a population. Part III studies prediction of choice behavior. Each chapter juxtaposes developments of methodology with empirical or numerical illustrations. The book employs a simple notation and mathematical apparatus, using only basic elements of probability theory.