"A book perfect for this moment" -Katherine M. O'Regan, Former Assistant Secretary, US Department of Housing and Urban Development More than fifty years after the passage of the Fair Housing Act, American cities remain divided along the very same lines that this landmark legislation explicitly outlawed. Keeping Races in Their Places tells the story of these lines-who drew them, why they drew them, where they drew them, and how they continue to circumscribe residents' opportunities to this very day. Weaving together sophisticated statistical analyses of more than a century's worth of data with an engaging, accessible narrative that brings the numbers to life, Keeping Races in Their Places exposes the entrenched effects of redlining on American communities. This one-of-a-kind contribution to the real estate and urban economics literature applies the author's original geographic information systems analyses to historical maps to reveal redlining's causal role in shaping today's cities. Spanning the era from the Great Migration to the Great Recession, Keeping Races in Their Places uncovers the roots of the Black-white wealth gap, the subprime lending crisis, and today's lack of affordable housing in maps created by banks nearly a century ago. Most of all, it offers hope that with the latest scholarly tools we can pinpoint how things went wrong-and what we must do to make them right.
"A book perfect for this moment" -Katherine M. O'Regan, Former Assistant Secretary, US Department of Housing and Urban Development More than fifty years after the passage of the Fair Housing Act, American cities remain divided along the very same lines that this landmark legislation explicitly outlawed. Keeping Races in Their Places tells the story of these lines-who drew them, why they drew them, where they drew them, and how they continue to circumscribe residents' opportunities to this very day. Weaving together sophisticated statistical analyses of more than a century's worth of data with an engaging, accessible narrative that brings the numbers to life, Keeping Races in Their Places exposes the entrenched effects of redlining on American communities. This one-of-a-kind contribution to the real estate and urban economics literature applies the author's original geographic information systems analyses to historical maps to reveal redlining's causal role in shaping today's cities. Spanning the era from the Great Migration to the Great Recession, Keeping Races in Their Places uncovers the roots of the Black-white wealth gap, the subprime lending crisis, and today's lack of affordable housing in maps created by banks nearly a century ago. Most of all, it offers hope that with the latest scholarly tools we can pinpoint how things went wrong-and what we must do to make them right.
Now in its third edition, Essential Econometric Techniques: A Guide to Concepts and Applications is a concise, student-friendly textbook that provides an introductory grounding in econometrics, with an emphasis on the proper application and interpretation of results. Drawing on the author's extensive teaching experience, this book offers intuitive explanations of concepts such as heteroskedasticity and serial correlation, and provides step-by-step overviews of each key topic. This new edition contains more applications, brings in new material, including a dedicated chapter on panel data techniques, and moves the theoretical proofs to appendices. After Chapter 7, students will be able to design and conduct rudimentary econometric research. The subsequent chapters cover multicollinearity, heteroskedasticity, and autocorrelation, followed by techniques for time-series analysis and panel data. Excel data sets for the end-of-chapter problems are available as a digital supplement. A solutions manual is also available for instructors, as well as PowerPoint slides for each chapter. Essential Econometric Techniques shows students how economic hypotheses can be questioned and tested using real-world data, and is the ideal supplementary text for all introductory econometrics courses.
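The blurb above names heteroskedasticity as one of the book's core diagnostic topics. As a hedged illustration of the idea only, and not an example from the textbook (whose supplements are Excel data sets), the sketch below simulates a regression with non-constant error variance and runs a Breusch-Pagan-style check in Python on made-up data.

```python
# Minimal sketch: a Breusch-Pagan-style heteroskedasticity check on simulated data.
# This is an editor-supplied illustration, not material from the textbook.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(1, 10, n)
y = 2.0 + 0.5 * x + rng.normal(0, 0.3 * x)   # error variance grows with x

# OLS fit of y on [1, x]
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Auxiliary regression of squared residuals on X; LM statistic = n * R^2
u2 = resid ** 2
g, *_ = np.linalg.lstsq(X, u2, rcond=None)
fitted = X @ g
r2 = 1 - np.sum((u2 - fitted) ** 2) / np.sum((u2 - u2.mean()) ** 2)
lm_stat = n * r2
p_value = 1 - stats.chi2.cdf(lm_stat, df=1)   # df = regressors excluding the constant
print(f"LM statistic = {lm_stat:.2f}, p-value = {p_value:.4f}")
```

A small p-value from the auxiliary regression indicates that the residual variance depends on the regressor, which is the symptom the book's diagnostics are designed to catch.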
Contains information for using R software with the examples in the textbook Sampling: Design and Analysis, 3rd edition by Sharon L. Lohr.
Applied data-centric social sciences aim to develop both methodology and practical applications of various fields of sciences and businesses with rich data. Specifically, in the social sciences, a vast amount of data on human activities may be useful for understanding collective human nature. In this book, the author introduces several mathematical techniques for handling a huge volume of data and analyzing collective human behavior. The book is constructed from data-oriented investigation, with mathematical methods and expressions used for dealing with data for several specific problems. The fundamental philosophy underlying the book is that both mathematical and physical concepts are determined by the purposes of data analysis. This philosophy is shown throughout exemplar studies of several fields in socio-economic systems. From a data-centric point of view, the author proposes a concept that may change people's minds and cause them to start thinking on the basis of data. Several goals underlie the chapters of the book. The first is to describe mathematical and statistical methods for data analysis, and toward that end the author delineates methods with actual data in each chapter. The second is to find a cyber-physical link between data and data-generating mechanisms, as data are always provided by some kind of data-generating process in the real world. The third goal is to provide an impetus for the concepts and methodology set forth in this book to be applied to socio-economic systems.
This book covers the basics of processing and spectral analysis of monovariate discrete-time signals. The approach is practical, the aim being to acquaint the reader with the indications for and drawbacks of the various methods and to highlight possible misuses. The book is rich in original ideas, visualized in new and illuminating ways, and is structured so that parts can be skipped without loss of continuity. Many examples are included, based on synthetic data and real measurements from the fields of physics, biology, medicine, macroeconomics etc., and a complete set of MATLAB exercises requiring no previous experience of programming is provided. Prior advanced mathematical skills are not needed in order to understand the contents: a good command of basic mathematical analysis is sufficient. Where more advanced mathematical tools are necessary, they are included in an Appendix and presented in an easy-to-follow way. With this book, digital signal processing leaves the domain of engineering to address the needs of scientists and scholars in traditionally less quantitative disciplines, now facing increasing amounts of data.
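The book's own exercises are in MATLAB; purely as a hedged illustration of the kind of spectral estimate it covers, here is a short Python sketch (with synthetic data invented for this example) that computes a raw periodogram of a noisy sinusoid using the FFT.

```python
# Minimal sketch: periodogram of a noisy monovariate discrete-time signal.
# Synthetic data; editor-supplied illustration, not an exercise from the book.
import numpy as np

rng = np.random.default_rng(1)
fs = 100.0                      # sampling frequency in Hz
t = np.arange(0, 10, 1 / fs)    # 10 seconds of samples
x = np.sin(2 * np.pi * 5.0 * t) + 0.5 * rng.normal(size=t.size)

# Periodogram: squared magnitude of the DFT, normalized by N and fs
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
pxx = (np.abs(X) ** 2) / (fs * t.size)

peak = freqs[np.argmax(pxx[1:]) + 1]   # skip the DC bin
print(f"Dominant frequency ~ {peak:.2f} Hz (true value: 5 Hz)")
```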
1. Material on single asset problems, market timing, unconditional and conditional portfolio problems, hedged portfolios.
2. Inference via both Frequentist and Bayesian paradigms.
3. A comprehensive treatment of overoptimism and overfitting of trading strategies.
4. Advice on backtesting strategies.
5. Dozens of examples and hundreds of exercises for self study.
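As a hedged illustration of point 3 above (overoptimism from overfitting), and not code from the book itself, the Python sketch below simulates many "strategies" with zero true edge and shows that the best in-sample Sharpe ratio still looks attractive by luck alone.

```python
# Minimal sketch: selection bias in backtested Sharpe ratios.
# All "strategies" here are pure noise with zero true edge (simulated data).
import numpy as np

rng = np.random.default_rng(2)
n_days, n_strategies = 252, 200
returns = rng.normal(0.0, 0.01, size=(n_strategies, n_days))

# Annualized Sharpe ratio of each strategy
sharpe = returns.mean(axis=1) / returns.std(axis=1, ddof=1) * np.sqrt(252)

print(f"Mean Sharpe across strategies: {sharpe.mean():.2f}")   # close to 0
print(f"Best in-sample Sharpe:         {sharpe.max():.2f}")    # looks 'good' by chance
```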
There isn't currently a book on the market that focuses on multiple hypothesis testing. Can be used on a range of courses in the social and behavioral sciences and the biological sciences, and is also suitable for professional researchers. Includes various examples of the multiple hypotheses method in practice in a variety of fields, including sport and crime.
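As a hedged sketch of the kind of adjustment such a book is about (not code from the book), the example below applies the Bonferroni and Benjamini-Hochberg corrections to a small, made-up vector of p-values in Python.

```python
# Minimal sketch: two standard multiple-testing corrections on toy p-values.
import numpy as np

p = np.array([0.001, 0.008, 0.039, 0.041, 0.20, 0.74])
m = p.size
alpha = 0.05

# Bonferroni: controls the family-wise error rate
bonferroni_reject = p < alpha / m

# Benjamini-Hochberg: controls the false discovery rate
order = np.argsort(p)
ranked = p[order]
bh_threshold = alpha * (np.arange(1, m + 1) / m)
below = ranked <= bh_threshold
k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
bh_reject = np.zeros(m, dtype=bool)
bh_reject[order[:k]] = True

print("Bonferroni rejects:", bonferroni_reject)
print("BH rejects:        ", bh_reject)
```

Bonferroni is the more conservative of the two; Benjamini-Hochberg typically rejects more hypotheses while still controlling the expected share of false discoveries.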
Thoroughly updated throughout, A First Course in Linear Model Theory, Second Edition is an intermediate-level statistics text that fills an important gap by presenting the theory of linear statistical models at a level appropriate for senior undergraduate or first-year graduate students. With an innovative approach, the authors introduce to students the mathematical and statistical concepts and tools that form a foundation for studying the theory and applications of both univariate and multivariate linear models. In addition to adding R functionality, this second edition features three new chapters and several sections on new topics that are extremely relevant to the current research in statistical methodology. Revised or expanded topics include linear fixed, random and mixed effects models, generalized linear models, Bayesian and hierarchical linear models, model selection, multiple comparisons, and regularized and robust regression. New to the Second Edition: Coverage of inference for linear models has been expanded into two chapters. Expanded coverage of multiple comparisons, random and mixed effects models, model selection, and missing data. A new chapter on generalized linear models (Chapter 12). A new section on multivariate linear models in Chapter 13, and expanded coverage of the Bayesian linear models and longitudinal models. A new section on regularized regression in Chapter 14. Detailed data illustrations using R. The authors' fresh approach, methodical presentation, wealth of examples, use of R, and introduction to topics beyond the classical theory set this book apart from other texts on linear models. It forms a refreshing and invaluable first step in students' study of advanced linear models, generalized linear models, nonlinear models, and dynamic models.
This is the perfect (and essential) supplement for all econometrics classes, from a rigorous first undergraduate course, to a first master's, to a PhD course.
"Students of econometrics and their teachers will find this book to be the best introduction to the subject at the graduate and advanced undergraduate level. Starting with least squares regression, Hayashi provides an elegant exposition of all the standard topics of econometrics, including a detailed discussion of stationary and non-stationary time series. The particular strength of the book is the excellent balance between econometric theory and its applications, using GMM as an organizing principle throughout. Each chapter includes a detailed empirical example taken from classic and current applications of econometrics."--Dale Jorgensen, Harvard University ""Econometrics" will be a very useful book for intermediate and advanced graduate courses. It covers the topics with an easy to understand approach while at the same time offering a rigorous analysis. The computer programming tips and problems should also be useful to students. I highly recommend this book for an up-to-date coverage and thoughtful discussion of topics in the methodology and application of econometrics."--Jerry A. Hausman, Massachusetts Institute of Technology ""Econometrics" covers both modern and classic topics without shifting gears. The coverage is quite advanced yet the presentation is simple. Hayashi brings students to the frontier of applied econometric practice through a careful and efficient discussion of modern economic theory. The empirical exercises are very useful. . . . The projects are carefully crafted and have been thoroughly debugged."--Mark W. Watson, Princeton University ""Econometrics" strikes a good balance between technical rigor and clear exposition. . . . The use of empiricalexamples is well done throughout. I very much like the use of old 'classic' examples. It gives students a sense of history--and shows that great empirical econometrics is a matter of having important ideas and good data, not just fancy new methods. . . . The style is just great, informal and engaging."--James H. Stock, John F. Kennedy School of Government, Harvard University
'Overall, the book is highly technical, including full mathematical proofs of the results stated. Potential readers are post-graduate students or researchers in Quantitative Risk Management willing to have a manual with the state-of-the-art on portfolio diversification and risk aggregation with heavy tails, including the fundamental theorems as well as collateral (but most useful) results on majorization and copula theory.' -Quantitative Finance. This book offers a unified approach to the study of crises, large fluctuations, dependence and contagion effects in economics and finance. It covers important topics in statistical modeling and estimation, which combine the notions of copulas and heavy tails - two particularly valuable tools of today's research in economics, finance, econometrics and other fields - in order to provide a new way of thinking about such vital problems as diversification of risk and propagation of crises through financial markets due to contagion phenomena, among others. The aim is to arm today's economists with a toolbox suited for analyzing multivariate data with many outliers and with arbitrary dependence patterns. The methods and topics discussed and used in the book include, in particular, majorization theory, heavy-tailed distributions and copula functions - all applied to study the robustness of economic, financial and statistical models and estimation methods to heavy tails and dependence.
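For readers unfamiliar with the two building blocks named above, here is a hedged, self-contained Python sketch (an editor-supplied illustration, not from the book): it draws from a Gaussian copula and then transforms the margins to heavy-tailed Student-t distributions, so the simulated data are both dependent and fat-tailed.

```python
# Minimal sketch: dependent, heavy-tailed data via a Gaussian copula
# with Student-t margins (simulated; not an example from the book).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 10_000
rho = 0.7
cov = np.array([[1.0, rho], [rho, 1.0]])

# Step 1: correlated standard normals -> uniforms (the copula)
z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
u = stats.norm.cdf(z)

# Step 2: heavy-tailed margins via the Student-t inverse CDF (3 degrees of freedom)
x = stats.t.ppf(u, df=3)

print(f"Sample correlation: {np.corrcoef(x.T)[0, 1]:.2f}")
print(f"Excess kurtosis of first margin: {stats.kurtosis(x[:, 0]):.2f}")
```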
Over the course of his professional life, John Maynard Keynes altered his views from free trade in the classical tradition to restricted trade. At the end of his career, his position on the issue was still not categorically resolved even though the evidence seems to suggest that he moved closer to a system of managed trade. In that model, nations would not leave their foreign trade interests open to the vagaries of the free market, but rather exercise some degree of control over them just as they would their domestic economies. Nevertheless, there is no general agreement among economists as to whether Keynes ended his career in the camp of the free traders or aligned himself with the protectionists. John Maynard Keynes: Free Trader or Protectionist? seeks an answer to this question by analyzing Keynes' own views on this issue, as stated in his major publications, letters, speeches, testimony before government bodies, newspaper articles, participation in conferences, and other sources. Through this detailed review of what Keynes himself had to say on the issue as opposed to what others have alleged, this book strives to make a significant contribution to the resolution of this issue.
Much of our thinking is flawed because it is based on faulty intuition. By using the framework and tools of probability and statistics, we can overcome this to provide solutions to many real-world problems and paradoxes. We show how to do this, and find answers that are frequently very contrary to what we might expect. Along the way, we venture into diverse realms and thought experiments which challenge the way that we see the world. Features:
* An insightful and engaging discussion of some of the key ideas of probabilistic and statistical thinking
* Many classic and novel problems, paradoxes, and puzzles
* An exploration of some of the big questions involving the use of choice and reason in an uncertain world
* The application of probability, statistics, and Bayesian methods to a wide range of subjects, including economics, finance, law, and medicine
* Exercises, references, and links for those wishing to cross-reference or to probe further
* Solutions to exercises at the end of the book
This book should serve as an invaluable and fascinating resource for university, college, and high school students who wish to extend their reading, as well as for teachers and lecturers who want to liven up their courses while retaining academic rigour. It will also appeal to anyone who wishes to develop skills with numbers or has an interest in the many statistical and other paradoxes that permeate our lives. Indeed, anyone studying the sciences, social sciences, or humanities on a formal or informal basis will enjoy and benefit from this book.
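As one concrete example of the "faulty intuition" theme (an editor-supplied sketch, not an exercise from the book), the classic Monty Hall problem can be settled in a few lines of Python simulation.

```python
# Minimal sketch: Monte Carlo check of the Monty Hall problem.
import numpy as np

rng = np.random.default_rng(4)
trials = 100_000

prize = rng.integers(0, 3, size=trials)      # door hiding the prize
choice = rng.integers(0, 3, size=trials)     # contestant's first pick

# The host always opens a goat door, so a switcher wins exactly when
# the first pick was wrong.
stay_wins = np.mean(prize == choice)
switch_wins = 1 - stay_wins

print(f"P(win | stay)   = {stay_wins:.3f}")    # about 1/3
print(f"P(win | switch) = {switch_wins:.3f}")  # about 2/3
```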
* Furnishes a thorough introduction and detailed information about the linear regression model, including how to understand and interpret its results, test assumptions, and adapt the model when assumptions are not satisfied.
* Uses numerous graphs in R to illustrate the model's results, assumptions, and other features.
* Does not assume a background in calculus or linear algebra; rather, an introductory statistics course and familiarity with elementary algebra are sufficient.
* Provides many examples using real world datasets relevant to various academic disciplines.
* Fully integrates the R software environment in its numerous examples.
Upon the backdrop of impressive progress made by the Indian economy during the last two decades after the large-scale economic reforms in the early 1990s, this book evaluates the performance of the economy on some income and non-income dimensions of development at the national, state and sectoral levels. It examines regional economic growth and inequality in income originating from agriculture, industry and services. In view of the importance of the agricultural sector, despite its declining share in gross domestic product, it evaluates the performance of agricultural production and the impact of agricultural reforms on spatial integration of food grain markets. It studies rural poverty, analyzing the trend in employment, the trickle-down process and the inclusiveness of growth in rural India. It also evaluates the impact of microfinance, as an instrument of financial inclusion, on the socio-economic conditions of rural households. Lastly, it examines the relative performance of fifteen major states of India in terms of education, health and human development. An important feature of the book is that it approaches these issues by rigorously applying advanced econometric methods, focusing primarily on their regional disparities during the post-reform period vis-a-vis the pre-reform period. It offers important results to guide policies for future development.
The development of economics changed dramatically during the twentieth century with the emergence of econometrics, macroeconomics and a more scientific approach in general. One of the key individuals in the transformation of economics was Ragnar Frisch, professor at the University of Oslo and the first Nobel Laureate in economics in 1969. He was a co-founder of the Econometric Society in 1930 (after having coined the word econometrics in 1926) and edited the journal Econometrica for twenty-two years. The discovery of the manuscripts of a series of eight lectures given by Frisch at the Henri Poincaré Institute in March-April 1933 on The Problems and Methods of Econometrics will enable economists to more fully understand his overall vision of econometrics. This book is a rare exhibition of Frisch's overview on econometrics and is published here in English for the first time. Edited and with an introduction by Olav Bjerkholt and Ariane Dupont-Kieffer, Frisch's eight lectures provide an accessible and astute discussion of econometric issues from philosophical foundations to practical procedures. Covering the development of economics in the twentieth century and Ragnar Frisch's broader vision of economic science in general and econometrics in particular, this book will appeal to anyone with an interest in the history of economics and econometrics.
This volume, edited by Jeffrey Racine, Liangjun Su, and Aman Ullah, contains the latest research on nonparametric and semiparametric econometrics and statistics. These data-driven models seek to replace the "classical" parametric models of the past, which were rigid and often linear. Chapters by leading international econometricians and statisticians highlight the interface between econometrics and statistical methods for nonparametric and semiparametric procedures. They provide a balanced view of new developments in the analysis and modeling of applied sciences with cross-section, time series, panel, and spatial data sets. The major topics of the volume include: the methodology of semiparametric models and special regressor methods; inverse, ill-posed, and well-posed problems; different methodologies related to additive models; sieve regression estimators, nonparametric and semiparametric regression models, and the true error of competing approximate models; support vector machines and their modeling of default probability; series estimation of stochastic processes and some of their applications in econometrics; identification, estimation, and specification problems in a class of semilinear time series models; nonparametric and semiparametric techniques applied to nonstationary or near nonstationary variables; the estimation of a set of regression equations; and a new approach to the analysis of nonparametric models with exogenous treatment assignment.
Advanced and Multivariate Statistical Methods, Seventh Edition provides conceptual and practical information regarding multivariate statistical techniques to students who do not necessarily need technical and/or mathematical expertise in these methods. This text has three main purposes. The first purpose is to facilitate conceptual understanding of multivariate statistical methods by limiting the technical nature of the discussion of those concepts and focusing on their practical applications. The second purpose is to provide students with the skills necessary to interpret research articles that have employed multivariate statistical techniques. Finally, the third purpose of AMSM is to prepare graduate students to apply multivariate statistical methods to the analysis of their own quantitative data or that of their institutions. New to the Seventh Edition All references to SPSS have been updated to Version 27.0 of the software. A brief discussion of practical significance has been added to Chapter 1. New data sets have now been incorporated into the book and are used extensively in the SPSS examples. All the SPSS data sets utilized in this edition are available for download via the companion website. Additional resources on this site include several video tutorials/walk-throughs of the SPSS procedures. These "how-to" videos run approximately 5-10 minutes in length. Advanced and Multivariate Statistical Methods was written for use by students taking a multivariate statistics course as part of a graduate degree program, for example in psychology, education, sociology, criminal justice, social work, mass communication, and nursing.
The behaviour of commodity prices never ceases to amaze economists, financial analysts, industry experts, and policymakers. Unexpected swings in commodity prices used to occur infrequently but have now become a permanent feature of global commodity markets. This book is about modelling commodity price shocks. It is intended to provide insights into the theoretical, conceptual, and empirical modelling of the underlying causes of global commodity price shocks. Three main objectives motivated the writing of this book. First, to provide a variety of modelling frameworks for documenting the frequency and intensity of commodity price shocks. Second, to evaluate existing approaches used for forecasting large movements in future commodity prices. Third, to cover a wide range and aspects of global commodities including currencies, rare, hard, lustrous transition metals, agricultural commodities, energy, and health pandemics. Some attempts have already been made towards modelling commodity price shocks. However, most tend to narrowly focus on a subset of commodity markets, i.e., agricultural commodities market and/or the energy market. In this book, the author moves the needle forward by operationalizing different models, which allow researchers to identify the underlying causes and effects of commodity price shocks. Readers also learn about different commodity price forecasting models. The author presents the topics to readers assuming little prior or specialist knowledge. Thus, the book is accessible to industry analysts, researchers, undergraduate and graduate students in economics and financial economics, academic and professional economists, investors, and financial professionals working in different sectors of the commodity markets. Another advantage of the book's approach is that readers are not only exposed to several innovative modelling techniques to add to their modelling toolbox but are also exposed to diverse empirical applications of the techniques presented.
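As a hedged sketch of the simplest model in that forecasting toolbox (the author's own models are more elaborate), here is an AR(1) fitted by least squares to a simulated price-like series in Python, with a one-step-ahead forecast.

```python
# Minimal sketch: AR(1) estimation and a one-step-ahead forecast (simulated series).
# Editor-supplied illustration, not the book's code or data.
import numpy as np

rng = np.random.default_rng(5)
n, phi_true, c_true = 500, 0.9, 5.0
y = np.empty(n)
y[0] = c_true / (1 - phi_true)          # start at the unconditional mean
for t in range(1, n):
    y[t] = c_true + phi_true * y[t - 1] + rng.normal(0, 1.0)

# OLS of y_t on [1, y_{t-1}]
X = np.column_stack([np.ones(n - 1), y[:-1]])
(c_hat, phi_hat), *_ = np.linalg.lstsq(X, y[1:], rcond=None)
forecast = c_hat + phi_hat * y[-1]

print(f"Estimated intercept {c_hat:.2f}, persistence {phi_hat:.2f}")
print(f"One-step-ahead forecast: {forecast:.2f}")
```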
Explains modern statistical disclosure control (SDC) techniques for data stewards and develops tools to implement them. Explains the logic behind modern privacy protections for researchers and how they may use publicly released data to generate valid statistical inferences, as well as the limitations imposed by SDC techniques.
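Two of the simplest perturbations in the SDC family are top-coding and noise addition. The Python sketch below is a generic, editor-supplied illustration on toy data, not the book's method; real SDC pipelines are considerably more involved.

```python
# Minimal sketch: two basic statistical disclosure control (SDC) perturbations.
# Toy data only; real disclosure-control workflows are far more elaborate.
import numpy as np

rng = np.random.default_rng(6)
income = np.array([28_000, 41_500, 52_000, 67_000, 89_000, 1_250_000], dtype=float)

# Top-coding: cap extreme values so outliers cannot identify individuals
cap = 150_000
top_coded = np.minimum(income, cap)

# Noise addition: add zero-mean noise proportional to each value
noisy = income * (1 + rng.normal(0, 0.05, size=income.size))

print("Top-coded:", top_coded)
print("Noisy:    ", noisy.round(0))
```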
This book investigates why economics makes less visible progress over time than scientific fields with a strong practical component, where interactions with physical technologies play a key role. The thesis of the book is that the main impediment to progress in economics is "false feedback", which it defines as the false result of an empirical study, such as empirical evidence produced by a statistical model that violates some of its assumptions. In contrast to scientific fields that work with physical technologies, false feedback is hard to recognize in economics. Economists thus have difficulties knowing where they stand in their inquiries, and false feedback will regularly lead them in the wrong directions. The book searches for the reasons behind the emergence of false feedback. It thereby contributes to a wider discussion in the field of metascience about the practices of researchers when pursuing their daily business. The book thus offers a case study of metascience for the field of empirical economics. The main strength of the book is the numerous smaller insights it provides throughout. The book delves into deep discussions of various theoretical issues, which it illustrates by many applied examples and a wide array of references, especially to philosophy of science. The book puts flesh on complicated and often abstract subjects, particularly when it comes to controversial topics such as p-hacking. The reader gains an understanding of the main challenges present in empirical economic research and also the possible solutions. The main audience of the book is applied researchers working with data and, in particular, those who have found certain aspects of their research practice problematic.
The book provides an integrated approach to risk sharing, risk spreading and efficient regulation through principal agent models. It emphasizes the role of information asymmetry and risk sharing in contracts as an alternative to transaction cost considerations. It examines how contracting, as an institutional mechanism to conduct transactions, spreads risks while attempting consolidation. It further highlights the shifting emphasis in contracts from Coasian transaction cost saving to risk sharing and shows how it creates difficulties associated with risk spreading, and emphasizes the need for efficient regulation of contracts at various levels. Each of the chapters is structured using a principal agent model, and all chapters incorporate adverse selection (and exogenous randomness) as a result of information asymmetry, as well as moral hazard (and endogenous randomness) due to the self-interest-seeking behavior on the part of the participants.
You may like...
Operations And Supply Chain Management - David Collier, James Evans (Hardcover)
Design and Analysis of Time Series… - Richard McCleary, David McDowall, … (Hardcover) - R3,491 (Discovery Miles 34 910)
Handbook of Experimental Game Theory - C. M. Capra, Rachel T. A. Croson, … (Hardcover) - R6,736 (Discovery Miles 67 360)