This textbook articulates the elements of good craftsmanship in applied microeconomic research and demonstrates its effectiveness with multiple examples from economic literature. Empirical economic research is a combination of several elements: theory, econometric modeling, institutional analysis, data handling, estimation, inference, and interpretation. A large body of work demonstrates how to do many of these things correctly, but to date, there is no central resource available which articulates the essential principles involved and ties them together. In showing how these research elements can be best blended to maximize the credibility and impact of the findings that result, this book presents a basic framework for thinking about craftsmanship. This framework lays out the proper context within which the researcher should view the analysis, involving institutional factors, complementary policy instruments, and competing hypotheses that can influence or explain the phenomena being studied. It also emphasizes the interconnectedness of theory, econometric modeling, data, estimation, inference, and interpretation, arguing that good craftsmanship requires strong links between each. Once the framework has been set, the book devotes a chapter to each element of the analysis, providing robust instruction for each case. Assuming a working knowledge of econometrics, this text is aimed at graduate students and early-career academic researchers as well as empirical economists looking to improve their technique.
This book presents statistical methods for analysis of the duration of events. The primary focus is on models for single-spell data, events in which individual agents are observed for a single duration. Some attention is also given to multiple-spell data. The first part of the book covers model specification, including both structural and reduced form models and models with and without neglected heterogeneity. The book next deals with likelihood based inference about such models, with sections on full and semiparametric specification. A final section treats graphical and numerical methods of specification testing. This is the first published exposition of current econometric methods for the study of duration data.
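As a minimal illustration of the single-spell setting described above, the following Python sketch (not from the book; simulated data and an arbitrary exponential specification) fits a duration model with right-censoring by maximum likelihood.

```python
# Hedged sketch: maximum-likelihood fit of a single-spell exponential duration
# model with right-censoring. Illustrative only; the data and rate are simulated.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
true_rate = 0.5
durations = rng.exponential(1 / true_rate, size=500)   # latent spell lengths
censor_at = rng.exponential(4.0, size=500)             # independent censoring times
t = np.minimum(durations, censor_at)                   # observed durations
uncensored = durations <= censor_at                    # completion indicator

def neg_log_lik(rate):
    # Completed spells contribute the density f(t) = rate * exp(-rate * t);
    # censored spells contribute the survivor function S(t) = exp(-rate * t).
    return -(uncensored.sum() * np.log(rate) - rate * t.sum())

fit = minimize_scalar(neg_log_lik, bounds=(1e-6, 10.0), method="bounded")
print(f"estimated exit rate: {fit.x:.3f} (true value {true_rate})")
```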
Standard methods for estimating empirical models in economics and many other fields rely on strong assumptions about functional forms and the distributions of unobserved random variables. Often, it is assumed that functions of interest are linear or that unobserved random variables are normally distributed. Such assumptions simplify estimation and statistical inference but are rarely justified by economic theory or other a priori considerations. Inference based on convenient but incorrect assumptions about functional forms and distributions can be highly misleading. Nonparametric and semiparametric statistical methods provide a way to reduce the strength of the assumptions required for estimation and inference, thereby reducing the opportunities for obtaining misleading results. These methods are applicable to a wide variety of estimation problems in empirical economics and other fields, and they are being used in applied research with increasing frequency. The literature on nonparametric and semiparametric estimation is large and highly technical. This book presents the main ideas underlying a variety of nonparametric and semiparametric methods. It is accessible to graduate students and applied researchers who are familiar with econometric and statistical theory at the level taught in graduate-level courses in leading universities. The book emphasizes ideas instead of technical details and provides as intuitive an exposition as possible. Empirical examples illustrate the methods that are presented. This book updates and greatly expands the author's previous book on semiparametric methods in econometrics. Nearly half of the material is new.
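A hedged sketch of one such method follows: a Nadaraya-Watson kernel regression on simulated data, written in Python for illustration. The bandwidth and the data-generating process are assumptions, not material from the book.

```python
# Hedged sketch: Nadaraya-Watson kernel regression, one nonparametric estimator
# of the kind discussed here. Simulated data; the bandwidth h is chosen by hand.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 3, size=400)
y = np.sin(2 * x) + 0.3 * rng.standard_normal(400)   # unknown nonlinear regression

def kernel_regression(x0, x, y, h=0.2):
    # Gaussian kernel weights: observations near x0 dominate the local average.
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

grid = np.linspace(0, 3, 7)
print(np.round(kernel_regression(grid, x, y), 2))     # estimate of E[y | x] on a grid
print(np.round(np.sin(2 * grid), 2))                  # true regression function
```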
Experimental methods in economics respond to circumstances that are not completely dictated by accepted theory or outstanding problems. While the field of economics makes sharp distinctions and produces precise theory, the work of experimental economics sometimes appears blurred and may produce results that range from strong support to little or only partial support for the relevant theory.
Designed to promote students' understanding of econometrics and to build a more operational knowledge of economics through a meaningful combination of words, symbols and ideas. Each chapter commences in the way economists begin new empirical projects--with a question and an economic model--then proceeds to develop a statistical model, select an estimator and outline inference procedures. Contains numerous problems, experimental exercises and case studies.
Using data from the World Values Survey, this book sheds light on the link between happiness and the social group to which one belongs. The work is based on a rigorous statistical analysis of differences in the probability of happiness and life satisfaction between the predominant social group and subordinate groups. The cases of India and South Africa receive deep attention in dedicated chapters on caste and race, with other chapters considering issues such as cultural bias, religion, patriarchy, and gender. An additional chapter offers a global perspective. On top of this, the longitudinal nature of the data facilitates an examination of how world happiness has evolved between 1994 and 2014. This book will be a valuable reference for advanced students, scholars and policymakers involved in development economics, well-being, development geography, and sociology.
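For readers unfamiliar with how such group differences in the probability of happiness are typically estimated, the following illustrative Python sketch fits a logit of a binary happiness indicator on a group dummy; the data are simulated, not drawn from the World Values Survey.

```python
# Hedged sketch: the kind of comparison described above, as a logit of a binary
# happiness indicator on a subordinate-group dummy. Data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2000
subordinate = rng.integers(0, 2, n)                 # 1 = subordinate group
p_happy = 0.70 - 0.10 * subordinate                 # built-in probability gap
happy = rng.binomial(1, p_happy)

X = sm.add_constant(subordinate.astype(float))
logit = sm.Logit(happy, X).fit(disp=False)
print(logit.params)                                 # negative slope -> lower odds of happiness
```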
This proceedings volume presents the latest scientific research and trends in experimental economics, with particular focus on neuroeconomics. Derived from the 2016 Computational Methods in Experimental Economics (CMEE) conference held in Szczecin, Poland, this book features research and analysis of novel computational methods in neuroeconomics. Neuroeconomics is an interdisciplinary field that combines neuroscience, psychology and economics to build a comprehensive theory of decision making. At its core, neuroeconomics analyzes the decision-making process not only in terms of external conditions or psychological aspects, but also from the neuronal point of view by examining the cerebral conditions of decision making. The application of IT enhances the possibilities of conducting such analyses. Such studies are now performed by software that provides interaction among all the participants and possibilities to register their reactions more accurately. This book examines some of these applications and methods. Featuring contributions on both theory and application, this book is of interest to researchers, students, academics and professionals interested in experimental economics, neuroeconomics and behavioral economics.
The contents of this volume comprise the proceedings of the conference, "Equilibrium theory and applications." Some of the recent developments in general equilibrium theory in the perspective of actual and potential applications are presented. The conference was organized in honor of Jacques Drèze on the occasion of his sixtieth birthday. Held at C.O.R.E., it was also the unanimous recognition, stressed by Gérard Debreu in his Address, of his role as "the architect and builder" of the Center for Operations Research and Econometrics. An introductory address by Gérard Debreu comprises Part 1 of the volume. The rest of the volume is divided into four parts spanning the scope of the conference. Part 2 is on incomplete markets, increasing returns, and information, Part 3 on equilibrium and dynamics, Part 4 on employment, imperfect competition, and macroeconomics, and Part 5 on applied general equilibrium models.
Since the global financial crisis began in 2008-2009, there has been a strong decline in financial markets and investment, and significant economic recession for most developed and emerging economies. Accordingly, new forms of alternative finance, management, control, accounting, trading and investment are being sought. Alternative finance presents challenges intended to stimulate investment and promote economic growth and development, as well as provide a return on investment during turbulent times. This volume aims to provide the reader with a comprehensive understanding of alternative finance in its various forms. It addresses the impact of the financial crisis and the failure of monetary and financial institutions to manage financial markets and handle the recent downturn. It also presents and discusses new research findings associated with alternative forms of investment and finance, and their economic and political implications.
This collection of papers delivered at the Fifth International Symposium in Economic Theory and Econometrics in 1988 is devoted to the estimation and testing of models that impose relatively weak restrictions on the stochastic behaviour of data. Particularly in highly non-linear models, empirical results are very sensitive to the choice of the parametric form of the distribution of the observable variables, and often nonparametric and semiparametric models are a preferable alternative. Methods and applications that do not require strong parametric assumptions for their validity, that are based on kernels and on series expansions, and methods for independent and dependent observations are investigated and developed in these essays by renowned econometricians.
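As an illustration of the series-expansion approach mentioned above, here is a short Python sketch of a polynomial sieve regression on simulated data; the degree and the data-generating process are arbitrary choices, not taken from the volume.

```python
# Hedged sketch: a series-expansion (polynomial sieve) regression, one of the
# kernel- and series-based approaches discussed in these essays. Simulated data.
import numpy as np

rng = np.random.default_rng(10)
x = rng.uniform(-1, 1, 500)
y = np.exp(x) * np.sin(3 * x) + 0.2 * rng.standard_normal(500)

degree = 6                                          # number of series terms
basis = np.vander(x, degree + 1, increasing=True)   # 1, x, x^2, ..., x^6
coef, *_ = np.linalg.lstsq(basis, y, rcond=None)

grid = np.array([-0.5, 0.0, 0.5])
fitted = np.vander(grid, degree + 1, increasing=True) @ coef
print(np.round(fitted, 2))                          # series estimate on a small grid
print(np.round(np.exp(grid) * np.sin(3 * grid), 2)) # true regression function
```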
This book provides an introduction to the use of statistical concepts and methods to model and analyze financial data. The ten chapters of the book fall naturally into three sections. Chapters 1 to 3 cover some basic concepts of finance, focusing on the properties of returns on an asset. Chapters 4 through 6 cover aspects of portfolio theory and the methods of estimation needed to implement that theory. The remainder of the book, Chapters 7 through 10, discusses several models for financial data, along with the implications of those models for portfolio theory and for understanding the properties of return data. The audience for the book is students majoring in Statistics and Economics as well as in quantitative fields such as Mathematics and Engineering. Readers are assumed to have some background in statistical methods along with courses in multivariate calculus and linear algebra.
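The following Python sketch (not from the book, which does not prescribe a language) illustrates the basic estimation step behind the portfolio chapters: estimating a covariance matrix from simulated returns and computing global minimum-variance weights.

```python
# Hedged sketch: estimate moments from simulated asset returns and compute the
# closed-form global minimum-variance portfolio. Illustrative numbers only.
import numpy as np

rng = np.random.default_rng(3)
mean = np.array([0.05, 0.08, 0.12])
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
returns = rng.multivariate_normal(mean, cov, size=1000)  # stand-in for return data

sigma_hat = np.cov(returns, rowvar=False)                # estimated covariance matrix
ones = np.ones(3)
w = np.linalg.solve(sigma_hat, ones)
w /= w.sum()                                             # minimum-variance weights
print(np.round(w, 3), "portfolio variance:", round(w @ sigma_hat @ w, 4))
```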
The 2008 credit crisis started with the failure of one large bank: Lehman Brothers. Since then the focus of both politicians and regulators has been on stabilising the economy and preventing future financial instability. At this juncture, we are at the last stage of future-proofing the financial sector by raising capital requirements and tightening financial regulation. Now the policy agenda needs to concentrate on transforming the banking sector into an engine for growth. Reviving competition in the banking sector after the state interventions of the past years is a key step in this process. This book introduces and explains a relatively new concept in competition measurement: the performance-conduct-structure (PCS) indicator. The key idea behind this measure is that a firm's efficiency is more highly rewarded in terms of market share and profit, the stronger the competitive pressure is. The book begins by explaining the financial market's fundamental obstacles to competition and presents a brief survey of the complex relationship between financial stability and competition. The theoretical contributions of Hay and Liu and of Boone provide the theoretical underpinning for the PCS indicator, while its application to banking and insurance illustrates its empirical qualities. Finally, this book presents a systematic comparison between the results of this approach and (all) existing methods as applied to 46 countries, over the same sample period. This book presents a comprehensive overview of the knowns and unknowns of financial sector competition for commercial and central bankers, policy-makers, supervisors and academics alike.
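The PCS indicator itself is developed in the book; as a rough illustration of the underlying idea that efficiency is rewarded more strongly under fiercer competition, the following Python sketch estimates a Boone-style slope of (log) market share on (log) marginal cost using simulated bank data.

```python
# Hedged sketch of the idea behind efficiency-based competition measures such as
# the Boone indicator cited above (not the book's PCS indicator itself): regress
# log market share on log marginal cost; a more negative slope signals stronger
# competition. Simulated bank-level data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 300
log_mc = rng.normal(0.0, 0.3, n)                     # banks' (log) marginal cost
beta = -2.0                                          # more negative = fiercer competition
log_share = beta * log_mc + rng.normal(0.0, 0.5, n)  # efficient banks gain share

ols = sm.OLS(log_share, sm.add_constant(log_mc)).fit()
print("estimated competition slope:", round(ols.params[1], 2))
```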
This book is about learning from data using the Generalized Additive Models for Location, Scale and Shape (GAMLSS). GAMLSS extends the Generalized Linear Models (GLMs) and Generalized Additive Models (GAMs) to accommodate large complex datasets, which are increasingly prevalent. In particular, the GAMLSS statistical framework enables flexible regression and smoothing models to be fitted to the data. The GAMLSS model assumes that the response variable has any parametric (continuous, discrete or mixed) distribution which might be heavy- or light-tailed, and positively or negatively skewed. In addition, all the parameters of the distribution (location, scale, shape) can be modelled as linear or smooth functions of explanatory variables. Key Features: Provides a broad overview of flexible regression and smoothing techniques to learn from data whilst also focusing on the practical application of methodology using GAMLSS software in R. Includes a comprehensive collection of real data examples, which reflect the range of problems addressed by GAMLSS models and provide a practical illustration of the process of using flexible GAMLSS models for statistical learning. R code integrated into the text for ease of understanding and replication. Supplemented by a website with code, data and extra materials. This book aims to help readers understand how to learn from data encountered in many fields. It will be useful for practitioners and researchers who wish to understand and use the GAMLSS models to learn from data and also for students who wish to learn GAMLSS through practical examples.
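The book's examples use the GAMLSS software in R; the following Python sketch only illustrates the core idea on simulated data, modelling both the location and the scale of a normal response as functions of a covariate and fitting them jointly by maximum likelihood.

```python
# Hedged sketch of the core GAMLSS idea (not the R package itself): both the
# location and the scale of the response depend on a covariate and are fitted
# jointly by maximum likelihood. Simulated heteroskedastic data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(5)
x = rng.uniform(0, 1, 500)
y = 1.0 + 2.0 * x + np.exp(-1.0 + 1.5 * x) * rng.standard_normal(500)

def neg_log_lik(theta):
    b0, b1, g0, g1 = theta
    mu = b0 + b1 * x                 # location depends on x
    sigma = np.exp(g0 + g1 * x)      # scale depends on x (log link keeps it positive)
    return -norm.logpdf(y, loc=mu, scale=sigma).sum()

fit = minimize(neg_log_lik, x0=np.zeros(4), method="BFGS")
print(np.round(fit.x, 2))            # roughly recovers (1.0, 2.0, -1.0, 1.5)
```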
This book presents estimates of the sources of economic growth in Canada. The experimental measures account for the reproducibility of capital inputs in an input-output framework and show that advances in technology are more important for economic growth than previously estimated. Traditional measures of multifactor productivity advance are also presented. Extensive comparisons relate the two approaches to each other and to labour productivity. The book will be of interest to macroeconomists studying economic growth, capital accumulation, technical advance, growth accounting, and input-output analysis.
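For orientation, the traditional growth-accounting arithmetic mentioned above looks roughly like the following Python sketch; the growth rates and the capital share are illustrative numbers, not estimates from the book.

```python
# Hedged sketch of standard growth accounting (not this book's experimental
# input-output measures): multifactor productivity growth is the residual of
# output growth after share-weighted input growth. Illustrative numbers only.
output_growth = 0.030                      # 3.0% GDP growth
capital_growth, labour_growth = 0.040, 0.010
capital_share = 0.35                       # income share of capital

input_contribution = capital_share * capital_growth + (1 - capital_share) * labour_growth
mfp_growth = output_growth - input_contribution     # the Solow residual
print(f"multifactor productivity growth: {mfp_growth:.3%}")
```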
This book addresses the disparities that arise when measuring and modeling societal behavior and progress across the social sciences. It looks at why and how different disciplines and even researchers can use the same data and yet come to different conclusions about equality of opportunity, economic and social mobility, poverty and polarization, and conflict and segregation. Because societal behavior and progress exist only in the context of other key aspects, modeling becomes exponentially more complex as more of these aspects are factored into considerations. The content of this book transcends disciplinary boundaries, providing valuable information on measuring and modeling to economists, sociologists, and political scientists who are interested in data-based analysis of pressing social issues.
The oil and gas industry applies several techniques for assessing and mitigating the risks that are inherent in its operations. In this context, the application of Bayesian Networks (BNs) to risk assessment offers a different probabilistic version of causal reasoning. Introducing the probabilistic nature of hazards, conditional probability and Bayesian thinking, the book discusses how the cause and effect of process hazards can be modelled using BNs and how large BNs can be developed from basic building blocks. The focus is on the development of BNs for typical equipment in the industry, including accident case studies and their use alongside other conventional risk assessment methods. Aimed at professionals in the oil and gas industry, safety engineering and risk assessment, this book brings together the basics of Bayesian theory, Bayesian Networks and their application to process safety hazards and risk assessment in the oil and gas industry; presents a systematic sequence of steps for setting up a model, populating it with data and simulating it for practical cases; includes a comprehensive list of sources of failure data and tips on modelling and simulating large and complex networks; presents modelling and simulation of loss of containment for actual equipment such as separators, storage tanks, pipelines and compressors, together with the associated risk assessments; and discusses case studies that demonstrate the practicability of using Bayesian Networks in routine risk assessments.
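A minimal illustration of the Bayesian reasoning involved: the two-node sketch below (a hypothetical Leak -> GasAlarm network with made-up probabilities, written in Python) updates the probability of a leak after an alarm via Bayes' rule.

```python
# Hedged sketch: Bayesian updating on a two-node network (Leak -> GasAlarm), the
# building block behind the larger process-safety networks described above.
# Probabilities are illustrative, not real failure data.
p_leak = 0.01                                # prior probability of a leak
p_alarm_given_leak = 0.95                    # detector sensitivity
p_alarm_given_no_leak = 0.02                 # false-alarm rate

p_alarm = p_alarm_given_leak * p_leak + p_alarm_given_no_leak * (1 - p_leak)
p_leak_given_alarm = p_alarm_given_leak * p_leak / p_alarm   # Bayes' rule
print(f"P(leak | alarm) = {p_leak_given_alarm:.3f}")
```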
To fully function in today's global real estate industry, students and professionals increasingly need to understand how to implement essential and cutting-edge quantitative techniques. This book presents an easy-to-read guide to applying quantitative analysis in real estate aimed at non-cognate undergraduate and masters students, and meets the requirements of modern professional practice. Through case studies and examples illustrating applications using data sourced from dedicated real estate information providers and major firms in the industry, the book provides an introduction to the foundations underlying statistical data analysis, common data manipulations and understanding descriptive statistics, before gradually building up to more advanced quantitative analysis, modelling and forecasting of real estate markets. Our examples and case studies within the chapters have been specifically compiled for this book and explicitly designed to help the reader acquire a better understanding of the quantitative methods addressed in each chapter. Our objective is to equip readers with the skills needed to confidently carry out their own quantitative analysis and be able to interpret empirical results from academic work and practitioner studies in the field of real estate and in other asset classes. Both undergraduate and masters level students, as well as real estate analysts in the professions, will find this book to be essential reading.
Machine learning (ML) is progressively reshaping the fields of quantitative finance and algorithmic trading. ML tools are increasingly adopted by hedge funds and asset managers, notably for alpha signal generation and stock selection. The technicality of the subject can make it hard for non-specialists to join the bandwagon, as the jargon and coding requirements may seem out of reach. Machine Learning for Factor Investing: R Version bridges this gap. It provides a comprehensive tour of modern ML-based investment strategies that rely on firm characteristics. The book covers a wide array of subjects which range from economic rationales to rigorous portfolio back-testing and encompass both data processing and model interpretability. Common supervised learning algorithms such as tree models and neural networks are explained in the context of style investing, and the reader can also dig into more complex techniques like autoencoders for asset returns, Bayesian additive trees, and causal models. All topics are illustrated with self-contained R code samples and snippets that are applied to a large public dataset that contains over 90 predictors. The material, along with the content of the book, is available online so that readers can reproduce and enhance the examples at their convenience. If you have even a basic knowledge of quantitative finance, this combination of theoretical concepts and practical illustrations will help you learn quickly and deepen your financial and technical expertise.
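The book's code samples are in R; the following analogous Python sketch, on simulated data, shows the kind of supervised learning step involved: a tree-based model mapping firm characteristics to future returns.

```python
# Hedged sketch (Python, not the book's R code): a tree-based model mapping firm
# characteristics to future returns on simulated data with a simple train/test split.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(6)
n_firms, n_chars = 1500, 10
X = rng.standard_normal((n_firms, n_chars))              # firm characteristics (size, value, ...)
future_ret = 0.02 * X[:, 0] - 0.01 * X[:, 1] ** 2 + 0.05 * rng.standard_normal(n_firms)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:1000], future_ret[:1000])                   # train on the first 1000 firms
print("out-of-sample R^2:", round(model.score(X[1000:], future_ret[1000:]), 3))
```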
Praise for the first edition: [This book] reflects the extensive experience and significant contributions of the author to non-linear and non-Gaussian modeling. ... [It] is a valuable book, especially with its broad and accessible introduction of models in the state-space framework. -Statistics in Medicine What distinguishes this book from comparable introductory texts is the use of state-space modeling. Along with this come a number of valuable tools for recursive filtering and smoothing, including the Kalman filter, as well as non-Gaussian and sequential Monte Carlo filters. -MAA Reviews Introduction to Time Series Modeling with Applications in R, Second Edition covers numerous stationary and nonstationary time series models and tools for estimating and utilizing them. The goal of this book is to enable readers to build their own models to understand, predict and master time series. The second edition makes it possible for readers to reproduce examples in this book by using the freely available R package TSSS to perform computations for their own real-world time series problems. This book employs the state-space model as a generic tool for time series modeling and presents the Kalman filter, the non-Gaussian filter and the particle filter as convenient tools for recursive estimation for state-space models. Further, it also takes a unified approach based on the entropy maximization principle and employs various methods of parameter estimation and model selection, including the least squares method, the maximum likelihood method, recursive estimation for state-space models and model selection by AIC. Along with the standard stationary time series models, such as the AR and ARMA models, the book also introduces nonstationary time series models such as the locally stationary AR model, the trend model, the seasonal adjustment model, the time-varying coefficient AR model and nonlinear non-Gaussian state-space models. About the Author: Genshiro Kitagawa is a project professor at the University of Tokyo, the former Director-General of the Institute of Statistical Mathematics, and the former President of the Research Organization of Information and Systems.
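As a taste of the state-space machinery, the following Python sketch (simulated data; not the TSSS package) runs a Kalman filter for the simplest local-level model.

```python
# Hedged sketch: a Kalman filter for the local-level state-space model, the kind
# of recursion the book develops in far greater generality. Simulated data.
import numpy as np

rng = np.random.default_rng(7)
n, q, r = 200, 0.1, 1.0                       # state and observation noise variances
state = np.cumsum(np.sqrt(q) * rng.standard_normal(n))   # random-walk level
y = state + np.sqrt(r) * rng.standard_normal(n)          # noisy observations

a, p = 0.0, 10.0                              # initial state mean and variance
filtered = []
for obs in y:
    p = p + q                                 # predict: variance grows by state noise
    k = p / (p + r)                           # Kalman gain
    a = a + k * (obs - a)                     # update with the new observation
    p = (1 - k) * p
    filtered.append(a)

print("last true level:", round(state[-1], 2), "filtered estimate:", round(filtered[-1], 2))
```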
Financial crises often transmit across geographical borders and different asset classes. Modeling these interactions is empirically challenging, and many of the proposed methods give different results when applied to the same data sets. In this book the authors set out their work on a general framework for modeling the transmission of financial crises using latent factor models. They show how their framework encompasses a number of other empirical contagion models and why the results between the models differ. The book builds a framework which begins from considering contagion in the bond markets during 1997-1998 across a number of countries, and culminates in a model which encompasses multiple assets across multiple countries through over a decade of crisis events, from East Asia in 1997-1998 to the subprime crisis of 2008. Program code to support implementation of similar models is available.
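A stylised illustration of the latent factor idea, not the authors' model itself: in the Python sketch below, two markets share exposure to a common factor, and larger loadings during a crisis show up as higher cross-market correlation.

```python
# Hedged sketch: a one-common-factor model of the kind used in this framework.
# Cross-market comovement comes from shared exposure to a latent factor; a rise
# in the loadings during a crisis looks like contagion. Illustrative numbers only.
import numpy as np

rng = np.random.default_rng(9)
n = 1000
factor = rng.standard_normal(n)                          # latent world/crisis factor
calm_loadings, crisis_loadings = (0.3, 0.4), (1.2, 1.5)

def cross_market_corr(loadings):
    lam1, lam2 = loadings
    r1 = lam1 * factor + rng.standard_normal(n)          # country 1 asset returns
    r2 = lam2 * factor + rng.standard_normal(n)          # country 2 asset returns
    return np.corrcoef(r1, r2)[0, 1]

print("calm correlation:  ", round(cross_market_corr(calm_loadings), 2))
print("crisis correlation:", round(cross_market_corr(crisis_loadings), 2))
```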
This textbook discusses central statistical concepts and their use in business and economics. To endure the hardship of abstract statistical thinking, business and economics students need to see interesting applications at an early stage. Accordingly, the book predominantly focuses on exercises, several of which draw on simple applications of non-linear theory. The main body presents central ideas in a simple, straightforward manner; the exposition is concise, without sacrificing rigor. The book bridges the gap between theory and applications, with most exercises formulated in an economic context. Its simplicity of style makes the book suitable for students at any level, and every chapter starts out with simple problems. Several exercises, however, are more challenging, as they are devoted to the discussion of non-trivial economic problems where statistics plays a central part.
This book addresses one of the most important research activities in empirical macroeconomics. It provides a course of advanced but intuitive methods and tools enabling the spatial and temporal disaggregation of basic macroeconomic variables and the assessment of the statistical uncertainty of the outcomes of disaggregation. The empirical analysis focuses mainly on GDP and its growth in the context of Poland. However, all of the methods discussed can be easily applied to other countries. The approach used in the book views spatial and temporal disaggregation as a special case of the estimation of missing observations (a topic in missing-data analysis). The book presents an econometric treatment of Seemingly Unrelated Regression Equations (SURE) models. The main advantage of using the SURE specification to tackle this research problem is that it allows for heterogeneity of the parameters describing the relations between macroeconomic indicators. The book contains the model specification, as well as descriptions of the stochastic assumptions and the resulting procedures of estimation and testing. The method also addresses uncertainty in the estimates produced. All of the necessary tests and assumptions are presented in detail. The results are designed to serve as a source of invaluable information, making regional analyses more convenient and, more importantly, comparable. They will provide a solid basis for conclusions and recommendations concerning regional economic policy in Poland, particularly regarding the assessment of the economic situation. This is essential reading for academics, researchers, and economists with regional analysis as their field of expertise, as well as central bankers and policymakers.
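The following Python sketch is only a stylised illustration of the SURE building block, not the book's disaggregation procedure: feasible GLS for a two-equation SUR system on simulated data.

```python
# Hedged sketch: feasible GLS for a two-equation SUR system, the kind of
# specification the book builds on (hugely simplified; no disaggregation step).
import numpy as np

rng = np.random.default_rng(11)
n = 400
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
e = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=n)  # correlated errors
y1 = 1.0 + 2.0 * x1 + e[:, 0]
y2 = -0.5 + 1.5 * x2 + e[:, 1]

X1 = np.column_stack([np.ones(n), x1])
X2 = np.column_stack([np.ones(n), x2])

# Step 1: equation-by-equation OLS to estimate the error covariance.
b1 = np.linalg.lstsq(X1, y1, rcond=None)[0]
b2 = np.linalg.lstsq(X2, y2, rcond=None)[0]
sigma = np.cov(np.column_stack([y1 - X1 @ b1, y2 - X2 @ b2]), rowvar=False)

# Step 2: stack the system and apply GLS with the estimated covariance.
X = np.block([[X1, np.zeros_like(X2)], [np.zeros_like(X1), X2]])
y = np.concatenate([y1, y2])
omega_inv = np.kron(np.linalg.inv(sigma), np.eye(n))
beta = np.linalg.solve(X.T @ omega_inv @ X, X.T @ omega_inv @ y)
print(np.round(beta, 2))    # approx. [1.0, 2.0, -0.5, 1.5]
```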
Computational finance is increasingly important in the financial industry, as a necessary instrument for applying theoretical models to real-world challenges. Indeed, many models used in practice involve complex mathematical problems, for which an exact or a closed-form solution is not available. Consequently, we need to rely on computational techniques and specific numerical algorithms. This book combines theoretical concepts with practical implementation. Furthermore, the numerical solution of models is exploited both to enhance the understanding of some mathematical and statistical notions and to acquire sound programming skills in MATLAB(R), which are also useful for several other programming languages. The material assumes the reader has a relatively limited knowledge of mathematics, probability, and statistics. Hence, the book contains a short description of the fundamental tools needed to address the two main fields of quantitative finance: portfolio selection and derivatives pricing. Both fields are developed here, with a particular emphasis on portfolio selection, where the author includes an overview of recent approaches. The book gradually takes the reader from a basic to a medium level of expertise, using examples and exercises to simplify the understanding of complex models in finance and giving readers the ability to place financial models in a computational setting. The book is ideal for courses focusing on quantitative finance, asset management, mathematical methods for economics and finance, investment banking, and corporate finance.
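The book works in MATLAB(R); as an illustration of the kind of numerical task it covers, the Python sketch below prices a European call by Monte Carlo and checks it against the Black-Scholes closed form, with arbitrary illustrative parameters.

```python
# Hedged sketch (Python, not the book's MATLAB code): Monte Carlo pricing of a
# European call under geometric Brownian motion, checked against Black-Scholes.
import numpy as np
from scipy.stats import norm

s0, k, r, sigma, t = 100.0, 105.0, 0.02, 0.25, 1.0    # illustrative parameters
rng = np.random.default_rng(8)

z = rng.standard_normal(200_000)
s_t = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)   # terminal prices
mc_price = np.exp(-r * t) * np.maximum(s_t - k, 0.0).mean()

d1 = (np.log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * np.sqrt(t))
d2 = d1 - sigma * np.sqrt(t)
bs_price = s0 * norm.cdf(d1) - k * np.exp(-r * t) * norm.cdf(d2)
print(round(mc_price, 3), round(bs_price, 3))          # the two should be close
```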
Market Analysis for Real Estate is a comprehensive introduction to how real estate markets work and the analytical tools and techniques that can be used to identify and interpret market signals. The markets for space and varied property assets, including residential, office, retail, and industrial, are presented, analyzed, and integrated into a complete understanding of the role of real estate markets within the workings of contemporary urban economies. Unlike other books on market analysis, the economic and financial theory in this book is rigorous and well integrated with the specifics of the real estate market. Furthermore, it is thoroughly explained as it assumes no previous coursework in economics or finance on the part of the reader. The theoretical discussion is backed up with numerous real estate case study examples and problems, which are presented throughout the text to assist both student and teacher. Including discussion questions, exercises, several web links, and online slides, this textbook is suitable for use on a variety of degree programs in real estate, finance, business, planning, and economics at undergraduate and MSc/MBA level. It is also a useful primer for professionals in these disciplines.
The new research method presented in this book ensures that all economic theories are falsifiable and that irrefutable theories are scientifically sound. Figueroa combines the logically consistent aspects of Popperian and process epistemologies in his alpha-beta method to address the widespread problem of too-general empirical research methods used in economics. He argues that scientific rules can be applied to economics to make sense of society, but that they must address the complexity of reality as well as the simplicity of the abstract on which hard sciences can rely. Furthermore, because the alpha-beta method combines approaches to address the difficulties of scientifically analyzing complex society, it also extends to other social sciences that have historically relied on empirical methods. This groundbreaking Pivot is ideal for students and researchers dedicated to promoting the progress of scientific research in all social sciences. |