This collection of papers delivered at the Fifth International Symposium in Economic Theory and Econometrics in 1988 is devoted to the estimation and testing of models that impose relatively weak restrictions on the stochastic behaviour of data. Particularly in highly non-linear models, empirical results are very sensitive to the choice of the parametric form of the distribution of the observable variables, and nonparametric and semiparametric models are often a preferable alternative. Methods and applications that do not require strong parametric assumptions for their validity, that are based on kernels and on series expansions, and methods for independent and dependent observations are investigated and developed in these essays by renowned econometricians.
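The kernel-based methods surveyed in this volume can be illustrated with a minimal sketch. The Python snippet below is not taken from the book; the data, bandwidth and variable names are purely illustrative. It implements a Nadaraya-Watson kernel regression estimator, one of the basic nonparametric tools of the kind discussed.

```python
import numpy as np

def nadaraya_watson(x_grid, x, y, h):
    """Nadaraya-Watson kernel regression with a Gaussian kernel.

    x_grid : points at which to evaluate the estimate
    x, y   : observed regressor and response
    h      : bandwidth (chosen here by eye; plug-in or cross-validation rules are common)
    """
    # Kernel weights K((grid_i - x_j) / h) for every grid point / observation pair
    u = (x_grid[:, None] - x[None, :]) / h
    k = np.exp(-0.5 * u**2)                       # Gaussian kernel (constants cancel)
    return (k * y[None, :]).sum(axis=1) / k.sum(axis=1)

# Simulated example: a nonlinear conditional mean with additive noise
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=500)
y = np.sin(2 * x) + 0.3 * rng.standard_normal(500)

grid = np.linspace(-2, 2, 81)
m_hat = nadaraya_watson(grid, x, y, h=0.2)        # estimated regression function
print(m_hat[:5])
```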
The 2008 credit crisis started with the failure of one large bank: Lehman Brothers. Since then the focus of both politicians and regulators has been on stabilising the economy and preventing future financial instability. At this juncture, we are at the last stage of future-proofing the financial sector by raising capital requirements and tightening financial regulation. Now the policy agenda needs to concentrate on transforming the banking sector into an engine for growth. Reviving competition in the banking sector after the state interventions of the past years is a key step in this process. This book introduces and explains a relatively new concept in competition measurement: the performance-conduct-structure (PCS) indicator. The key idea behind this measure is that the stronger the competitive pressure, the more highly a firm's efficiency is rewarded in terms of market share and profit. The book begins by explaining the financial market's fundamental obstacles to competition, presenting a brief survey of the complex relationship between financial stability and competition. The theoretical contributions of Hay and Liu and of Boone provide the theoretical underpinning for the PCS indicator, while its application to banking and insurance illustrates its empirical qualities. Finally, this book presents a systematic comparison between the results of this approach and all existing methods as applied to 46 countries over the same sample period. This book presents a comprehensive overview of the knowns and unknowns of financial sector competition for commercial and central bankers, policy-makers, supervisors and academics alike.
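As a rough illustration of the idea behind such competition indicators (a sketch only, not the book's PCS estimator), the elasticity of market share with respect to marginal cost can be estimated from a cross-section of firms: a more negative elasticity means efficiency is rewarded more strongly, i.e. competition is fiercer. All data and variable names below are simulated and illustrative.

```python
import numpy as np

def boone_style_elasticity(market_share, marginal_cost):
    """Estimate a Boone-style competition elasticity.

    Regress log market share on log marginal cost; a more negative slope
    means efficiency (low cost) is rewarded more strongly, i.e. fiercer
    competition.  This is a stylised illustration, not the book's exact
    PCS estimator.
    """
    y = np.log(market_share)
    X = np.column_stack([np.ones_like(y), np.log(marginal_cost)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]                                  # the competition elasticity

# Toy cross-section of banks: low-cost banks hold larger market shares
rng = np.random.default_rng(1)
mc = rng.uniform(0.5, 2.0, size=200)                # marginal costs
share = np.exp(-1.5 * np.log(mc) + 0.2 * rng.standard_normal(200))
share /= share.sum()                                # normalise to market shares

print(round(boone_style_elasticity(share, mc), 2))  # close to -1.5 by construction
```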
This book presents estimates of the sources of economic growth in Canada. The experimental measures account for the reproducibility of capital inputs in an input-output framework and show that advances in technology are more important for economic growth than previously estimated. Traditional measures of multifactor productivity advance are also presented. Extensive comparisons relate the two approaches to each other and to labour productivity. The book will be of interest to macroeconomists studying economic growth, capital accumulation, technical advance, growth accounting, and input-output analysis.
This book addresses the disparities that arise when measuring and modeling societal behavior and progress across the social sciences. It looks at why and how different disciplines and even researchers can use the same data and yet come to different conclusions about equality of opportunity, economic and social mobility, poverty and polarization, and conflict and segregation. Because societal behavior and progress exist only in the context of other key aspects, modeling becomes exponentially more complex as more of these aspects are factored into considerations. The content of this book transcends disciplinary boundaries, providing valuable information on measuring and modeling to economists, sociologists, and political scientists who are interested in data-based analysis of pressing social issues.
To fully function in today's global real estate industry, students and professionals increasingly need to understand how to implement essential and cutting-edge quantitative techniques. This book presents an easy-to-read guide to applying quantitative analysis in real estate aimed at non-cognate undergraduate and masters students, and meets the requirements of modern professional practice. Through case studies and examples illustrating applications using data sourced from dedicated real estate information providers and major firms in the industry, the book provides an introduction to the foundations underlying statistical data analysis, common data manipulations and understanding descriptive statistics, before gradually building up to more advanced quantitative analysis, modelling and forecasting of real estate markets. Our examples and case studies within the chapters have been specifically compiled for this book and explicitly designed to help the reader acquire a better understanding of the quantitative methods addressed in each chapter. Our objective is to equip readers with the skills needed to confidently carry out their own quantitative analysis and be able to interpret empirical results from academic work and practitioner studies in the field of real estate and in other asset classes. Both undergraduate and masters level students, as well as real estate analysts in the professions, will find this book to be essential reading.
This book aims to fill the gap between panel data econometrics textbooks and the latest developments in 'big data', especially large-dimensional panel data econometrics. It introduces important research questions in large panels, including testing for cross-sectional dependence, estimation of factor-augmented panel data models, structural breaks in panels and group patterns in panels. To tackle these high-dimensional issues, some techniques used in machine learning approaches are also illustrated. Moreover, Monte Carlo experiments and empirical examples are used to show how to implement these new inference methods. Large-Dimensional Panel Data Econometrics: Testing, Estimation and Structural Changes also introduces new research questions and results from the recent literature in this field.
This proceedings volume presents the latest scientific research and trends in experimental economics, with particular focus on neuroeconomics. Derived from the 2016 Computational Methods in Experimental Economics (CMEE) conference held in Szczecin, Poland, this book features research and analysis of novel computational methods in neuroeconomics. Neuroeconomics is an interdisciplinary field that combines neuroscience, psychology and economics to build a comprehensive theory of decision making. At its core, neuroeconomics analyzes the decision-making process not only in terms of external conditions or psychological aspects, but also from the neuronal point of view by examining the cerebral conditions of decision making. The application of IT enhances the possibilities of conducting such analyses. Such studies are now performed by software that provides interaction among all the participants and possibilities to register their reactions more accurately. This book examines some of these applications and methods. Featuring contributions on both theory and application, this book is of interest to researchers, students, academics and professionals interested in experimental economics, neuroeconomics and behavioral economics.
Financial crises often transmit across geographical borders and different asset classes. Modeling these interactions is empirically challenging, and many of the proposed methods give different results when applied to the same data sets. In this book the authors set out their work on a general framework for modeling the transmission of financial crises using latent factor models. They show how their framework encompasses a number of other empirical contagion models and why the results between the models differ. The framework begins with contagion in the bond markets of a number of countries during 1997-1998 and culminates in a model that encompasses multiple assets across multiple countries through over a decade of crisis events, from East Asia in 1997-1998 to the subprime crisis of 2008. Program code to support implementation of similar models is available.
This book addresses one of the most important research activities in empirical macroeconomics. It provides a course of advanced but intuitive methods and tools enabling the spatial and temporal disaggregation of basic macroeconomic variables and the assessment of the statistical uncertainty of the outcomes of disaggregation. The empirical analysis focuses mainly on GDP and its growth in the context of Poland. However, all of the methods discussed can be easily applied to other countries. The approach used in the book views spatial and temporal disaggregation as a special case of the estimation of missing observations (a topic in missing data analysis). The book presents an econometric course on models of Seemingly Unrelated Regression Equations (SURE). The main advantage of using the SURE specification to tackle the research problem presented here is that it allows for heterogeneity of the parameters describing relations between macroeconomic indicators. The book contains model specification, as well as descriptions of stochastic assumptions and resulting procedures of estimation and testing. The method also addresses uncertainty in the estimates produced. All of the necessary tests and assumptions are presented in detail. The results are designed to serve as a source of invaluable information, making regional analyses more convenient and, more importantly, comparable. They will create a solid basis for making conclusions and recommendations concerning regional economic policy in Poland, particularly regarding the assessment of the economic situation. This is essential reading for academics, researchers, and economists with regional analysis as their field of expertise, as well as central bankers and policymakers.
Market Analysis for Real Estate is a comprehensive introduction to how real estate markets work and the analytical tools and techniques that can be used to identify and interpret market signals. The markets for space and varied property assets, including residential, office, retail, and industrial, are presented, analyzed, and integrated into a complete understanding of the role of real estate markets within the workings of contemporary urban economies. Unlike other books on market analysis, the economic and financial theory in this book is rigorous and well integrated with the specifics of the real estate market. Furthermore, it is thoroughly explained as it assumes no previous coursework in economics or finance on the part of the reader. The theoretical discussion is backed up with numerous real estate case study examples and problems, which are presented throughout the text to assist both student and teacher. Including discussion questions, exercises, several web links, and online slides, this textbook is suitable for use on a variety of degree programs in real estate, finance, business, planning, and economics at undergraduate and MSc/MBA level. It is also a useful primer for professionals in these disciplines.
In this book, Andrew Harvey sets out to provide a unified and comprehensive theory of structural time series models. Unlike the traditional ARIMA models, structural time series models consist explicitly of unobserved components, such as trends and seasonals, which have a direct interpretation. As a result the model selection methodology associated with structural models is much closer to econometric methodology. The link with econometrics is made even closer by the natural way in which the models can be extended to include explanatory variables and to cope with multivariate time series. From the technical point of view, state space models and the Kalman filter play a key role in the statistical treatment of structural time series models. The book includes a detailed treatment of the Kalman filter. This technique was originally developed in control engineering, but is becoming increasingly important in fields such as economics and operations research. This book is concerned primarily with modelling economic and social time series, and with addressing the special problems which the treatment of such series poses. The properties of the models and the methodological techniques used to select them are illustrated with various applications. These range from the modelling of trends and cycles in US macroeconomic time series to an evaluation of the effects of seat belt legislation in the UK.
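For readers unfamiliar with the state space machinery, a minimal sketch of the Kalman filter for the simplest structural model, the local level (random walk plus noise) model, is given below. It is illustrative only: the variances and data are simulated, and the book's treatment is far more general (multivariate models, explanatory variables, exact initialisation).

```python
import numpy as np

def kalman_filter_local_level(y, sigma_eps2, sigma_eta2, a0=0.0, p0=1e7):
    """Kalman filter for the local level model
        y_t  = mu_t + eps_t,       eps_t ~ N(0, sigma_eps2)
        mu_t = mu_{t-1} + eta_t,   eta_t ~ N(0, sigma_eta2)
    Returns the filtered estimates of the unobserved level mu_t.
    """
    a, p = a0, p0                        # diffuse prior mean/variance for the state
    filtered = np.empty(len(y))
    for t, obs in enumerate(y):
        # Prediction step
        a_pred, p_pred = a, p + sigma_eta2
        # Update step
        f = p_pred + sigma_eps2          # prediction-error variance
        k = p_pred / f                   # Kalman gain
        a = a_pred + k * (obs - a_pred)
        p = (1 - k) * p_pred
        filtered[t] = a
    return filtered

# Simulated random-walk-plus-noise series
rng = np.random.default_rng(2)
mu = np.cumsum(0.1 * rng.standard_normal(300))
y = mu + 0.5 * rng.standard_normal(300)

level = kalman_filter_local_level(y, sigma_eps2=0.25, sigma_eta2=0.01)
print(level[-5:])
```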
This volume of Advances in Econometrics focuses on recent developments in the use of structural econometric models in empirical economics. The papers in this volume are divided into three broad groups. The first part looks at recent developments in the estimation of dynamic discrete choice models. This includes using new estimation methods for these models based on Euler equations, estimation using sieve approximation of a high-dimensional state space, the identification of Markov dynamic games with persistent unobserved state variables, and developing tests of monotone comparative statics in models of multiple equilibria. The second part looks at recent advances in the area of empirical matching models. The papers in this section look at developing estimators for matching models based on stability conditions, estimating matching surplus functions using generalized entropy functions, and solving for the fixed point in the Choo-Siow matching model using a contraction mapping formulation. While the issue of incomplete, or partial, identification of model parameters is touched upon in some of the foregoing chapters, two chapters focus on this issue, in the context of testing for monotone comparative statics in models with multiple equilibria and of estimating supermodular games under the restriction that players' strategies be rationalizable. The last group of three papers looks at empirical applications using structural econometric models. Two applications apply matching models, to address endogenous matching in the loan spread equation and to endogenize marriage in the collective model of intrahousehold allocation. Another application looks at the market power of condominium developers in the Japanese housing market in the 1990s.
The contents of this volume comprise the proceedings of the International Symposia in Economic Theory and Econometrics conference held in 1987 at the IC² (Innovation, Creativity, and Capital) Institute at the University of Texas at Austin. The essays present fundamental new research on the analysis of complicated outcomes in relatively simple macroeconomic models. The book covers econometric modelling and time series analysis techniques in five parts. Part I focuses on sunspot equilibria, the study of uncertainty generated by nonstochastic economic models. Part II examines the more traditional examples of deterministic chaos: bubbles, instability, and hyperinflation. Part III contains the most current literature dealing with empirical tests for chaos and strange attractors. Part IV deals with chaos and informational complexity. Part V, Nonlinear Econometric Modelling, includes tests for and applications of nonlinearity.
Emphasizing the impact of computer software and computational technology on econometric theory and development, this text presents recent advances in the application of computerized tools to econometric techniques and practices, focusing on current innovations in Monte Carlo simulation, computer-aided testing, model selection, and Bayesian methodology for improved econometric analyses.
The new research method presented in this book ensures that all economic theories are falsifiable and that irrefutable theories are scientifically sound. Figueroa combines the logically consistent aspects of Popperian and process epistemologies in his alpha-beta method to address the widespread problem of too-general empirical research methods used in economics. He argues that scientific rules can be applied to economics to make sense of society, but that they must address the complexity of reality as well as the simplicity of the abstract on which hard sciences can rely. Furthermore, because the alpha-beta method combines approaches to address the difficulties of scientifically analyzing complex society, it also extends to other social sciences that have historically relied on empirical methods. This groundbreaking Pivot is ideal for students and researchers dedicated to promoting the progress of scientific research in all social sciences.
This book covers recent advances in efficiency evaluations, most notably Data Envelopment Analysis (DEA) and Stochastic Frontier Analysis (SFA) methods. It introduces the underlying theories, shows how to make the relevant calculations and discusses applications. The aim is to make the reader aware of the pros and cons of the different methods and to show how to use these methods in both standard and non-standard cases. Several software packages have been developed to solve some of the most common DEA and SFA models. This book relies on R, a free, open source software environment for statistical computing and graphics. This enables the reader to solve not only standard problems, but also many other problem variants. Using R, one can focus on understanding the context and developing a good model. One is not restricted to predefined model variants and to a one-size-fits-all approach. To facilitate the use of R, the authors have developed an R package called Benchmarking, which implements the main methods within both DEA and SFA. The book uses mathematical formulations of models and assumptions, but it de-emphasizes formal proofs, in part by placing them in appendices and in part by referring to the original sources. Moreover, the book emphasizes the usage of the theories and the interpretations of the mathematical formulations. It includes a series of small examples, graphical illustrations, simple extensions and questions to think about. Also, it combines the formal models with less formal economic and organizational thinking. Last but not least, it discusses some larger applications with significant practical impacts, including the design of benchmarking-based regulations of energy companies in different European countries, and the development of merger control programs for competition authorities.
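The book's calculations use the authors' R package Benchmarking; purely as an illustration of the linear programme underlying DEA, the sketch below solves the standard input-oriented, constant-returns-to-scale envelopment problem with scipy. It is a simplified re-implementation under the usual CCR assumptions, not the package itself, and the example data are made up.

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_efficiency(X, Y):
    """Input-oriented, constant-returns-to-scale DEA efficiency scores.

    X : (n_firms, n_inputs) input matrix
    Y : (n_firms, n_outputs) output matrix
    For each firm o, solve  min theta  s.t.  X' @ lam <= theta * x_o,
    Y' @ lam >= y_o,  lam >= 0  (the standard Farrell/CCR envelopment LP).
    """
    n, m = X.shape
    s = Y.shape[1]
    scores = np.empty(n)
    for o in range(n):
        c = np.zeros(1 + n)
        c[0] = 1.0                                   # minimise theta
        # Input constraints:  -theta * x_o + X' @ lam <= 0
        A_in = np.hstack([-X[o].reshape(m, 1), X.T])
        b_in = np.zeros(m)
        # Output constraints: -Y' @ lam <= -y_o
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        b_out = -Y[o]
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.concatenate([b_in, b_out]), method="highs")
        scores[o] = res.fun
    return scores

# Three firms, one input, one output
X = np.array([[2.0], [4.0], [3.0]])
Y = np.array([[1.0], [2.0], [1.0]])
print(dea_input_efficiency(X, Y))   # firm 3 uses 3 units for 1 output -> score 2/3
```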
Complex-Valued Modeling in Economics and Finance outlines the theory, methodology, and techniques behind modeling economic processes using complex variables theory. The theory of functions of complex variables is widely used in many scientific fields, since work with complex variables can appropriately describe different complex real-life processes. Many economic indicators and factors reflecting the properties of the same object can be represented in the form of complex variables. By describing the relationship between various indicators using the functions of these variables, new economic and financial models can be created which are often more accurate than the models of real variables. This book pays critical attention to the use of complex variables in production and stock market modeling, modeling of the illegal economy, time series forecasting, complex autoregressive models, and economic dynamics modeling. Very little has been published on this topic and its applications within the fields of economics and finance, and this volume appeals to graduate-level students studying economics, academic researchers in economics and finance, and economists.
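As a minimal, purely illustrative sketch of this kind of modelling (not one of the book's specific models), two real indicators can be paired into one complex variable and a complex-valued linear relationship fitted by least squares; all variable names and data below are hypothetical.

```python
import numpy as np

# Pair two real indicators into one complex variable and fit z_y = a + b * z_x.
rng = np.random.default_rng(4)
n = 400

# e.g. real part = an output indicator, imaginary part = a price indicator (illustrative)
z_x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
true_a, true_b = 0.5 + 0.2j, 1.3 - 0.7j
z_y = true_a + true_b * z_x + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Complex least squares: np.linalg.lstsq handles complex design matrices directly
Z = np.column_stack([np.ones(n, dtype=complex), z_x])
coef, *_ = np.linalg.lstsq(Z, z_y, rcond=None)
print(np.round(coef, 2))          # approximately [0.5+0.2j, 1.3-0.7j]
```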
This book brings together presentations of some of the fundamental new research that has begun to appear in the areas of dynamic structural modeling, nonlinear structural modeling, time series modeling, nonparametric inference, and chaotic attractor inference. The contents of this volume comprise the proceedings of the third of a conference series entitled International Symposia in Economic Theory and Econometrics. This conference was held at the IC² (Innovation, Creativity and Capital) Institute at the University of Texas at Austin on May 22-23, 1986.
This volume is dedicated to two recent intensive areas of research in the econometrics of panel data, namely nonstationary panels and dynamic panels. It includes a comprehensive survey of the nonstationary panel literature, including panel unit root tests, spurious panel regressions and panel cointegration.
Over the last decade, dynamical systems theory and related nonlinear methods have had a major impact on the analysis of time series data from complex systems. Recent developments in mathematical methods of state-space reconstruction, time-delay embedding, and surrogate data analysis, coupled with readily accessible and powerful computational facilities used in gathering and processing massive quantities of high-frequency data, have provided theorists and practitioners unparalleled opportunities for exploratory data analysis, modelling, forecasting, and control.
An understanding of the behaviour of financial assets and the evolution of economies has never been as important as today. This book looks at these complex systems from the perspective of the physicist. So-called 'econophysics' and its application to finance has made great strides in recent years. Less emphasis has been placed on the broader subject of macroeconomics, and many economics students are still taught traditional neo-classical economics. The reader is given a general primer in statistical physics, probability theory, and the use of correlation functions. Much of the mathematics that is developed is frequently no longer included in undergraduate physics courses. The statistical physics of Boltzmann and Gibbs is one of the oldest disciplines within physics, and it can be argued that it was first applied to ensembles of molecules, as opposed to social agents, only by way of historical accident. The authors argue by analogy that the theory can be applied directly to economic systems comprising assemblies of interacting agents. The necessary tools and mathematics are developed in a clear and concise manner. The body of work, now termed econophysics, is then developed. The authors show where traditional methods break down, and show how the probability distributions and correlation functions can be properly understood using high-frequency data. Recent work by the physics community on risk and market crashes is discussed, together with new work on betting markets as well as studies of speculative peaks that occur in housing markets. The second half of the book continues the empirical approach, showing how, by analogy with thermodynamics, a self-consistent attack can be made on macroeconomics. This leads naturally to economic production functions being equated to entropy functions, a new concept for economists. Issues relating to non-equilibrium naturally arise during the development and application of this approach to economics. These are discussed in the context of superstatistics and adiabatic processes. As a result it does seem ultimately possible to reconcile the approach with non-equilibrium systems, and the ideas are applied to study income and wealth distributions, which, with their power-law distribution functions, have puzzled many researchers ever since Pareto discovered them over 100 years ago. This book takes a pedagogical approach to these topics and is aimed at final-year undergraduate and beginning graduate or postgraduate students in physics, economics, and business. However, the experienced researcher and quant should also find much of interest.
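The power-law tails mentioned at the end of this description can be illustrated with a standard Hill estimator sketch. The data, tail exponent and cutoff below are simulated and illustrative; this is not an example drawn from the book.

```python
import numpy as np

def hill_tail_index(x, k):
    """Hill estimator of the Pareto tail exponent alpha from the top-k order statistics."""
    x = np.sort(x)[::-1]                 # sort descending
    top, threshold = x[:k], x[k]
    return k / np.sum(np.log(top / threshold))

# Simulated 'wealth' sample with a Pareto tail (alpha = 2), purely illustrative
rng = np.random.default_rng(5)
wealth = rng.pareto(2.0, size=100_000) + 1.0       # classical Pareto with scale 1
print(round(hill_tail_index(wealth, k=1_000), 2))  # roughly 2
```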
Microsimulation models provide an exciting new tool for analysing the distributional impact and cost of government policy changes. They can also be used to analyse the current or future structure of society. This volume contains papers describing new developments at the frontiers of microsimulation modelling, and draws upon experiences in a wide range of countries. Some papers aim to share with other modellers the experience gained in designing and running microsimulation models and using them in government policy formulation. They also examine issues at the frontiers of the discipline, such as how to include usage of health, education and welfare services in models. Other chapters focus upon describing the innovative new approaches being taken in dynamic microsimulation modelling. They describe some of the policy applications for which dynamic models are being used in Europe, Australia and New Zealand. Topics covered include retirement income modelling, pension reform, the behavioural impact of tax changes, child care demand, and the inclusion of government services within models. Attention is also given to validating the results of models and estimating their statistical reliability.
First published in 1992, The Efficiency of New Issue Markets provides a comprehensive overview of under-pricing and through this assesses the efficiency of new issue markets. The book provides a further theoretical development of the adverse selection model of the new issue market and addresses the hypothesis that the method of distribution of new issues has an important bearing on the efficiency of these markets. In doing this, the book tests the efficiency of the Offer for Sale new issue market, which demonstrates the validity of the adverse selection model and contradicts the monopsony power hypothesis. It also examines the relative efficiency of the new issue markets, demonstrating the importance of distribution in determining relative efficiency.
This book explores the possibility of using social media data for detecting socio-economic recovery activities. In the last decade, there have been intensive research activities focusing on social media during and after disasters. This approach, which views people's communication on social media as a sensor for real-time situations, has been widely adopted as the "people as sensors" approach. Furthermore, to improve recovery efforts after large-scale disasters, detecting communities' real-time recovery situations is essential, since conventional socio-economic recovery indicators, such as governmental statistics, are not published in real time. Thanks to its timeliness, using social media data can fill the gap. Motivated by this possibility, this book especially focuses on the relationships between people's communication on Twitter and Facebook pages, and socio-economic recovery activities as reflected in the used-car market data and the housing market data in the case of two major disasters: the Great East Japan Earthquake and Tsunami of 2011 and Hurricane Sandy in 2012. The book pursues an interdisciplinary approach, combining, for example, disaster recovery studies, crisis informatics, and economics. In terms of its contributions, firstly, the book sheds light on the "people as sensors" approach for detecting socio-economic recovery activities, which has not been thoroughly studied to date but has the potential to improve situation awareness during the recovery phase. Secondly, the book proposes new socio-economic recovery indicators: used-car market data and housing market data. Thirdly, in the context of using social media during the recovery phase, the results demonstrate the importance of distinguishing between social media data posted both by people who are at or near disaster-stricken areas and by those who are farther away.
This book provides the tools and concepts necessary to study the behavior of econometric estimators and test statistics in large samples. An econometric estimator is a solution to an optimization problem; that is, a problem that requires a body of techniques to determine a specific solution in a defined set of possible alternatives that best satisfies a selected objective function or set of constraints. Thus, this highly mathematical book investigates situations concerning large numbers, in which the assumptions of the classical linear model fail. Economists, of course, face these situations often.
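To make the "estimator as a solution to an optimization problem" idea concrete, the sketch below computes an M-estimator for a linear model with heavy-tailed errors by numerically minimising a Huber criterion, and shows the estimates tightening around the truth as the sample grows. It is an illustration under assumed data and tuning constants, not an example from the book.

```python
import numpy as np
from scipy.optimize import minimize

def huber_loss(r, c=1.345):
    """Huber's rho function: quadratic for small residuals, linear in the tails."""
    return np.where(np.abs(r) <= c, 0.5 * r**2, c * np.abs(r) - 0.5 * c**2)

def m_estimate(y, X):
    """Extremum (M-) estimator: the coefficient vector minimising the sample
    average of a convex criterion function, found numerically."""
    objective = lambda b: np.mean(huber_loss(y - X @ b))
    return minimize(objective, x0=np.zeros(X.shape[1]), method="BFGS").x

# Linear model with heavy-tailed (Student-t) errors, where classical
# normal-error assumptions fail; the estimates still concentrate around
# the truth as n grows, the kind of large-sample behaviour the book studies.
rng = np.random.default_rng(3)
beta = np.array([1.0, -2.0])
for n in (200, 20_000):
    X = np.column_stack([np.ones(n), rng.standard_normal(n)])
    y = X @ beta + rng.standard_t(df=3, size=n)
    print(n, np.round(m_estimate(y, X), 3))
```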