This book presents a comprehensive study of multivariate time series with linear state space structure. The emphasis is placed both on the clarity of the theoretical concepts and on efficient algorithms for implementing the theory. In particular, it investigates the relationship between VARMA and state space models, including canonical forms. It also highlights the relationship between Wiener-Kolmogorov and Kalman filtering, both with an infinite and a finite sample. A further strength of the book lies in the numerous algorithms it includes for state space models that take advantage of the recursive nature of these models; many of them can be made robust, fast, reliable and efficient. The book is accompanied by a MATLAB package called SSMMATLAB and a webpage presenting implemented algorithms with many examples and case studies. Though it lays a solid theoretical foundation, the book also focuses on practical application and includes exercises in each chapter. It is intended for researchers and students who work with linear state space models, are familiar with linear algebra and have some knowledge of statistics.
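To illustrate the recursive structure such algorithms exploit, here is a minimal Kalman filter for a univariate local-level model. This sketch is not taken from the book or from SSMMATLAB; the noise variances and the diffuse initialization are illustrative assumptions.

```python
import numpy as np

# Minimal Kalman filter for a univariate local-level model:
#   state:       alpha_{t+1} = alpha_t + eta_t,  eta_t ~ N(0, q)
#   observation: y_t = alpha_t + eps_t,          eps_t ~ N(0, r)
# q, r and the diffuse initialization are illustrative assumptions.
def kalman_filter(y, q=0.1, r=1.0, a=0.0, p=1e6):
    filtered = []
    for obs in y:
        p_pred = p + q                 # predict: state variance grows by q
        k = p_pred / (p_pred + r)      # Kalman gain
        a = a + k * (obs - a)          # update state mean with the innovation
        p = (1.0 - k) * p_pred         # update state variance
        filtered.append(a)
    return np.array(filtered)

rng = np.random.default_rng(0)
level = np.cumsum(rng.normal(0.0, 0.3, 200))   # simulated random-walk level
y = level + rng.normal(0.0, 1.0, 200)          # noisy observations
print(kalman_filter(y)[-3:])                   # filtered level estimates
```

Each observation updates the state estimate in constant time; this recursion is what makes state space algorithms fast, and, when implemented carefully, numerically reliable.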
Central Bank Balance Sheet and Real Business Cycles argues that a deeper comprehension of changes to the central bank balance sheet can lead to more effective policymaking. Any transaction the central bank engages in, whether issuing currency, conducting foreign exchange operations, investing its own funds, intervening to provide emergency liquidity assistance or carrying out monetary policy operations, influences its balance sheet. Despite this, many central banks throughout the world have largely ignored balance sheet movements and have instead focused on setting interest rates. In this book, Mustapha Abiodun Akinkunmi highlights the challenges and controversies faced by central banks, past and present, when implementing policies, and analyzes the links between these policies, the central bank balance sheet, and the consequences for economies as a whole. He argues that the composition and evolution of the central bank balance sheet provide a valuable basis for understanding the needs of an economy, and are an important tool in developing strategies that would most effectively achieve policy goals. This book is an important resource for anyone interested in monetary policy or whose work is affected by the policies of central banks.
This is the second volume in a two-part series on frontiers in regional research. It identifies methodological advances as well as trends and future developments in regional systems modelling and open science. Building on recent methodological and modelling advances, as well as on extensive policy-analysis experience, top international regional scientists identify and evaluate emerging new conceptual and methodological trends and directions in regional research. Topics such as dynamic interindustry modelling, computable general equilibrium models, exploratory spatial data analysis, geographic information science, spatial econometrics and other advanced methods are the central focus of this book. The volume provides insights into the latest developments in object orientation, open source, and workflow systems, all in support of open science. It will appeal to a wide readership, from regional scientists and economists to geographers and quantitatively oriented regional planners, as well as readers in related disciplines. It offers a source of relevant information for academic researchers and policy analysts in government, and is also suitable for advanced courses on regional and spatial science, economics and political science.
Principles of Econometrics, 4th Edition, is an introductory book for students in economics and finance, designed to provide an understanding of why econometrics is necessary and a working knowledge of basic econometric tools. This latest edition is updated to reflect the current state of economic and financial markets and provides new content on Kernel Density Fitting and Analysis of Treatment Effects. It offers new end-of-chapter questions and problems in each chapter, an updated and comprehensive glossary of terms, and a summary of probability and statistics. The text applies basic econometric tools to modeling, estimation, inference, and forecasting through real-world problems, and critically evaluates the results and conclusions of others who use basic econometric tools. Furthermore, it provides a foundation for the further study of econometrics and more advanced techniques.
A comprehensive and up-to-date introduction to the mathematics that all economics students need to know. Probability theory is the quantitative language used to handle uncertainty and is the foundation of modern statistics. Probability and Statistics for Economists provides graduate and PhD students with an essential introduction to mathematical probability and statistical theory, which are the basis of the methods used in econometrics. This incisive textbook teaches fundamental concepts, emphasizes modern, real-world applications, and gives students an intuitive understanding of the mathematics that every economist needs to know. It covers probability and statistics with mathematical rigor while emphasizing intuitive explanations that are accessible to economics students of all backgrounds; discusses random variables, parametric and multivariate distributions, sampling, the law of large numbers, the central limit theorem, maximum likelihood estimation, numerical optimization, hypothesis testing, and more; features hundreds of exercises that enable students to learn by doing; and includes an in-depth appendix summarizing important mathematical results as well as a wealth of real-world examples. It can serve as a core textbook for a first-semester PhD course in econometrics and as a companion book to Bruce E. Hansen's Econometrics, and is also an invaluable reference for researchers and practitioners.
This book focuses on the application of revenue management in the manufacturing industry. Though previous books have extensively studied the application of revenue management in the service industry, little attention has been paid to its application in manufacturing, despite the fact that applying it in this context can be highly profitable and instrumental to corporate success. With this work, the author demonstrates that the manufacturing industry also fulfills the prerequisites for the application of revenue management. The book includes a summary of empirical studies that effectively illustrate how revenue management is currently being applied across Europe and North America, and what the profit potential is.
This text presents modern developments in time series analysis and focuses on their application to economic problems. The book first introduces the fundamental concept of a stationary time series and the basic properties of covariance, investigating the structure and estimation of autoregressive-moving average (ARMA) models and their relations to the covariance structure. The book then moves on to non-stationary time series, highlighting the consequences of non-stationarity for modeling and forecasting and presenting standard statistical tests and regressions. Next, the text discusses volatility models and their applications in the analysis of financial market data, focusing on generalized autoregressive conditional heteroskedastic (GARCH) models. The second part of the text is devoted to multivariate processes, such as vector autoregressive (VAR) models and structural vector autoregressive (SVAR) models, which have become the main tools in empirical macroeconomics. The text concludes with a discussion of co-integrated models and the Kalman filter, which is being used with increasing frequency. Mathematically rigorous, yet application-oriented, this self-contained text will help students develop a deeper understanding of the theory and a better command of the models that are vital to the field. Assuming a basic knowledge of statistics and/or econometrics, it is best suited for advanced undergraduate and beginning graduate students.
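As a minimal illustration of the first of these building blocks (this is not code from the text; the coefficient value and sample size are assumptions), one can simulate a stationary AR(1) process and recover its coefficient by least squares:

```python
import numpy as np

# Simulate y_t = phi * y_{t-1} + e_t with |phi| < 1, then estimate phi
# by regressing y_t on y_{t-1}. phi_true and n are illustrative choices.
rng = np.random.default_rng(42)
phi_true, n = 0.7, 1000
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + rng.normal()

lag, cur = y[:-1], y[1:]
phi_hat = (lag @ cur) / (lag @ lag)   # OLS slope through the origin
print(f"true phi = {phi_true}, estimated phi = {phi_hat:.3f}")
```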
This book is the outcome of the CIMPA School on Statistical Methods and Applications in Insurance and Finance, held in Marrakech and Kelaat M'gouna (Morocco) in April 2013. It presents two lectures and seven refereed papers from the school, offering the reader important insights into key topics. The first of the lectures, by Frederic Viens, addresses risk management via hedging in discrete and continuous time, while the second, by Boualem Djehiche, reviews statistical estimation methods applied to life and disability insurance. The refereed papers offer diverse perspectives and extensive discussions on subjects including optimal control, financial modeling using stochastic differential equations, pricing and hedging of financial derivatives, and sensitivity analysis. Each chapter of the volume includes a comprehensive bibliography to promote further research.
This book provides the tools and concepts necessary to study the behavior of econometric estimators and test statistics in large samples. An econometric estimator is a solution to an optimization problem; that is, a problem of determining, within a defined set of possible alternatives, the specific solution that best satisfies a selected objective function or set of constraints. Thus, this highly mathematical book investigates large-sample situations in which the assumptions of the classical linear model fail. Economists, of course, face these situations often.
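Ordinary least squares is the canonical instance of this optimization view: the estimator is the coefficient vector that minimizes a squared-error objective function,

\[
\hat{\beta} \;=\; \arg\min_{\beta} \sum_{i=1}^{n} \left(y_i - x_i'\beta\right)^2 \;=\; (X'X)^{-1}X'y ,
\]

and large-sample theory then asks how \(\hat{\beta}\) behaves as \(n \to \infty\), in particular when the classical assumptions fail.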
Microeconometrics Using Stata, Second Edition is an invaluable reference for researchers and students interested in applied microeconometric methods. Like previous editions, this text covers all the classic microeconometric techniques, ranging from linear models to instrumental-variables regression to panel-data estimation to nonlinear models such as probit, tobit, Poisson, and choice models. Each of these discussions has been updated to show the most modern implementation in Stata, and many include additional explanation of the underlying methods. In addition, the authors introduce readers to performing simulations in Stata and then use simulations to illustrate methods in other parts of the book. They even teach you how to code your own estimators in Stata. The second edition is greatly expanded: the new material is so extensive that the text now comprises two volumes. In addition to the classics, the book now teaches recently developed econometric methods and the methods newly added to Stata. Specifically, it includes entirely new chapters on duration models; randomized control trials and exogenous treatment effects; endogenous treatment effects; models for endogeneity and heterogeneity, including finite mixture models, structural equation models, and nonlinear mixed-effects models; spatial autoregressive models; semiparametric regression; the lasso for prediction and inference; and Bayesian analysis. Anyone interested in learning classic and modern econometric methods will find this the perfect companion. And those who apply these methods to their own data will return to this reference over and over as they need to implement the various techniques described in this book.
Today, most money is credit money, created by commercial banks. While credit can finance innovation, excessive credit can lead to boom/bust cycles, such as the recent financial crisis. This highlights how crucial the organization of our monetary system is to stability. One way to achieve stability is to separate the unit of account from the medium of exchange, and in pre-modern Europe such a separation existed. This new volume examines this idea of monetary separation and the history of monetary arrangements in the North and Baltic Seas region, from the Hanseatic League onwards. The book provides a theoretical analysis of four historical cases in the Baltic and North Seas region, with a view to examining the evolution of monetary arrangements from a new monetary economics perspective. Since the objective exchange value of money (its purchasing power) reflects subjective individual valuations of commodities, the author assesses these historical cases by means of exchange rates. Using theories from new monetary economics, the book explores how units of account and their media of exchange evolved as social conventions, and offers new insight into the separation between the two. Through this exploration, it puts forward that money is a social institution, a clearing device for the settlement of accounts, and so the value of money, or a separate unit of account, ultimately results from the size of its network of users. The History of Money and Monetary Arrangements offers a highly original new insight into monetary arrangements as an evolutionary process. It will be of great interest to an international audience of scholars and students, including those with an interest in economic history, evolutionary economics and new monetary economics.
Developed from the author's course on Monte Carlo simulation at Brown University, Monte Carlo Simulation with Applications to Finance provides a self-contained introduction to Monte Carlo methods in financial engineering. It is suitable for advanced undergraduate and graduate students taking a one-semester course or for practitioners in the financial industry. The author first presents the necessary mathematical tools for simulation, arbitrage-free option pricing, and the basic implementation of Monte Carlo schemes. He then describes variance reduction techniques, including control variates, stratification, conditioning, importance sampling, and cross-entropy. The text concludes with stochastic calculus and the simulation of diffusion processes. Requiring only some familiarity with probability and statistics, the book keeps much of the mathematics at an informal level and avoids technical measure-theoretic jargon to provide a practical understanding of the basics. It includes a large number of examples as well as MATLAB coding exercises that are designed in a progressive manner so that no prior experience with MATLAB is needed.
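Here is a minimal sketch of one of these variance reduction techniques, written in Python rather than the book's MATLAB; the model and all parameter values are illustrative assumptions. It prices a European call under geometric Brownian motion, using the terminal stock price, whose mean is known in closed form, as a control variate.

```python
import numpy as np

# Control-variate Monte Carlo for a European call under GBM. The terminal
# price S_T has known mean s0 * exp(r * t), so it serves as the control.
# All parameter values below are illustrative assumptions.
rng = np.random.default_rng(7)
s0, k, r, sigma, t, n = 100.0, 105.0, 0.05, 0.2, 1.0, 100_000

z = rng.standard_normal(n)
st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
payoff = np.exp(-r * t) * np.maximum(st - k, 0.0)

plain = payoff.mean()                               # plain Monte Carlo estimate
beta = np.cov(payoff, st)[0, 1] / st.var()          # near-optimal coefficient
cv = payoff - beta * (st - s0 * np.exp(r * t))      # control-variate estimator
print(f"plain MC: {plain:.4f}  control variate: {cv.mean():.4f}")
print(f"std reduction: {payoff.std():.4f} -> {cv.std():.4f}")
```

The control-variate estimator is unbiased for the same price but has a smaller standard deviation whenever the payoff and the control are correlated.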
"Bayesian Econometrics" illustrates the scope and diversity of modern applications, reviews some recent advances, and highlights many desirable aspects of inference and computations. It begins with an historical overview by Arnold Zellner who describes key contributions to development and makes predictions for future directions. In the second paper, Giordani and Kohn makes suggestions for improving Markov chain Monte Carlo computational strategies. The remainder of the book is categorized according to microeconometric and time-series modeling. Models considered include an endogenous selection ordered probit model, a censored treatment-response model, equilibrium job search models and various other types. These are used to study a variety of applications for example dental insurance and care, educational attainment, voter opinions and the marketing share of various brands and an aggregate cross-section production function. Models and topics considered include the potential problem of improper posterior densities in a variety of dynamic models, selection and averaging for forecasting with vector autoregressions, a consumption capital-asset pricing model and various others. Applications involve U.S. macroeconomic variables, exchange rates, an investigation of purchasing power parity, data from London Metals Exchange, international automobile production data, and data from the Asian stock market.
Nonlinear models have been used extensively in economics and finance. Recent literature on the topic has shown that a large number of series exhibit nonlinear dynamics as opposed to linear dynamics. Incorporating these concepts involves deriving and estimating nonlinear time series models, which have typically taken the form of Threshold Autoregression (TAR) models, Exponential Smooth Transition (ESTAR) models, and Markov Switching (MS) models, among several others. This edited volume provides a timely overview of nonlinear estimation techniques, offering new methods and insights into nonlinear time series analysis. It features cutting-edge research from leading academics in economics, finance, and business management, and focuses on such topics as Zero-Information-Limit-Conditions, using Markov switching models to analyze economic series, and how best to distinguish between competing nonlinear models. The principles and techniques in this book will appeal to econometricians, finance professors teaching quantitative finance, researchers, and graduate students interested in learning how to apply advances in nonlinear time series modeling to solve complex problems in economics and finance.
Davidson and MacKinnon have written an outstanding textbook for graduate students in econometrics, covering both basic and advanced topics and using geometrical proofs throughout for clarity of exposition. The book offers a unified theoretical perspective and emphasizes the practical applications of modern theory.
The book provides a description of the process of health economic evaluation and modelling for cost-effectiveness analysis, particularly from the perspective of a Bayesian statistical approach. Some relevant theory and introductory concepts are presented using practical examples and two running case studies. The book also describes in detail how to perform health economic evaluations using the R package BCEA (Bayesian Cost-Effectiveness Analysis). BCEA can be used to post-process the results of a Bayesian cost-effectiveness model and perform advanced analyses, producing standardised and highly customisable outputs. The book presents all the features of the package, including its many functions and their practical application, as well as its user-friendly web interface. It is a valuable resource for statisticians and practitioners working in the field of health economics who want to simplify and standardise their workflow, for example in the preparation of dossiers in support of marketing authorisation, or of academic and scientific publications.
This volume deals with two complementary topics. On the one hand, the book deals with the problem of determining the probability distribution of a positive compound random variable, a problem which appears in the banking and insurance industries, in many areas of operational research and in reliability problems in the engineering sciences. On the other hand, the methodology proposed to solve such problems, which is based on an application of the maximum entropy method to invert the Laplace transform of the distributions, can be applied to many other problems. The book contains applications to a large variety of problems, including the problem of dependence in the sample data used to estimate empirically the Laplace transform of the random variable. Contents: Introduction; Frequency models; Individual severity models; Some detailed examples; Some traditional approaches to the aggregation problem; Laplace transforms and fractional moment problems; The standard maximum entropy method; Extensions of the method of maximum entropy; Superresolution in maxentropic Laplace transform inversion; Sample data dependence; Disentangling frequencies and decompounding losses; Computations using the maxentropic density; Review of statistical procedures.
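As a plain Monte Carlo illustration of the aggregation problem described above (this is not the book's maximum entropy inversion method, and the frequency and severity parameters are assumptions), one can approximate the distribution of a compound sum S = X_1 + ... + X_N directly:

```python
import numpy as np

# Approximate the distribution of S = X_1 + ... + X_N with Poisson claim
# counts and lognormal severities by brute-force simulation. This gives a
# baseline for comparison, not the maxentropic Laplace inversion itself.
rng = np.random.default_rng(1)
lam, mu, sigma, n_sims = 3.0, 0.0, 1.0, 50_000

counts = rng.poisson(lam, n_sims)                        # N for each scenario
totals = np.array([rng.lognormal(mu, sigma, c).sum() for c in counts])

print(f"mean aggregate loss: {totals.mean():.3f}")
print(f"99% quantile:        {np.quantile(totals, 0.99):.3f}")
```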
This introductory textbook for business statistics teaches statistical analysis and research methods via business case studies and financial data, using Excel, Minitab, and SAS. Every chapter engages the reader with data on individual stocks, stock indices, options, and futures. We study and use statistics to learn how to analyze and understand a data set of particular interest. Among the more popular programs developed for statistical and computational analysis of data sets are SAS, SPSS, and Minitab; of those, we look at Minitab and SAS in this textbook. One of the main reasons to use Minitab is that it is the easiest to use among the popular statistical programs. We look at SAS because it is the leading statistical package used in industry. We also utilize the much less costly and ubiquitous Microsoft Excel to do statistical analysis, as the benefits of Excel have become widely recognized in the academic world and its analytical capabilities extend to about 90 percent of the statistical analysis done in the business world. We demonstrate much of our statistical analysis using Excel and double-check the analysis and outcomes using Minitab and SAS, which are also helpful for some analytical methods not possible or practical to do in Excel.
This text takes an integrated approach and places its emphasis on modeling and application rather than on statistical technique. This emphasis allows readers to learn how to solve business problems, not mathematical equations, and prepares them for their role as decision makers. All models and analyses in the book use Excel, so readers can make decisions without having to complete difficult calculations. The book is also accompanied by KaddStat, an easy-to-use add-in for Excel that makes it easier to run complex statistical tests in Excel.
This book discusses the problem of model choice when the statistical models are separate, also called non-nested. Chapter 1 provides an introduction, motivating examples and a general overview of the problem. Chapter 2 presents the classical or frequentist approach to the problem, as well as several alternative procedures and their properties. Chapter 3 explores the Bayesian approach, the limitations of the classical Bayes factors and the alternative Bayes factors proposed to overcome these limitations; it also discusses a Bayesian significance procedure. Lastly, Chapter 4 examines the pure likelihood approach. Various real-data examples and computer simulations are provided throughout the text.
This volume compares the strategic properties of the leading macroeconometric models of the United States. It summarizes the work of an ongoing seminar supported by the National Science Foundation and chaired by Lawrence R. Klein of the University of Pennsylvania; the seminar meets three times annually. Comparisons are made across models for such characteristics as conventional multipliers (fiscal, monetary, and supply-side shocks), the J-curve response to dollar depreciation, and forecast performance under consistent assumptions. There are in-depth comparisons of some models and an investigation of the use of high-frequency information to improve forecasts, as well as analyses of the sources of forecast error. The core structures of the models, especially their IS-LM cores, are compared. The volume also contains one chapter comparing models of different developing countries. In addition to the contributions by participating model builders who meet regularly, the book contains critical appraisals by outsiders. The contributors include many economists distinguished in model use and analysis; many operate the country's best-known modelling facilities. The introduction was written by Lawrence R. Klein, winner of the 1980 Nobel Prize in Economic Science for his work in the construction and use of econometric models.
This book presents a novel approach to time series econometrics, which studies the behavior of nonlinear stochastic processes. This approach allows for an arbitrary dependence structure in the increments and thus generalizes the standard independent-increments assumption of classical linear time series models. The book offers a solution to the problem of a general semiparametric approach, given by a concept called C-convolution (convolution of dependent variables) and the corresponding theory of convolution-based copulas. Intended for econometrics and statistics scholars with a special interest in time series analysis and copula functions (or other nonparametric approaches), the book is also useful for doctoral students with a basic knowledge of copula functions who want to learn about the latest research developments in the field.
This book provides advanced theoretical and applied tools for the implementation of modern micro-econometric techniques in evidence-based program evaluation for the social sciences. The author presents a comprehensive toolbox for designing rigorous and effective ex-post program evaluation using the statistical software package Stata. For each method, a statistical presentation is developed, followed by a practical estimation of the treatment effects. By using both real and simulated data, readers will become familiar with evaluation techniques such as regression adjustment, matching, difference-in-differences, instrumental variables and regression discontinuity design, and are given practical guidelines for selecting and applying suitable methods for specific policy contexts.
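As a minimal sketch of one of these techniques, difference-in-differences, in Python rather than the book's Stata (the simulated data and the assumed treatment effect of 2.0 are purely illustrative):

```python
import numpy as np

# Difference-in-differences: the estimate is the change in the treated
# group's mean outcome minus the change in the control group's. The data
# are simulated under a common trend of +1 and an assumed effect of 2.0.
rng = np.random.default_rng(3)
effect = 2.0
treat_pre  = rng.normal(5.0, 1.0, 500)
treat_post = rng.normal(6.0 + effect, 1.0, 500)   # trend +1, plus effect
ctrl_pre   = rng.normal(4.0, 1.0, 500)
ctrl_post  = rng.normal(5.0, 1.0, 500)            # trend +1 only

did = (treat_post.mean() - treat_pre.mean()) - (ctrl_post.mean() - ctrl_pre.mean())
print(f"difference-in-differences estimate: {did:.3f}  (true effect {effect})")
```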
This book examines conventional time series in the context of stationary data before moving to a discussion of cointegration, with a focus on multivariate models. The authors provide a detailed and extensive study of impulse responses and forecasting in the stationary and non-stationary contexts, considering small-sample corrections, volatility and the impact of different orders of integration. Models with expectations are considered, along with alternative methods such as Singular Spectrum Analysis (SSA), the Kalman filter and structural time series, all in relation to cointegration. Using single-equation methods to develop topics, and as examples of the notion of cointegration, Burke, Hunter, and Canepa provide direction and guidance through the now vast literature facing students and graduate economists.
This book is an introduction to regression analysis, focusing on the practicalities of doing regression analysis on real-life data. Unlike other textbooks on regression, this book is based on the idea that you do not necessarily need to know much about statistics and mathematics to get a firm grip on regression and perform it to perfection. This non-technical point of departure is complemented by practical examples of real-life data analysis using statistics software such as Stata, R and SPSS. Parts 1 and 2 of the book cover the basics, such as simple linear regression, multiple linear regression, how to interpret the output from statistics programs, significance testing and the key regression assumptions. Part 3 deals with how to practically handle violations of the classical linear regression assumptions, regression modeling for categorical y-variables and instrumental variable (IV) regression. Part 4 puts the various purposes of, or motivations for, regression into the wider context of writing a scholarly report and points to some extensions to related statistical techniques. This book is written primarily for those who need to do regression analysis in practice, and not only to understand how this method works in theory. The book's accessible approach is recommended for students from across the social sciences.