Nonlinear and nonnormal filters are introduced and developed. Traditional nonlinear filters such as the extended Kalman filter and the Gaussian sum filter give biased filtering estimates, and therefore several nonlinear and nonnormal filters have been derived from the underlying probability density functions. The density-based nonlinear filters introduced in this book utilize numerical integration or Monte Carlo integration with importance sampling or rejection sampling, and the resulting filtering estimates are asymptotically unbiased and efficient. All the nonlinear filters are compared in Monte Carlo simulation studies. Finally, as an empirical application, consumption functions based on the rational expectations model are estimated with the nonlinear filters, and the US, UK and Japanese economies are compared.
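The density-based idea this blurb describes can be illustrated with a minimal sequential Monte Carlo (bootstrap particle) filter, a close relative of the importance-sampling filters mentioned above. The state-space model below is a standard nonlinear benchmark chosen purely for illustration, not a model taken from the book:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(T, q=1.0, r=1.0):
    """Simulate a classic nonlinear benchmark state-space model."""
    x = np.zeros(T)
    y = np.zeros(T)
    for t in range(1, T):
        x[t] = (0.5 * x[t-1] + 25 * x[t-1] / (1 + x[t-1]**2)
                + 8 * np.cos(1.2 * t) + rng.normal(0, np.sqrt(q)))
        y[t] = x[t]**2 / 20 + rng.normal(0, np.sqrt(r))
    return x, y

def bootstrap_filter(y, n=2000, q=1.0, r=1.0):
    """Bootstrap particle filter: propagate particles through the transition
    density, weight them by the observation density, then resample."""
    T = len(y)
    est = np.zeros(T)
    p = rng.normal(0, 1, n)                    # initial particle cloud
    for t in range(1, T):
        p = (0.5 * p + 25 * p / (1 + p**2)
             + 8 * np.cos(1.2 * t) + rng.normal(0, np.sqrt(q), n))
        w = np.exp(-0.5 * (y[t] - p**2 / 20)**2 / r) + 1e-300
        w /= w.sum()                            # normalized importance weights
        est[t] = w @ p                          # posterior-mean filtering estimate
        p = rng.choice(p, size=n, p=w)          # multinomial resampling
    return est

x, y = simulate(100)
xhat = bootstrap_filter(y)
rmse = float(np.sqrt(np.mean((x[1:] - xhat[1:])**2)))
```

Because the observation depends on the squared state, the filtering density is often bimodal here, which is exactly the situation where density-based filters outperform the extended Kalman filter.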
This book/software package divulges the combined knowledge of a whole international community of Mathematica users - from the fields of economics, finance, investments, quantitative business and operations research. The 23 contributors - all experts in their fields - take full advantage of the latest updates of Mathematica in their presentations and equip both current and prospective users with tools for professional, research and educational projects. The real-world and self-contained models provided are applicable to an extensive range of contemporary problems. The DOS disk contains Notebooks and packages which are also available online from the TELOS site.
This book contains some of the results from the research project "Demand for Food in the Nordic Countries," which was initiated in 1988 by Professor Olof Bolin of the Agricultural University in Ultuna, Sweden, and by Professor Karl Johan Weckman of the University of Helsinki, Finland. A pilot study was carried out by Bengt Assarsson, which in 1989 led to a successful application for a research grant from the NKJ (the Nordic Contact Body for Agricultural Research) through the national research councils for agricultural research in Denmark, Finland, Norway and Sweden. We are very grateful to Olof Bolin and Karl Johan Weckman, without whom this project would not have come about, and to the national research councils in the Nordic countries for the generous financial support we have received for this project. We have received comments and suggestions from many colleagues, and this has improved our work substantially. At the start of the project a reference group was formed, consisting of Professor Olof Bolin, Professor Anders Klevmarken, Agr. lic. Gert Aage Nielsen, Professor Karl Johan Weckman and Cand. oecon. Per Halvor Vale. Gert Aage Nielsen left the group early in the project for a position in Landbanken, and was replaced by Professor Lars Otto, while Per Halvor Vale soon joined the research staff. The reference group has given us useful suggestions and encouraged us in our work. We are very grateful to them.
Studies in Global Econometrics is a collection of essays on the use of cross-country data based on purchasing power parities. The two major applications are the development over time of per capita gross domestic products (including that of their inequalities among countries and regions) and the fitting of cross-country demand equations for broad groups of consumer goods. The introductory chapter provides highlights of the author's work relating to these developments. One of the main topics of the work is a system of demand equations for broad groups of consumer goods fitted by means of cross-country data. These data are from the International Comparison Program, which provides PPP-based figures for a number of years and countries. Similar data are used for the measurement of the dispersion of national per capita incomes between and within seven geographic regions.
A new approach to explaining the existence of firms and markets, focusing on variability and coordination. It stands in contrast to the emphasis on transaction costs, and on monitoring and incentive structures, which is prominent in most of the modern literature in this field. This approach, called the variability approach, allows us to: show why both the need for communication and the coordination costs increase when the division of labor increases; explain why the firm relies on direction while the market does not; rigorously formulate the optimum divisionalization problem; better understand the relationship between technology and organization; show why the size of the firm is limited; and refine the analysis of whether the existence of a sharable input or the presence of an external effect leads to the emergence of a firm. The book provides a wealth of insights for students and professionals in economics, business, law and organization.
Each chapter of Macroeconometrics is written by respected econometricians in order to provide useful information and perspectives for those who wish to apply econometrics in macroeconomics. The chapters are all written with clear methodological perspectives, making the virtues and limitations of particular econometric approaches accessible to a general readership familiar with applied macroeconomics. The real tensions in macroeconometrics are revealed by the critical comments that follow each chapter, contributed by econometricians with alternative perspectives.
A non-technical introduction to modeling with time-varying parameters, using the beta coefficient from financial economics as the main example. After a brief introduction to this coefficient for those not versed in finance, the book presents a number of well known tests for constant coefficients and then performs these tests on data from the Stockholm Exchange. The Kalman filter is then introduced, and a simple example is used to demonstrate the power of the filter. The filter is then used to estimate the market model with time-varying betas. The book concludes with further examples of how the Kalman filter may be used in models for analyzing other aspects of finance. Since both the programs and the data used in the book are available for downloading, the book is especially valuable for students and other researchers interested in learning the art of modeling with time-varying coefficients.
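As a rough sketch of the approach the blurb describes, here is a scalar Kalman filter tracking a random-walk beta in a simulated market model. The model, parameter values and noise variances are illustrative assumptions, not the book's data or programs:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated market model with a slowly drifting beta (all values illustrative)
T = 500
m = rng.normal(0, 1, T)                            # market returns
beta = 1.0 + np.cumsum(rng.normal(0, 0.02, T))     # true random-walk beta
r = beta * m + rng.normal(0, 0.1, T)               # asset returns, alpha = 0

def kalman_beta(r, m, q=0.02**2, h=0.1**2):
    """Scalar Kalman filter for r_t = beta_t * m_t + e_t,
    with beta_t = beta_{t-1} + v_t (random-walk coefficient)."""
    b, P = 1.0, 1.0                                # prior mean and variance
    path = np.empty(len(r))
    for t in range(len(r)):
        P += q                                     # time update (predict)
        K = P * m[t] / (m[t]**2 * P + h)           # Kalman gain
        b += K * (r[t] - b * m[t])                 # measurement update
        P *= 1 - K * m[t]
        path[t] = b
    return path

bhat = kalman_beta(r, m)
err = float(np.mean(np.abs(bhat[50:] - beta[50:])))  # skip the burn-in period
```

A constant-coefficient OLS regression on the same data would return a single averaged beta; the filter instead recovers the whole time path.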
Capital theory is a cornerstone of modern economics. Its ideas are fundamental for dynamic equilibrium theory, and its concepts are applied in many branches of economics such as game theory and resource and environmental economics, although this may not be recognized at first glance. In this monograph, an approach is presented which allows one to derive important results of capital theory in a coherent and readily accessible framework. Special emphasis is given to infinite horizon and overlapping generations economies. Irreversibility of time, or the failure of the market system, appears in a different light if an infinite horizon framework is applied. To bridge the gap between pure and applied economic theory, the structure of our theoretical approach is integrated into a computable general equilibrium model.
1.1 Introduction. In economics, one often observes time series that exhibit different patterns of qualitative behavior, both regular and irregular, symmetric and asymmetric. There exist two different perspectives to explain this kind of behavior within the framework of a dynamical model. The traditional belief is that the time evolution of the series can be explained by a linear dynamic model that is exogenously disturbed by a stochastic process. In that case, the observed irregular behavior is explained by the influence of external random shocks which do not necessarily have an economic reason. A more recent theory has evolved in economics that attributes the patterns of change in economic time series to an underlying nonlinear structure, which means that fluctuations can as well be caused endogenously by the influence of market forces, preference relations, or technological progress. One of the main reasons why nonlinear dynamic models are so interesting to economists is that they are able to produce a great variety of possible dynamic outcomes - from regular predictable behavior to the most complex irregular behavior - rich enough to meet the economists' objectives of modeling. The traditional linear models can only capture a limited number of possible dynamic phenomena, which are basically convergence to an equilibrium point, steady oscillations, and unbounded divergence. In any case, for a linear system one can write down exactly the solutions to a set of differential or difference equations and classify them.
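The contrast drawn here can be made concrete in a few lines: a linear AR(1) process looks irregular only because of the exogenous shocks feeding it, while a deterministic logistic map generates irregular series with no randomness at all. The parameter values below are illustrative, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 200

# 1) Linear stochastic view: an AR(1) model whose irregularity comes
#    entirely from the exogenous random shocks.
ar = np.zeros(T)
for t in range(1, T):
    ar[t] = 0.8 * ar[t-1] + rng.normal()

# 2) Nonlinear deterministic view: the logistic map at parameter 4
#    produces irregular, aperiodic series with no randomness whatsoever.
x = np.empty(T)
x[0] = 0.2
for t in range(1, T):
    x[t] = 4.0 * x[t-1] * (1.0 - x[t-1])

# Despite being fully deterministic, the chaotic series is nearly
# uncorrelated at lag 1, much like a purely random sequence.
acf1 = float(np.corrcoef(x[:-1], x[1:])[0, 1])
```

That a fully deterministic series can mimic noise in its correlations is precisely why distinguishing the two perspectives empirically is hard.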
Migration, commuting, and tourism are prominent phenomena demonstrating the political and economic relevance of the spatial choice behavior of households. The identification of the determinants and effects of the households' location choice is necessary for both entrepreneurial and policy planners who attempt to predict (or regulate) the future demand for location-specific commodities, such as infrastructure, land, or housing, and the supply of labor. Microeconomic studies of the spatial behavior of individuals have typically focused upon the demand for a single, homogeneous, yet location-specific commodity (such as land or housing) or their supply of labor, and investigated the formation of location-specific prices and wages in the presence of transportation and migration costs, or analyzed the individual- and location-specific characteristics triggering spatial rather than quantitative or temporal adjustments. In contrast to many theoretical analyses, empirical studies of the causes or consequences of individual demand for location-specific commodities have often considered several "brands" of a heterogeneous good that are offered at various locations, are perfect substitutes, and may be produced by varying production technologies. 1. Cf. Alonso (1964). 2. Cf. Muth (1969). 3. Cf. Sjaastad (1962) and Greenwood (1975).
The papers collected in this volume are contributions to the T.I.Tech./K.E.S. Conference on Nonlinear and Convex Analysis in Economic Theory, which was held at Keio University, July 2-4, 1993. The conference was organized by the Tokyo Institute of Technology (T.I.Tech.) and the Keio Economic Society (K.E.S.), and supported by Nihon Keizai Shimbun Inc. Many economic problems can be formulated as constrained optimizations and equilibrations of their solutions. Nonlinear-convex analysis has been supplying economists with indispensable mathematical machinery for these problems arising in economic theory. Conversely, mathematicians working in this discipline of analysis have been stimulated by various mathematical difficulties raised by economic theories. Although our special emphasis was laid upon "nonlinearity" and "convexity" in relation to economic theories, we also incorporated stochastic aspects of financial economics in our project, taking account of the remarkably rapid growth of this discipline during the last decade. The conference was designed to bring together those mathematicians who were seriously interested in getting new challenging stimuli from economic theories with those economists who were seeking effective mathematical weapons for their research. Thirty invited talks (six of them plenary talks) given at the conference were roughly classified under the following six headings: 1) Nonlinear Dynamical Systems and Business Fluctuations, 2) Fixed Point Theory, 3) Convex Analysis and Optimization, 4) Eigenvalue of Positive Operators, 5) Stochastic Analysis and Financial Market, 6) General Equilibrium Analysis.
Simplicial methods, a comparatively new type of technique, have yielded extremely important contributions toward the solution of systems of nonlinear equations. Theoretical investigations and numerical tests have shown that the performance of simplicial methods depends critically on the triangulations underlying them. This monograph describes some recent developments in triangulations and simplicial methods, including the D1-triangulation and its applications to simplicial methods. As a result, the efficiency of simplicial methods has been improved significantly, and more effective simplicial methods have been developed.
As large physical capital stock projects need long periods to be built, a time-to-build specification is incorporated in factor demand models. Time-to-build and adjustment cost dynamics can be identified separately, since the former induces moving average dynamics whereas the latter induces autoregressive dynamics. Empirical evidence for time-to-build is obtained from data on the Dutch construction industry and from estimation results for the manufacturing industries of six OECD countries.
This study is a revised version of my doctoral dissertation at the Economics Department of the University of Munich. I want to take the opportunity to express my gratitude to some people who have helped me in my work. My greatest thanks go to the supervisor of this dissertation, Professor Claude Hillinger. His ideas have formed the basis of my work. He permanently supported it with a host of ideas, criticism and encouragement. Furthermore, he provided a stimulating research environment at SEMECON. This study would not have been possible in this form without the help of my present and former colleagues at SEMECON. I am indebted to Rudolf Kohne-Volland, Monika Sebold-Bender and Ulrich Woitek for providing software and guidance for the data analysis. Discussions with them and with Thilo Weser have helped me to take many hurdles, particularly in the early stages of the project. My sincere thanks go to them all. I had the opportunity to present a former version of my growth model at a workshop of Professor Klaus Zimmermann. I want to thank all the participants for their helpful comments. I also acknowledge critical and constructive comments from an anonymous referee. Table of Contents: Introduction (1). Part I. Methodology: 1. Importance of Stylized Facts (9); 1.1 Limitations of statistical testing (9); 1.2 Evaluating economic models (11); 2. Further Methodological Issues (13).
This book brings together a wide range of topics and perspectives in the growing field of Classification and related methods of Exploratory and Multivariate Data Analysis. It gives a broad view of the state of the art, useful for those in the scientific community who gather data and seek tools for analyzing and interpreting large sets of data. As it presents a wide field of applications, this book is not only of interest for data analysts, mathematicians and statisticians, but also for scientists from many areas and disciplines concerned with real data, e.g., medicine, biology, astronomy, image analysis, pattern recognition, social sciences, psychology, marketing, etc. It contains 79 invited or selected and refereed papers presented during the Fourth Biennial Conference of the International Federation of Classification Societies (IFCS'93) held in Paris. Previous conferences were held at Aachen (Germany), Charlottesville (USA) and Edinburgh (UK). The conference at Paris emerged from the close cooperation between the eight members of the IFCS: British Classification Society (BCS), Classification Society of North America (CSNA), Gesellschaft fur Klassifikation (GfKl), Japanese Classification Society (JCS), Jugoslovenska Sekcija za Klasifikacije (JSK), Societe Francophone de Classification (SFC), Societa Italiana di Statistica (SIS), Vereniging voor Ordinatie en Classificatie (VOC), and was organized by INRIA ("Institut National de Recherche en Informatique et en Automatique"), Rocquencourt, and the "Ecole Nationale Superieure des Telecommunications," Paris.
'An authoritative survey with exciting new insights of special interest to economists and econometricians who analyse intertemporal and interspatial price relationships.' - Professor Angus Maddison, Groningen University. This book presents a comprehensive review of recent developments in the theory and construction of index numbers using the stochastic approach, demonstrating the versatility of this approach in handling various index number problems within a single conceptual framework. It also contains a brief, but complete, review of the existing approaches to index numbers with illustrative numerical examples. The stochastic approach considers the index number problem as a signal extraction problem. The strength and reliability of the signal extracted from price and quantity changes for different commodities depends upon the messages received and the information content of the messages. The most important applications of the new approach are to be found in the context of measuring the rate of inflation, and in fixed and chain base index numbers for temporal comparisons and for spatial intercountry comparisons; the latter generally require special index number formulae that result in transitive and base invariant comparisons.
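The signal-extraction idea can be sketched in its simplest form: treat the log price relatives of individual commodities as noisy observations of one common inflation rate, so their mean is the extracted signal and its standard error measures the signal's reliability. The prices below are made-up numbers for illustration, not data from the book:

```python
import numpy as np

# Hypothetical base- and current-period prices for four commodities
p0 = np.array([2.00, 5.00, 1.20, 30.00])
p1 = np.array([2.20, 5.40, 1.30, 33.00])

# Stochastic approach: each log price relative = common inflation signal + noise
d = np.log(p1 / p0)
rate = float(d.mean())                        # extracted signal: common inflation rate
se = float(d.std(ddof=1) / np.sqrt(len(d)))   # reliability of the extracted signal
```

Unlike a fixed-formula index, this framing delivers a standard error alongside the index number, which is the distinctive payoff of the stochastic approach.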
Two central problems in the pure theory of economic growth are analysed in this monograph: 1) the dynamic laws governing the economic growth processes, and 2) the kinematic and geometric properties of the set of solutions to the dynamic systems. With allegiance to rigor and an emphasis on the theoretical fundamentals of prototype mathematical growth models, the treatise is written in the theorem-proof style. To keep the exposition orderly and as smooth as possible, the economic analysis has been separated from the purely mathematical issues, and hence the monograph is organized in two books. Regarding the scope and content of the two books, an "Introduction and Overview" has been prepared to offer both motivation and a brief account. The introduction is especially designed to give a recapitulation of the mathematical theory and results presented in Book II, which are used as the unifying mathematical framework in the analysis and exposition of the different economic growth models in Book I. Economists would probably prefer to go directly to Book I and proceed by consulting the mathematical theorems of Book II in confirming the economic theorems in Book I. Thereby, both the independence and interdependence of the economic and mathematical argumentations are respected.
'This most commendable volume brings together a set of papers which permits ready access to the means of estimating quantitative relationships using cointegration and error correction procedures. Providing the data to show fully the basis for calculation, this approach is an excellent perception of the needs of senior undergraduates and graduate students.' - Professor W.P. Hogan, The University of Sydney. Applied economists with a modest econometric background are now desperately looking for expository literature on unit roots and cointegration techniques. This volume of expository essays is written for them. It explains in a simple style various tests for the existence of unit roots and how to estimate cointegration relationships. Original data are given to enable easy replication. Limitations of some existing unit root tests are also discussed.
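For readers who want the flavor of such tests, here is a bare-bones Dickey-Fuller regression coded from scratch. It is a generic illustration, not one of the book's own examples; a real application would add augmented lags and compare against proper Dickey-Fuller critical values:

```python
import numpy as np

rng = np.random.default_rng(3)

def df_tstat(y):
    """t-statistic of rho in the Dickey-Fuller regression
    dy_t = a + rho * y_{t-1} + e_t.  Under the unit-root null rho = 0,
    and the statistic follows the non-standard Dickey-Fuller distribution."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    coef, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ coef
    s2 = resid @ resid / (len(dy) - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return float(coef[1] / np.sqrt(cov[1, 1]))

rw = np.cumsum(rng.normal(size=500))     # random walk: has a unit root
stat_rw = df_tstat(rw)                   # typically above the 5% critical value (-2.86)

ar = np.zeros(500)                       # stationary AR(1): no unit root
for t in range(1, 500):
    ar[t] = 0.5 * ar[t-1] + rng.normal()
stat_ar = df_tstat(ar)                   # strongly negative: unit root rejected
```

The key subtlety the essays explain is that the t-statistic here cannot be compared to the usual Student-t tables under the null of a unit root.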
JEAN-FRANÇOIS MERTENS This book presents a systematic exposition of the use of game theoretic methods in general equilibrium analysis. Clearly the first such use was by Arrow and Debreu, with the "birth" of general equilibrium theory itself, in using Nash's existence theorem (or a generalization) to prove the existence of a competitive equilibrium. But this use appeared possibly to be merely technical, borrowing some tools for proving a theorem. This book stresses the later contributions, where game theoretic concepts were used as such, to explain various aspects of the general equilibrium model. But clearly, each of those later approaches also provides per se a game theoretic proof of the existence of competitive equilibrium. Part A deals with the first such approach: the equality between the set of competitive equilibria of a perfectly competitive (i.e., every trader has negligible market power) economy and the core of the corresponding cooperative game.
New Directions in Computational Economics brings together for the first time a diverse selection of papers, sharing the underlying theme of application of computing technology as a tool for achieving solutions to realistic problems in computational economics and related areas in the environmental, ecological and energy fields. Part I of the volume addresses experimental and computational issues in auction mechanisms, including a survey of recent results for sealed bid auctions. The second contribution uses neural networks as the basis for estimating bid functions for first price sealed bid auctions. Also presented is the 'smart market' computational mechanism which better matches bids and offers for natural gas. Part II consists of papers that formulate and solve models of economic systems. Amman and Kendrick's paper deals with control models and the computational difficulties that result from nonconvexities. Using goal programming, Nagurney, Thore and Pan formulate spatial resource allocation models to analyze various policy issues. Thompson and Thrall next present a rigorous mathematical analysis of the relationship between efficiency and profitability. The problem of matching uncertain streams of assets and liabilities is solved using stochastic optimization techniques in the following paper in this section. Finally, Part III applies economic concepts to issues in computer science in addition to using computational techniques to solve economic models.
Many models in this volume can be used in solving portfolio problems, in assessing forecasts, and in understanding the possible effects of shocks and disturbances.
This book consists of four parts: I. Labour demand and supply; II. Productivity slowdown and innovative activity; III. Disequilibrium and business cycle analysis; and IV. Time series analysis of output and employment. It presents a fine selection of articles in the growing field of the empirical analysis of output and employment fluctuations, with applications in a micro-econometric or a time-series framework. The time-series literature has recently emphasized careful testing for stationarity and nonlinearity in the data, and the importance of cointegration theory. A substantial part of the papers make use of parametric and non-parametric methods developed in this literature and mostly connect their results to the hysteresis discussion about the existence of fragile equilibria. A second set of macro approaches uses the disequilibrium framework that has found so much interest in Europe in recent years. The other papers use newly developed methods for microdata, especially qualitative data or limited dependent variables, to study microeconomic models of behaviour that explain labour market and output decisions.
1.1 Integrating results. The empirical study of macroeconomic time series is interesting. It is also difficult and not immediately rewarding. Many statistical and economic issues are involved. The main problem is that these issues are so interrelated that it does not seem sensible to address them one at a time. As soon as one sets about the making of a model of macroeconomic time series, one has to choose which problems one will try to tackle oneself and which problems one will leave unresolved or to be solved by others. From a theoretical point of view it can be fruitful to concentrate on only one problem. If one follows this strategy in empirical application, one runs a serious risk of making a seemingly interesting model that is just a corollary of some important mistake in the handling of other problems. Two well known examples of statistical artifacts are the finding of Kuznets' "pseudo-waves" of about 20 years in economic activity (Sargent (1979, p. 248)) and the "spurious regression" of macroeconomic time series described in Granger and Newbold (1986, 6.4). The easiest way to get away with possible mistakes is to admit they may be there in the first place, but that time constraints and unfamiliarity with the solution do not allow the researcher to do something about them. This can be a viable argument.
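The "spurious regression" phenomenon cited from Granger and Newbold is easy to reproduce: regressing one random walk on a statistically independent one rejects the no-relationship null far more often than the nominal 5% rate. The simulation below is a generic illustration with arbitrary sample sizes, not an example from the text:

```python
import numpy as np

rng = np.random.default_rng(4)

def spurious_tstat(T):
    """OLS t-statistic from regressing one pure random walk on another,
    statistically independent, random walk."""
    x = np.cumsum(rng.normal(size=T))
    y = np.cumsum(rng.normal(size=T))
    X = np.column_stack([np.ones(T), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    s2 = resid @ resid / (T - 2)
    return float(coef[1] / np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1]))

# Even though the two series are unrelated, conventional t-tests "find"
# a significant relationship far more often than the nominal 5% rate.
tstats = np.array([spurious_tstat(200) for _ in range(200)])
reject = float(np.mean(np.abs(tstats) > 1.96))
```

This is exactly the kind of statistical artifact the passage warns about: an apparently interesting model that is a corollary of mishandling nonstationarity.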