Two central problems in the pure theory of economic growth are analysed in this monograph: (1) the dynamic laws governing the economic growth processes, and (2) the kinematic and geometric properties of the set of solutions to the dynamic systems. With allegiance to rigor and an emphasis on the theoretical fundamentals of prototype mathematical growth models, the treatise is written in the theorem-proof style. To keep the exposition orderly and as smooth as possible, the economic analysis has been separated from the purely mathematical issues, and hence the monograph is organized in two books. Regarding the scope and content of the two books, an "Introduction and Overview" has been prepared to offer both motivation and a brief account. The introduction is especially designed to give a recapitulation of the mathematical theory and results presented in Book II, which are used as the unifying mathematical framework in the analysis and exposition of the different economic growth models in Book I. Economists would probably prefer to go directly to Book I and proceed by consulting the mathematical theorems of Book II in confirming the economic theorems in Book I. Thereby, both the independence and interdependence of the economic and mathematical argumentations are respected.
Gini's mean difference (GMD) was first introduced by Corrado Gini in 1912 as an alternative measure of variability. GMD and the parameters derived from it (such as the Gini coefficient or the concentration ratio) have been in use in the area of income distribution for almost a century. In practice, the use of GMD as a measure of variability is justified whenever the investigator is not ready to impose, without questioning, the convenient world of normality. This makes the GMD of critical importance in the complex research of statisticians, economists, econometricians, and policy makers. This book focuses on imitating analyses that are based on the variance by replacing the variance with the GMD and its variants. In this way, the text showcases how almost everything that can be done with the variance as a measure of variability can be replicated by using Gini. Beyond this, there are marked benefits to using Gini as opposed to other methods. One of the advantages of the Gini methodology is that it provides a unified system that enables the user to learn about various aspects of the underlying distribution. It also provides a systematic method and a unified terminology. Using the Gini methodology can reduce the risk of imposing on the model assumptions that are not supported by the data. With these benefits in mind, the text uses the covariance-based approach, though applications to other approaches are mentioned as well.
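To make the definition concrete, the sketch below (hypothetical data; the function names are illustrative, not from the book) computes GMD both directly as the mean absolute difference over all pairs of observations and via an equivalent order-statistics form:

```python
from itertools import combinations

def gmd_pairwise(xs):
    """Gini's mean difference: mean absolute difference over all ordered pairs."""
    n = len(xs)
    return 2 * sum(abs(a - b) for a, b in combinations(xs, 2)) / (n * (n - 1))

def gmd_sorted(xs):
    """Equivalent O(n log n) form based on order statistics."""
    n = len(xs)
    return 2 * sum((2 * i - n - 1) * x
                   for i, x in enumerate(sorted(xs), start=1)) / (n * (n - 1))

# Both forms agree; for [1, 2, 4] the mean absolute pairwise difference is 2.0.
data = [1, 2, 4]
```

Unlike the variance, GMD penalizes spread linearly rather than quadratically, which is one reason it is less sensitive to departures from normality.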
Financial Asset Pricing Theory offers a comprehensive overview of the classic and the current research in theoretical asset pricing. Asset pricing is developed around the concept of a state-price deflator which relates the price of any asset to its future (risky) dividends and thus incorporates how to adjust for both time and risk in asset valuation. The willingness of any utility-maximizing investor to shift consumption over time defines a state-price deflator which provides a link between optimal consumption and asset prices that leads to the Consumption-based Capital Asset Pricing Model (CCAPM). A simple version of the CCAPM cannot explain various stylized asset pricing facts, but these asset pricing 'puzzles' can be resolved by a number of recent extensions involving habit formation, recursive utility, multiple consumption goods, and long-run consumption risks. Other valuation techniques and modelling approaches (such as factor models, term structure models, risk-neutral valuation, and option pricing models) are explained and related to state-price deflators. The book will serve as a textbook for an advanced course in theoretical financial economics in a PhD or a quantitative Master of Science program. It will also be a useful reference book for researchers and finance professionals. The presentation in the book balances formal mathematical modelling and economic intuition and understanding. Both discrete-time and continuous-time models are covered. The necessary concepts and techniques concerning stochastic processes are carefully explained in a separate chapter so that only limited previous exposure to dynamic finance models is required.
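The pricing rule described above can be illustrated with a minimal one-period, two-state sketch: the price of any asset is the expectation of the state-price deflator times the payoff. All numbers below are hypothetical, not taken from the book:

```python
# One-period, two-state illustration: an asset's price is E[deflator * payoff].
# All figures are hypothetical.
probs    = [0.5, 0.5]     # physical probabilities of the two states
deflator = [1.05, 0.85]   # state-price deflator: higher in the "bad" state
payoff   = [90.0, 110.0]  # risky dividend paid in each state

price_risky = sum(p * z * x for p, z, x in zip(probs, deflator, payoff))
price_bond = sum(p * z for p, z in zip(probs, deflator))  # pays 1 in every state
risk_free_rate = 1.0 / price_bond - 1.0  # riskless rate implied by the deflator
```

Because the deflator is larger in the bad state, the risky asset is priced below its discounted expected dividend: that wedge is the risk adjustment the state-price deflator encodes.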
This book contains a systematic analysis of allocation rules related to cost and surplus sharing problems. Broadly speaking, it examines various types of rules for allocating a common monetary value (cost) between individual members of a group (or network) when the characteristics of the problem are somehow objectively given. Without being an advanced text it offers a comprehensive mathematical analysis of a series of well-known allocation rules. The aim is to provide an overview and synthesis of current knowledge concerning cost and surplus sharing methods. The text is accompanied by a description of several practical cases and numerous examples designed to make the theoretical results easily comprehensible for both students and practitioners alike. The book is based on a series of lectures given at the University of Copenhagen and Copenhagen Business School for graduate students joining the math/econ program. I am indebted to numerous colleagues, conference participants and students who during the years have shaped my approach and interests through collaboration, comments and questions that were greatly inspiring. In particular, I would like to thank Hans Keiding, Maurice Koster, Tobias Markeprand, Juan D. Moreno-Ternero, Hervé Moulin, Bezalel Peleg, Lars Thorlund-Petersen, Jørgen Tind, Mich Tvede and Lars Peter Østerdal.
This is an unusual book because it contains a great many formulas, and hence it is a blend of monograph, textbook, and handbook. It is intended for students and researchers who need quick access to useful formulas appearing in the linear regression model and related matrix theory. This is not a regular textbook; it is supporting material for courses given in linear statistical models. Such courses are extremely common at universities with quantitative statistical analysis programs.
Many economic problems can be formulated as constrained optimizations and equilibration of their solutions. Various mathematical theories have supplied economists with indispensable machinery for these problems arising in economic theory. Conversely, mathematicians have been stimulated by various mathematical difficulties raised by economic theories. The series is designed to bring together those mathematicians who are seriously interested in getting new challenging stimuli from economic theories with those economists who are seeking effective mathematical tools for their research.
Stochastic Averaging and Extremum Seeking treats methods inspired by attempts to understand the seemingly non-mathematical question of bacterial chemotaxis, and their application in other environments. The text presents significant generalizations of existing stochastic averaging theory, developed from scratch and necessitated by the need to avoid violating previous theoretical assumptions with algorithms that are otherwise effective in treating these systems. Coverage is given to four main topics. Stochastic averaging theorems are developed for the analysis of continuous-time nonlinear systems with random forcing, removing prior restrictions on nonlinearity growth and on the finiteness of the time interval. The new stochastic averaging theorems are usable not only as approximation tools but also for providing stability guarantees. Stochastic extremum-seeking algorithms are introduced for optimization of systems without available models. Both gradient- and Newton-based algorithms are presented, offering the user the choice between simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms for non-cooperative/adversarial games is described, and an analysis of their convergence to Nash equilibria is provided. The algorithms are illustrated on models of economic competition and on problems of the deployment of teams of robotic vehicles. Bacterial locomotion, such as chemotaxis in E. coli, is explored with the aim of identifying two simple feedback laws for climbing nutrient gradients. Stochastic extremum seeking is shown to be a biologically plausible interpretation of chemotaxis. For the same chemotaxis-inspired stochastic feedback laws, the book also provides a detailed analysis of convergence for models of nonholonomic robotic vehicles operating in GPS-denied environments.
The book contains block diagrams and several simulation examples, including examples arising from bacterial locomotion, multi-agent robotic systems, and economic market models. Stochastic Averaging and Extremum Seeking will be informative for control engineers from backgrounds in electrical, mechanical, chemical and aerospace engineering and to applied mathematicians. Economics researchers, biologists, biophysicists and roboticists will find the applications examples instructive.
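The gradient-based scheme mentioned above can be sketched in its simplest deterministic form: a sinusoidal probe perturbs the parameter, and demodulating the measured output against the same sinusoid yields an average gradient-ascent motion. The map, gains and probe frequency below are hypothetical choices, not the book's stochastic algorithms themselves:

```python
import math

# Minimal deterministic sketch of perturbation-based (gradient) extremum
# seeking. The quadratic map and the gains a, k, omega are hypothetical.
def extremum_seek(f, theta0=0.0, a=0.5, k=0.5, omega=10.0, dt=0.01, steps=20000):
    """Climb the unknown map f by probing with a sinusoid and demodulating."""
    theta = theta0
    for n in range(steps):
        t = n * dt
        y = f(theta + a * math.sin(omega * t))      # perturbed measurement
        theta += dt * k * math.sin(omega * t) * y   # demodulate and integrate
    return theta

# Unknown map with its maximum at theta = 2; the estimate converges near it.
theta_hat = extremum_seek(lambda x: -(x - 2.0) ** 2)
```

Averaging theory is exactly what justifies this loop: over one probe period the update behaves like gradient ascent with gain k*a/2, which is why relaxing the assumptions of classical averaging theorems widens the class of systems such algorithms can treat.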
This book, which was first published in 1980, is concerned with one particular branch of growth theory, namely descriptive growth theory. It is typically assumed in growth theory that both the factor and goods markets are perfectly competitive. In particular this implies, amongst other things, that the reward to each factor is identical in each sector of the economy. In this book the assumption of identical factor rewards is relaxed and the implications of an intersectoral wage differential for economic growth are analysed. There is also some discussion of the short-run and long-run effects of minimum wage legislation on growth. This book will serve as key reading for students of economics.
In production and service sectors we often come across situations where females remain largely overshadowed by males in terms of both wages and productivity. Men are generally assigned jobs that require more physical work, while the 'less' strenuous jobs are allocated to females. However, the gender dimension of the labor process in the service sector in India has remained relatively unexplored. There are certain activities in the service sector where females are more suitable than males. Service sector activities are usually divided into Own Account Enterprises (OAE) and Establishments. In this work, an attempt has been made to separate the productivity of females from that of males on the basis of both partial and complete separability models. An estimate has also been made of the female labor supply function. The results present a downward trend for female participation in both OAEs and Establishments. The higher the female shadow wage, the lower their supply. This lends support to the supposition that female labor participation is a type of "distress supply" rather than a positive indicator of women's empowerment. Analysis of National Sample Survey Organisation data indicates that in all sectors women are generally paid less than men. A micro-econometric study reveals that even in firms that employ solely female labor, the incidence of full-time labor is deplorably poor. It is this feature that results in women workers' lower earnings and their deprivation.
This volume is centered around the issue of market design and the resulting market dynamics. The economic crisis of 2007-2009 has once again highlighted the importance of a proper design of market protocols and institutional details for economic dynamics and macroeconomics. Papers in this volume capture institutional details of particular markets, behavioral details of agents' decision making, as well as spillovers between markets and effects on the macroeconomy. Computational methods are used to replicate and understand market dynamics emerging from the interaction of heterogeneous agents, and to develop models that have predictive power for complex market dynamics. Finally, treatments of overlapping generations models and differential games with heterogeneous actors are provided.
This is an introduction to time series that emphasizes methods and analysis of data sets. The logic and tools of model-building for stationary and non-stationary time series are developed and numerous exercises, many of which make use of the included computer package, provide the reader with ample opportunity to develop skills. Statisticians and students will learn the latest methods in time series and forecasting, along with modern computational models and algorithms.
First published in 1996, Dynamic Disequilibrium Modeling presents some surveys and developments in dynamic disequilibrium and continuous time econometric modeling along with related research from associated fields. Specific areas covered include applications in business cycles and growth, tests for nonlinearity, rationing and disequilibrium dynamics, and demographic and international applications. The contents of this volume comprise the proceedings of the ninth conference in The International Symposia in Economic Theory and Econometrics series under the general editorship of William Barnett. The proceedings volume includes the most important papers presented at a conference held at the University of Munich on August 31-September 4, 1993.
This collection brings together important contributions by leading econometricians on (i) parametric approaches to qualitative and sample selection models, (ii) nonparametric and semi-parametric approaches to qualitative and sample selection models, and (iii) nonlinear estimation of cross-sectional and time series models. The advances achieved here can have important bearing on the choice of methods and analytical techniques in applied research.
Bringing together a collection of previously published work, this book provides a discussion of major considerations relating to the construction of econometric models that work well to explain economic phenomena, predict future outcomes and be useful for policy-making. Analytical relations between dynamic econometric structural models and empirical time series MVARMA, VAR, transfer function, and univariate ARIMA models are established with important application for model-checking and model construction. The theory and applications of these procedures to a variety of econometric modeling and forecasting problems as well as Bayesian and non-Bayesian testing, shrinkage estimation and forecasting procedures are also presented and applied. Finally, attention is focused on the effects of disaggregation on forecasting precision and the Marshallian Macroeconomic Model that features demand, supply and entry equations for major sectors of economies is analysed and described. This volume will prove invaluable to professionals, academics and students alike.
From Catastrophe to Chaos: A General Theory of Economic Discontinuities presents an unusual perspective on economics and economic analysis. Current economic theory largely depends upon assuming that the world is fundamentally continuous. However, an increasing amount of economic research has been done using approaches that allow for discontinuities, such as catastrophe theory, chaos theory, synergetics, and fractal geometry. The spread of such approaches across a variety of disciplines of thought has constituted a virtual intellectual revolution in recent years. This book reviews the applications of these approaches in various subdisciplines of economics and draws upon past economic thinkers to develop an integrated view of economics as a whole from the perspective of inherent discontinuity.
Data Envelopment Analysis (DEA) was initially developed as a method for assessing the comparative efficiencies of organisational units such as the branches of a bank, schools, hospital departments or restaurants. The key feature which makes the units comparable is that in each case they perform the same function in terms of the kinds of resources they use and the types of output they produce. For example, all bank branches to be compared would typically use staff and capital assets to effect income-generating activities such as advancing loans, selling financial products and carrying out banking transactions on behalf of their clients. The efficiencies assessed in this context by DEA are intended to reflect the scope for resource conservation at the unit being assessed without detriment to its outputs, or alternatively, the scope for output augmentation without additional resources. The efficiencies assessed are comparative or relative because they reflect scope for resource conservation or output augmentation at one unit relative to other comparable benchmark units, rather than in some absolute sense. We resort to relative rather than absolute efficiencies because in most practical contexts we lack sufficient information to derive the superior measures of absolute efficiency. DEA was initiated by Charnes, Cooper and Rhodes in their seminal paper, Charnes et al. (1978). The paper operationalised and extended, by means of linear programming, production economics concepts of empirical efficiency put forth some twenty years earlier by Farrell (1957).
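In the simplest special case of a single input and a single output under constant returns to scale, the relative efficiency just described reduces to each unit's output/input ratio divided by the best observed ratio; the branch data below are hypothetical, and the general multi-input/multi-output case requires the linear programming formulation of Charnes et al. (1978):

```python
# Single-input, single-output, constant-returns-to-scale sketch of relative
# efficiency: a unit's output/input ratio against the best observed ratio.
# Branch names and figures are hypothetical.
inputs = {"A": 10.0, "B": 8.0, "C": 12.0}    # e.g. staff employed
outputs = {"A": 20.0, "B": 20.0, "C": 18.0}  # e.g. loans advanced

ratios = {u: outputs[u] / inputs[u] for u in inputs}
best = max(ratios.values())
efficiency = {u: ratios[u] / best for u in ratios}  # 1.0 marks the benchmark
```

A score of 0.8 for branch A, say, means A could in principle conserve 20% of its input without reducing output, relative to the benchmark branch B.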
Studies in Global Econometrics is a collection of essays on the use of cross-country data based on purchasing power parities. The two major applications are the development over time of per capita gross domestic products (including that of their inequalities among countries and regions) and the fitting of cross-country demand equations for broad groups of consumer goods. The introductory chapter provides highlights of the author's work relating to these developments. One of the main topics of the work is a system of demand equations for broad groups of consumer goods fitted by means of cross-country data. These data are from the International Comparison Program, which provides PPP-based figures for a number of years and countries. Similar data are used for the measurement of the dispersion of national per capita incomes between and within seven geographic regions.
This book provides an overview of three generations of spatial econometric models: models based on cross-sectional data, static models based on spatial panels and dynamic spatial panel data models. The book not only presents different model specifications and their corresponding estimators, but also critically discusses the purposes for which these models can be used and how their results should be interpreted.
Each chapter of Macroeconometrics is written by respected econometricians in order to provide useful information and perspectives for those who wish to apply econometrics in macroeconomics. The chapters are all written with clear methodological perspectives, making the virtues and limitations of particular econometric approaches accessible to a general readership familiar with applied macroeconomics. The real tensions in macroeconometrics are revealed by the critical comments that follow each chapter, written by econometricians with an alternative perspective.
Over the last decade or so, applied general equilibrium models have rapidly become a major tool for policy advice on issues regarding allocation and efficiency, most notably taxes and tariffs. This reflects the power of the general equilibrium approach to allocative questions and the capability of today's applied models to come up with realistic answers. However, it by no means implies that the theoretical, practical and empirical problems faced by researchers in applied modelling have all been solved in a satisfactory way. Rather, a promising field of research has been opened up, inviting theorists and practitioners to further explore and exploit its potential. The state of the art in applied general equilibrium modelling is reflected in this volume. The introductory Chapter (Part I) evaluates the use of economic modelling to address policy questions, and discusses the advantages and disadvantages of applied general equilibrium models. Three substantive issues are dealt with in Chapters 2-8: Tax Reform and Capital (Part II), Intertemporal Aspects and Expectations (Part III), and Taxes and the Labour Market (Part IV). While all parts contain results relevant for economic policy, it is clear that theory and applications for these areas are in different stages of development. We hope that this book will bring inspiration, insight and information to researchers, students and policy advisors.
This handbook covers DEA topics that are extensively used and solidly based. The purpose of the handbook is to (1) describe and elucidate the state of the field and (2), where appropriate, extend the frontier of DEA research. It defines the state of the art of DEA methodology and its uses. This handbook is intended to represent a milestone in the progression of DEA. Written by experts who are generally major contributors to the topics covered, it includes a comprehensive review and discussion of basic DEA models, extensions to the basic DEA methods, and a collection of DEA applications in the areas of banking, engineering, health care, and services. The handbook's chapters are organized into two categories: (i) basic DEA models, concepts, and their extensions, and (ii) DEA applications. First edition contributors have returned to update their work, and the second edition includes updated versions of selected first edition chapters. New chapters have been added on: approaches that require no a priori choices of weights (called multipliers) reflecting meaningful trade-offs; construction of static and dynamic DEA technologies; the slacks-based model and its extensions; DEA models for DMUs that have internal structures (network DEA), which can be used for measuring supply chain operations; and a selection of DEA applications in the service sector, with a focus on building a conceptual framework, research design and interpreting results.
[Figures 1.1-1.4: a map of Great Britain at two scale levels, (a) counties and (b) regions; two alternative aggregations of the Italian provincie into 32 larger areas; the percentage of Communist Party votes in the 1987 Italian political elections and the percentage of population over 75 years in the 1981 Italian Census across 32 polling districts, with above-average districts shaded; first-order and second-order neighbours of a reference area.] While there are several other problems relating to the analysis of areal data, the problem of estimating a spatial correlogram merits special attention. The concept of the correlogram has been borrowed in the spatial literature from time series analysis. Figure 1.4a shows the first-order neighbours of a reference area, while Figure 1.4b displays the second-order neighbours of the same area. Higher-order neighbours can be defined in a similar fashion. While it is clear that the dependence is strongest between immediate neighbouring areas, a certain degree of dependence may be present among higher-order neighbours. This has been shown to be an alternative way of looking at the scale problem (Cliff and Ord, 1981, p. 123). However, unlike the case of a time series, where each observation depends only on past observations, here dependence extends in all directions.
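The neighbour orders just described can be computed as shortest-path distances on the areal adjacency graph: order-k neighbours are the areas reachable in k adjacency steps but no fewer. The four-area chain below is a hypothetical example:

```python
# Order-k neighbours of an area: areas whose shortest adjacency-path
# distance from it equals k. The four-area chain adjacency is hypothetical.
adjacency = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}

def neighbours(order, area):
    """Breadth-first expansion: the frontier after `order` steps,
    excluding areas already reached at a lower order."""
    frontier, seen = {area}, {area}
    for _ in range(order):
        frontier = {b for a in frontier for b in adjacency[a]} - seen
        seen |= frontier
    return frontier
```

For area 2 in this chain, the first-order neighbours are areas 1 and 3, and the only second-order neighbour is area 4; a spatial correlogram records how dependence decays as this order increases.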
Observers and Macroeconomic Systems is concerned with the computational aspects of using a control-theoretic approach to the analysis of dynamic macroeconomic systems. The focus is on using a separate model for the development of the control policies. In particular, it uses the observer-based approach whereby the separate model learns to behave in a similar manner to the economic system through output-injections. The book shows how this approach can be used to learn the forward-looking behaviour of economic actors which is a distinguishing feature of dynamic macroeconomic models. It also shows how it can be used in conjunction with low-order models to undertake policy analysis with a large practical econometric model. This overcomes some of the computational problems arising from using just the large econometric models to compute optimal policy trajectories. The work also develops visual simulation software tools that can be used for policy analysis with dynamic macroeconomic systems.
Modelling and Forecasting Financial Data brings together a coherent and accessible set of chapters on recent research results on this topic. To make such methods readily useful in practice, the contributors to this volume have agreed to make available to readers upon request all computer programs used to implement the methods discussed in their respective chapters. Modelling and Forecasting Financial Data is a valuable resource for researchers and graduate students studying complex systems in finance, biology, and physics, as well as those applying such methods to nonlinear time series analysis and signal processing.