Stochastic Averaging and Extremum Seeking treats methods inspired by attempts to understand the seemingly non-mathematical question of bacterial chemotaxis, and their application in other environments. The text presents significant generalizations of existing stochastic averaging theory, developed from scratch because algorithms that are otherwise effective in treating these systems violate the assumptions of earlier theory. Coverage is given to four main topics. Stochastic averaging theorems are developed for the analysis of continuous-time nonlinear systems with random forcing, removing prior restrictions on nonlinearity growth and on the finiteness of the time interval. The new stochastic averaging theorems are usable not only as approximation tools but also for providing stability guarantees. Stochastic extremum-seeking algorithms are introduced for the optimization of systems without available models. Both gradient- and Newton-based algorithms are presented, offering the user a choice between simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms for non-cooperative/adversarial games is described, and an analysis of their convergence to Nash equilibria is provided. The algorithms are illustrated on models of economic competition and on problems of deploying teams of robotic vehicles. Bacterial locomotion, such as chemotaxis in E. coli, is explored with the aim of identifying two simple feedback laws for climbing nutrient gradients. Stochastic extremum seeking is shown to be a biologically plausible interpretation of chemotaxis. For the same chemotaxis-inspired stochastic feedback laws, the book also provides a detailed analysis of convergence for models of nonholonomic robotic vehicles operating in GPS-denied environments. The book contains block diagrams and several simulation examples, including examples arising from bacterial locomotion, multi-agent robotic systems, and economic market models. Stochastic Averaging and Extremum Seeking will be informative for control engineers with backgrounds in electrical, mechanical, chemical, and aerospace engineering, and for applied mathematicians. Economics researchers, biologists, biophysicists, and roboticists will find the application examples instructive.
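The gradient-based scheme lends itself to a compact illustration. Below is a minimal discrete-time sketch of gradient-based stochastic extremum seeking on an unknown static map; the quadratic map, the +/-1 dither, and all gains are illustrative assumptions, not the book's actual designs.

```python
import numpy as np

rng = np.random.default_rng(0)

def J(theta):                        # unknown map; maximum at theta = 2.0
    return -(theta - 2.0) ** 2

theta_hat = 0.0                      # current estimate of the optimizer
a, gain, step = 0.1, 0.5, 0.01       # dither amplitude, adaptation gain, step
y_avg = J(theta_hat)                 # crude washout-filter state

for _ in range(5000):
    eta = rng.choice([-1.0, 1.0])    # bounded random dither
    y = J(theta_hat + a * eta)       # probe the map at a perturbed point
    y_avg += 0.05 * (y - y_avg)      # low-pass; removes the DC component of y
    grad_est = (y - y_avg) * eta / a # demodulate: correlate output with dither
    theta_hat += gain * step * grad_est  # stochastic gradient ascent

print(theta_hat)                     # hovers near the optimizer 2.0
```

Correlating the measured output with the dither yields an unbiased first-order estimate of the gradient, which is why the scheme needs no model of J.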
The volatility of financial returns changes over time and, for the last thirty years, Generalized Autoregressive Conditional Heteroscedasticity (GARCH) models have provided the principal means of analyzing, modeling, and monitoring such changes. Taking into account that financial returns typically exhibit heavy tails (that is, extreme values can occur from time to time), Andrew Harvey's new book shows how a small but radical change in the way GARCH models are formulated leads to a resolution of many of the theoretical problems inherent in the statistical theory. The approach can also be applied to other aspects of volatility, such as those arising from data on the range of returns and the time between trades. Furthermore, the more general class of Dynamic Conditional Score models extends to robust modeling of outliers in the levels of time series and to the treatment of time-varying relationships. As such, there are applications not only to financial data but also to macroeconomic time series and to time series in other disciplines. The statistical theory draws on basic principles of maximum likelihood estimation and, by doing so, leads to an elegant and unified treatment of nonlinear time-series modeling. The practical value of the proposed models is illustrated by fitting them to real data sets.
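For orientation, the GARCH(1,1) variance recursion that this line of work reformulates can be stated in a few lines. The sketch below filters a simulated heavy-tailed return series; the parameter values are illustrative, not estimates from the book.

```python
import numpy as np

# sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]
omega, alpha, beta = 0.05, 0.08, 0.90         # illustrative parameters

rng = np.random.default_rng(1)
r = rng.standard_t(df=5, size=1000)           # heavy-tailed "returns"

sigma2 = np.empty_like(r)
sigma2[0] = omega / (1.0 - alpha - beta)      # unconditional variance
for t in range(1, len(r)):
    sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]

print(np.sqrt(sigma2[-5:]))                   # filtered volatility path
```

Harvey's score-driven models replace the squared-return term with the score of a heavy-tailed density, which is what makes the filter robust to extreme observations.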
Markov networks and other probabilistic graphical models have recently received an upsurge in attention from the evolutionary computation community, particularly in the area of estimation of distribution algorithms (EDAs). EDAs have emerged as one of the most successful applications of machine learning methods in optimization, mainly due to their efficiency in solving complex real-world optimization problems and their suitability for theoretical analysis. This book focuses on the different steps involved in the conception, implementation and application of EDAs that use Markov networks, and undirected models in general. It can serve as a general introduction to EDAs, but it also fills an important current void in the study of these algorithms by explaining the specificities and benefits of modeling optimization problems by means of undirected probabilistic models. All major developments to date in the progressive introduction of Markov network-based EDAs are reviewed in the book. Current research trends and future perspectives in the enhancement and applicability of EDAs are also covered. The contributions included in the book address such relevant topics as the application of probabilistic-based fitness models, the use of belief propagation algorithms in EDAs and the application of Markov network-based EDAs to real-world optimization problems. The book should be of interest to researchers and practitioners from areas such as optimization, evolutionary computation, and machine learning.
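To make the EDA loop concrete, here is a minimal sketch of the simplest member of the family, a univariate EDA (UMDA) on the OneMax toy problem; the Markov-network EDAs the book studies replace this independence model with an undirected graphical model over the variables.

```python
import numpy as np

rng = np.random.default_rng(2)
n_bits, pop_size, n_select, n_gens = 40, 100, 30, 50

p = np.full(n_bits, 0.5)                      # independent Bernoulli model
for _ in range(n_gens):
    pop = (rng.random((pop_size, n_bits)) < p).astype(int)  # sample model
    fitness = pop.sum(axis=1)                 # OneMax: count of ones
    elite = pop[np.argsort(fitness)[-n_select:]]
    p = elite.mean(axis=0)                    # re-estimate model from elite
    p = p.clip(0.05, 0.95)                    # keep some exploration

print(p.round(2))                             # probabilities drift toward 1
```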
The Analytic Hierarchy Process (AHP) is a prominent and powerful tool for making decisions in situations involving multiple objectives. Models, Methods, Concepts and Applications of the Analytic Hierarchy Process, 2nd Edition applies the AHP to solve problems focused on the following three themes: economics, the social sciences, and the linking of measurement with human values. For economists, the AHP offers a substantially different approach to dealing with economic problems through ratio scales. Psychologists and political scientists can use the methodology to quantify and derive measurements for intangibles. Meanwhile, researchers in the physical and engineering sciences can apply AHP methods to help resolve conflicts between hard measurement data and human values. Throughout the book, each of these topics is explored using real-life models and examples relevant to problems in today's society. This new edition has been updated and includes five new chapters that include discussions of the following:
- The eigenvector and why it is necessary
- A summary of ongoing research in the Middle East that brings together Israeli and Palestinian scholars to develop concessions from both parties
- A look at the Medicare crisis and how the AHP can be used to understand the problems and help develop ideas to solve them
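The role of the eigenvector can be shown compactly: in the AHP, priority weights are obtained as the principal eigenvector of a reciprocal pairwise-comparison matrix. The 3x3 matrix below is a hypothetical example.

```python
import numpy as np

A = np.array([
    [1.0, 3.0, 5.0],       # reciprocal pairwise-comparison judgments
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                     # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                    # normalized priority weights

ci = (eigvals.real[k] - len(A)) / (len(A) - 1)  # consistency index
print(w.round(3), round(ci, 4))
```

A small consistency index indicates the judgments are nearly transitive; inconsistent matrices are flagged for revision before the weights are used.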
Quantile regression constitutes an ensemble of statistical techniques intended to estimate and draw inferences about conditional quantile functions. Median regression, as introduced in the 18th century by Boscovich and Laplace, is a special case. In contrast to conventional mean regression, which minimizes sums of squared residuals, median regression minimizes sums of absolute residuals; quantile regression simply replaces symmetric absolute loss by asymmetric linear loss. Since its introduction in the 1970s by Koenker and Bassett, quantile regression has been gradually extended to a wide variety of data analytic settings including time series, survival analysis, and longitudinal data. By focusing attention on local slices of the conditional distribution of response variables, it is capable of providing a more complete, more nuanced view of heterogeneous covariate effects. Applications of quantile regression can now be found throughout the sciences, including astrophysics, chemistry, ecology, economics, finance, genomics, medicine, and meteorology. Software for quantile regression is now widely available in all the major statistical computing environments. The objective of this volume is to provide a comprehensive review of recent developments of quantile regression methodology illustrating its applicability in a wide range of scientific settings. The intended audience of the volume is researchers and graduate students across a diverse set of disciplines.
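The asymmetric linear loss is easy to state and check numerically: minimizing the "check" loss rho_tau(u) = u * (tau - 1[u < 0]) over a constant recovers the tau-th sample quantile, with tau = 0.5 giving the median-regression special case. A minimal sketch on simulated data:

```python
import numpy as np

def check_loss(u, tau):
    # asymmetric linear loss: slope tau above zero, tau - 1 below zero
    return u * (tau - (u < 0))

rng = np.random.default_rng(3)
y = rng.exponential(size=2_000)

grid = np.linspace(0.0, 5.0, 1001)
for tau in (0.25, 0.5, 0.9):
    losses = [check_loss(y - c, tau).sum() for c in grid]
    c_star = grid[np.argmin(losses)]          # minimizer over constants
    print(tau, round(c_star, 3), round(np.quantile(y, tau), 3))  # they agree
```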
Figure 1.1: Map of Great Britain at two different scale levels: (a) counties, (b) regions. Figure 1.2: Two alternative aggregations of the Italian provincie into 32 larger areas. Figure 1.3: Percentage of votes for the Communist Party in the 1987 Italian political elections (a) and percentage of population over 75 years (b) in the 1981 Italian Census, in 32 polling districts; the polling districts with values above the average are shaded. Figure 1.4: First-order neighbours (a) and second-order neighbours (b) of a reference area.
While there are several other problems relating to the analysis of areal data, the problem of estimating a spatial correlogram merits special attention. The concept of the correlogram has been borrowed in the spatial literature from time series analysis. Figure 1.4a shows the first-order neighbours of a reference area, while Figure 1.4b displays the second-order neighbours of the same area. Higher-order neighbours can be defined in a similar fashion. While it is clear that the dependence is strongest between immediately neighbouring areas, a certain degree of dependence may be present among higher-order neighbours. This has been shown to be an alternative way of looking at the scale problem (Cliff and Ord, 1981, p. 123). However, unlike the case of a time series, where each observation depends only on past observations, here dependence extends in all directions.
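The notion of neighbour order can be made concrete with a small sketch: given a binary contiguity (adjacency) matrix, second-order neighbours are the areas reachable in exactly two steps. The four-area chain lattice below is hypothetical.

```python
import numpy as np

W = np.array([                 # hypothetical contiguity matrix (chain of 4 areas)
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=bool)

first_order = W
# Areas reachable in two steps, excluding the area itself and its first-order pairs.
two_step = (W.astype(int) @ W.astype(int)) > 0
second_order = two_step & ~first_order & ~np.eye(len(W), dtype=bool)

print("first-order neighbours of area 0:", np.flatnonzero(first_order[0]))
print("second-order neighbours of area 0:", np.flatnonzero(second_order[0]))
```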
Over the last thirty years there has been extensive use of continuous time econometric methods in macroeconomic modelling. This monograph presents a continuous time macroeconometric model of the United Kingdom incorporating stochastic trends. Its development represents a major step forward in continuous time macroeconomic modelling. The book describes the model in detail and, like earlier models, it is designed in such a way as to permit a rigorous mathematical analysis of its steady-state and stability properties, thus providing a valuable check on the capacity of the model to generate plausible long-run behaviour. The model is estimated using newly developed exact Gaussian estimation methods for continuous time econometric models incorporating unobservable stochastic trends. The book also includes discussion of the application of the model to dynamic analysis and forecasting.
Discover the Benefits of Risk Parity Investing. Despite recent progress in the theoretical analysis and practical applications of risk parity, many important fundamental questions still need to be answered. Risk Parity Fundamentals uses fundamental, quantitative, and historical analysis to address these issues, such as: What are the macroeconomic dimensions of risk in risk parity portfolios? What are the appropriate risk premiums in a risk parity portfolio? In what market environments might risk parity thrive or struggle? What is the role of leverage in a risk parity portfolio? An experienced researcher and portfolio manager who coined the term "risk parity," the author provides investors with a practical understanding of the risk parity investment approach. Investors will gain insight into the merit of risk parity as well as the practical and underlying aspects of risk parity investing.
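Although the book's analysis goes well beyond it, the simplest, naive form of risk parity, with weights inversely proportional to asset volatilities, gives a feel for the approach. The volatilities below are hypothetical, and correlations are ignored.

```python
import numpy as np

vol = np.array([0.15, 0.05, 0.10])    # hypothetical asset volatilities
w = (1.0 / vol) / (1.0 / vol).sum()   # inverse-volatility weights

# Check risk contributions under a diagonal (zero-correlation) covariance.
cov = np.diag(vol ** 2)
port_vol = np.sqrt(w @ cov @ w)
contrib = w * (cov @ w) / port_vol    # each asset's contribution to risk
print(w.round(3), (contrib / port_vol).round(3))  # equal risk fractions
```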
The Oxford Handbook of Computational Economics and Finance provides a survey of both the foundations of and recent advances in the frontiers of analysis and action. It is both historically and interdisciplinarily rich and also tightly connected to the rise of digital society. It begins with the conventional view of computational economics, including recent algorithmic development in computing rational expectations, volatility, and general equilibrium. It then moves from traditional computing in economics and finance to recent developments in natural computing, including applications of nature-inspired intelligence, genetic programming, swarm intelligence, and fuzzy logic. Also examined are recent developments of network and agent-based computing in economics. How these approaches are applied is examined in chapters on such subjects as trading robots and automated markets. The last part deals with the epistemology of simulation in its trinity form with the integration of simulation, computation, and dynamics. Distinctive is the focus on natural computationalism and the examination of the implications of intelligent machines for the future of computational economics and finance. Not merely individual robots, but whole integrated systems are extending their "immigration" to the world of Homo sapiens, or symbiogenesis.
Mechanism design is the field of economics that treats institutions and procedures as variables that can be selected in order to achieve desired objectives. An important aspect of a mechanism is the communication among its participants that it requires, which complements other design features such as incentives and complexity. A calculus-based theory of communication in mechanisms is developed in this book. The value of a calculus-based approach lies in its familiarity as well as the insight into mechanisms that it provides. Results are developed concerning (i) a first order approach to the construction of mechanisms, (ii) the range of mechanisms that can be used to achieve a given objective, as well as (iii) lower bounds on the required communication.
This 2004 volume offers a broad overview of developments in the theory and applications of state space modeling. With fourteen chapters from twenty-three contributors, it offers a unique synthesis of state space methods and unobserved component models that are important in a wide range of subjects, including economics, finance, environmental science, medicine and engineering. The book is divided into four sections: introductory papers, testing, Bayesian inference and the bootstrap, and applications. It will give those unfamiliar with state space models a flavour of the work being carried out as well as providing experts with valuable state-of-the-art summaries of different topics. Offering a useful reference for all, this accessible volume makes a significant contribution to the literature of this discipline.
Taxpayer compliance is a voluntary activity, and the degree to which the tax system works is affected by taxpayers' knowledge that it is their moral and legal responsibility to pay their taxes. Taxpayers also recognize that they face a lottery in which not all taxpayer noncompliance will be detected. In the United States most individuals comply with the tax law, yet the tax gap has grown significantly over time for individual taxpayers. The US Internal Revenue Service attempts to ensure that the minority of taxpayers who are noncompliant pay their fair share with a variety of enforcement tools and penalties. The Causes and Consequences of Income Tax Noncompliance provides a comprehensive summary of the empirical evidence concerning taxpayer noncompliance and presents innovative research with new results on the role of IRS audit and enforcement activities in compliance with federal and state income tax collection. Other issues examined include the degree to which taxpayers respond to the threat of civil and criminal enforcement and the important role of the media in taxpayer compliance. This book offers researchers, students, and tax administrators insight into the allocation of taxpayer compliance enforcement and service resources, and suggests policies that will prevent further increases in the tax gap. The book's aggregate data analysis methods have practical applications not only to taxpayer compliance but also to other forms of economic behavior, such as welfare fraud.
Here is an in-depth guide to the most powerful available benchmarking technique for improving service organization performance: Data Envelopment Analysis (DEA). The book outlines DEA as a benchmarking technique, identifies high-cost service units, isolates specific changes that elevate performance to the best-practice level of high-quality service at low cost, and, most important, guides the improvement process.
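A minimal sketch of the input-oriented, constant-returns (CCR) DEA model shows the underlying linear program: for each unit, minimize theta subject to X @ lam <= theta * x_k and Y @ lam >= y_k with lam >= 0. The three-unit, single-input, single-output data set below is hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0, 3.0]])        # inputs:  rows = inputs, cols = units
Y = np.array([[1.0, 2.0, 2.5]])        # outputs: rows = outputs, cols = units

def ccr_efficiency(k):
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                  # decision vars: [theta, lam]
    A_ub = np.r_[np.c_[-X[:, k], X],             # X @ lam - theta * x_k <= 0
                 np.c_[np.zeros(s), -Y]]         # -Y @ lam <= -y_k
    b_ub = np.r_[np.zeros(m), -Y[:, k]]
    bounds = [(None, None)] + [(0.0, None)] * n  # theta free, lam >= 0
    return linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).fun

print([round(ccr_efficiency(k), 3) for k in range(X.shape[1])])  # unit 2 scores 1.0
```

Units scoring 1.0 lie on the best-practice frontier; scores below 1.0 indicate the proportional input reduction needed to reach it.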
This volume uses state-of-the-art models from the frontier of macroeconomics to answer key questions about how the economy functions and how policy should be conducted. The contributions cover a wide range of issues in macroeconomics and macroeconomic policy. They combine high-level mathematics with economic analysis, and highlight the need to update our mathematical toolbox in order to understand the increased complexity of the macroeconomic environment. The volume represents hard evidence of high research intensity in many fields of macroeconomics, and warns against interpreting the scope of macroeconomics too narrowly. The mainstream business cycle analysis, based on dynamic stochastic general equilibrium (DSGE) modelling of a particular type, has been criticised for its inability to predict or resolve the recent financial crisis. However, macroeconomic research on financial, information, and learning imperfections had not yet made its way into many of the pre-crisis DSGE models because practical econometric versions of those models were mainly designed to fit data periods that did not include financial crises. A major response to the limitations of those older DSGE models is an active research program to bring big financial shocks and various kinds of financial, learning, and labour market frictions into a new generation of DSGE models for guiding policy. The contributors to this book utilise models and modelling assumptions that go beyond particular modelling conventions. By using alternative yet plausible assumptions, they seek to enrich our knowledge and ability to explain macroeconomic phenomena. They contribute to expanding the frontier of macroeconomic knowledge in ways that will prove useful for macroeconomic policy.
Price and quantity indices are important, much-used measuring instruments, and it is therefore necessary to have a good understanding of their properties. When it was published, this book was the first comprehensive text on index number theory since Irving Fisher's 1922 The Making of Index Numbers. The book covers intertemporal and interspatial comparisons; ratio- and difference-type measures; discrete and continuous time environments; and upper- and lower-level indices. Guided by economic insights, this book develops the instrumental or axiomatic approach. There is no role for behavioural assumptions. In addition to subject matter chapters, two entire chapters are devoted to the rich history of the subject.
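As a small illustration of the formulas such an axiomatic approach evaluates, here are the classic Laspeyres, Paasche, and Fisher price indices for two periods, computed on hypothetical data.

```python
import numpy as np

# Prices p and quantities q in base period 0 and comparison period 1.
p0, q0 = np.array([1.0, 2.0]), np.array([10.0, 5.0])
p1, q1 = np.array([1.2, 2.5]), np.array([9.0, 6.0])

laspeyres = (p1 @ q0) / (p0 @ q0)      # base-period quantity weights
paasche = (p1 @ q1) / (p0 @ q1)        # comparison-period quantity weights
fisher = np.sqrt(laspeyres * paasche)  # geometric mean of the two
print(round(laspeyres, 4), round(paasche, 4), round(fisher, 4))
```

The Fisher index is the best-known formula satisfying the time-reversal test, one of the axioms an instrumental approach uses to discriminate among candidate indices.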
The book's comprehensive coverage of the application of econometric methods to empirical analysis of economic issues is impressive. It uncovers the missing link between textbooks on economic theory and econometrics, and highlights the powerful connection between economic theory and empirical analysis through examples of rigorous experimental design. The use of data sets for estimation derived with the Monte Carlo method helps facilitate the understanding of the role of hypothesis testing applied to economic models. Topics covered in the book are: consumer behavior, producer behavior, market equilibrium, macroeconomic models, qualitative-response models, panel data analysis and time-series analysis. Key econometric models are introduced, specified, estimated and evaluated. The treatment of methods of estimation in econometrics and the discipline of hypothesis testing makes it a must-have for graduate students of economics and econometrics and aids their understanding of how to estimate economic models and evaluate the results in terms of policy implications.
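The Monte Carlo idea can be sketched in a few lines: simulate data under a known parameter, re-estimate it by OLS, and check that a test of the true null rejects at roughly its nominal rate. All numbers below are illustrative choices, not the book's data sets.

```python
import numpy as np

rng = np.random.default_rng(5)
n, reps, beta = 100, 2000, 1.5
rejections = 0
for _ in range(reps):
    x = rng.standard_normal(n)
    y = 2.0 + beta * x + rng.standard_normal(n)   # true model
    X = np.c_[np.ones(n), x]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS estimates
    resid = y - X @ coef
    s2 = resid @ resid / (n - 2)                  # residual variance
    se_b = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    t = (coef[1] - beta) / se_b                   # test the true H0: slope = beta
    rejections += abs(t) > 1.984                  # ~5% critical value, t(98)

print(rejections / reps)                          # close to the nominal 0.05
```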
In economics, many quantities are related to each other. Such economic relations are often much more complex than relations in science and engineering, where some quantities are independent and the relations between others can be well approximated by linear functions. To make economic models more adequate, we need more accurate techniques for describing dependence. Such techniques are currently being developed. This book contains descriptions of state-of-the-art techniques for modeling dependence and economic applications of these techniques.
The present book is the offspring of my Habilitation, which is the key to academic tenure in Austria. Legal requirements demand that a Habilitation be published, and so only seeing it in print marks the real end of this biographical landmark project. From a scientific perspective I may hope to finally reach a broader audience with this book for a critical appraisal of the research done. Aside from objectives, the book is a reflection of many years of research preceding the Habilitation proper in the field of efficiency measurement. Regarding the subject matter, the main intention was to fill an important remaining gap in the efficiency analysis literature. Hitherto no technique was available to estimate output-specific efficiencies in a statistically convincing way. This book closes this gap, although some desirable improvements and generalizations of the proposed estimation technique may yet be required before it eventually establishes itself as a standard tool for efficiency analysis. The likely audience for this book includes professional researchers who want to enrich their tool set for applied efficiency analysis, as well as students of economics, management science or operations research intending to learn more about the potential of rigorously understood efficiency analysis. But managers or public officials potentially ordering efficiency studies should also benefit from the book by learning about the extended capabilities of efficiency analysis. Just reading the introduction may change their perception of value for money when it comes to comparative performance measurement.
Game theory is concerned with strategic interaction among several decision-makers. In such strategic encounters, all players are aware of the fact that their actions affect the other players. Game theory analyzes how these strategic, interactive considerations may affect the players' decisions and influence the final outcome. This textbook focuses on applications of complete-information games in economics and management, as well as in other fields such as political science, law and biology. It guides students through the fundamentals of game theory by letting examples lead the way to the concepts needed to solve them. It provides opportunities for self-study and self-testing through an extensive pedagogical apparatus of examples, questions and answers. The book also includes more advanced material suitable as a basis for seminar papers or elective topics, including rationalizability, stability of equilibria (with discrete-time dynamics), games and evolution, equilibrium selection and global games.
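A minimal example of the solution concepts such a course builds toward: enumerating the pure-strategy Nash equilibria of a two-player complete-information game by checking mutual best responses. The payoff matrices below encode a hypothetical prisoner's-dilemma-style game.

```python
import numpy as np

A = np.array([[3, 0],      # row player's payoffs (cooperate, defect)
              [5, 1]])
B = np.array([[3, 5],      # column player's payoffs
              [0, 1]])

# A profile (i, j) is a pure Nash equilibrium when each strategy is a
# best response to the other: i maximizes column j of A, j maximizes row i of B.
equilibria = [
    (i, j)
    for i in range(A.shape[0])
    for j in range(A.shape[1])
    if A[i, j] == A[:, j].max() and B[i, j] == B[i, :].max()
]
print(equilibria)          # [(1, 1)]: mutual defection
```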
J. S. FLEMMING The Bank of England's role as a leading central bank involves both formal and informal aspects. At a formal level it is an adviser to HM Government, whilst at an informal level it is consulted by domestic and overseas institutions for advice on many areas of economic interest. Such advice must be grounded in an understanding of the workings of the domestic and international economy, a task which becomes ever more difficult with the pace of change both in the economy and in the techniques which are used by professional economists to analyse such changes. The Bank's economists are encouraged to publish their research whenever circumstances permit, whether in refereed journals or in other ways. In particular, we make it a rule that the research underlying the Bank's macroeconometric model, to which outside researchers have access through the ESRC (Economic and Social Research Council) macromodelling bureau, should be adequately explained and documented in published form. This volume expands the commitment to make research which is undertaken within the Economics Division of the Bank of England widely available. Included here are chapters which illustrate the breadth of interests which the Bank seeks to cover. Some of the research is, as would be expected, directly related to the specification of the Bank's model, but other aspects are also well represented.
This book investigates the existence of stochastic and deterministic convergence of real output per worker and the sources of output (physical capital per worker, human capital per worker, total factor productivity -TFP- and average annual hours worked) in 21 OECD countries over the period 1970-2011. Towards this end, the authors apply a large battery of panel unit root and stationarity tests, all of which are robust to the presence of cross-sectional dependence. The evidence fails to provide clear-cut evidence of convergence dynamics either in real GDP per worker or in the series of the sources of output. Due to some limitations associated with second-generation panel unit root and stationarity tests, the authors further use the more flexible PANIC approach which provides evidence that real GDP per worker, real physical capital per worker, human capital and average annual hours exhibit some degree of deterministic convergence, whereas TFP series display a high degree of stochastic convergence.
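The idea behind a stochastic convergence test can be sketched simply: the output gap between two economies is tested for a unit root, with stationarity indicating convergence. The mean-reverting gap simulated below is hypothetical, and the book's panel tests additionally account for cross-sectional dependence.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(4)
T = 200
gap = np.empty(T)
gap[0] = 1.0
for t in range(1, T):
    # mean-reverting (stationary) log output gap between two economies
    gap[t] = 0.8 * gap[t - 1] + 0.05 * rng.standard_normal()

stat, pvalue, *_ = adfuller(gap)          # augmented Dickey-Fuller test
print(round(stat, 3), round(pvalue, 4))   # a small p-value rejects a unit root
```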
This book investigates whether the effects of economic integration differ according to the size of countries. The analysis incorporates a classification of the size of countries, reflecting the key economic characteristics of economies in order to provide an appropriate benchmark for each size group in the empirical analysis of the effects of asymmetric economic integration. The formation or extension of Preferential Trade Areas (PTAs) leads to a reduction in trade costs. This poses a critical secondary question as to the extent to which trade costs differ according to the size of countries. The extent to which membership of PTAs has an asymmetric impact on trade flows according to the size of member countries is analyzed by employing econometric tools and general equilibrium analysis, estimating both the ex-post and ex-ante effects of economic integration on the size of countries, using a data set of 218 countries, 45 of which are European.
Two central problems in the pure theory of economic growth are analysed in this monograph: 1) the dynamic laws governing the economic growth processes, 2) the kinematic and geometric properties of the set of solutions to the dynamic systems. With allegiance to rigor and the emphasis on the theoretical fundamentals of prototype mathematical growth models, the treatise is written in the theorem-proof style. To keep the exposition orderly and as smooth as possible, the economic analysis has been separated from the purely mathematical issues, and hence the monograph is organized in two books. Regarding the scope and content of the two books, an "Introduction and Overview" has been prepared to offer both motivation and a brief account. The introduction is especially designed to give a recapitulation of the mathematical theory and results presented in Book II, which are used as the unifying mathematical framework in the analysis and exposition of the different economic growth models in Book I. Economists would probably prefer to go directly to Book I and proceed by consulting the mathematical theorems of Book II in confirming the economic theorems in Book I. Thereby, both the independence and interdependence of the economic and mathematical argumentations are respected.
Over the last decade or so, applied general equilibrium models have rapidly become a major tool for policy advice on issues regarding allocation and efficiency, most notably taxes and tariffs. This reflects the power of the general equilibrium approach to allocative questions and the capability of today's applied models to come up with realistic answers. However, it by no means implies that the theoretical, practical and empirical problems faced by researchers in applied modelling have all been solved in a satisfactory way. Rather, a promising field of research has been opened up, inviting theorists and practitioners to further explore and exploit its potential. The state of the art in applied general equilibrium modelling is reflected in this volume. The introductory chapter (Part I) evaluates the use of economic modelling to address policy questions, and discusses the advantages and disadvantages of applied general equilibrium models. Three substantive issues are dealt with in Chapters 2-8: Tax Reform and Capital (Part II), Intertemporal Aspects and Expectations (Part III), and Taxes and the Labour Market (Part IV). While all parts contain results relevant for economic policy, it is clear that theory and applications for these areas are in different stages of development. We hope that this book will bring inspiration, insight and information to researchers, students and policy advisors.