The 'Advances in Econometrics' series aims to publish annual original scholarly econometrics papers on designated topics with the intention of expanding the use of developed and emerging econometric techniques by disseminating ideas on the theory and practice of econometrics throughout the empirical economic, business and social science literature.
First published in 1970. Econometric model-building has been largely confined to the advanced industrialised countries. In the few cases where macro-models have been built for underdeveloped countries (e.g. the Narasimham model (112) for India), the underlying assumptions have been largely of the Keynesian type, and thus, in the author's opinion, unconnected with the theory of economic development. This study is a modest attempt at econometric model-building on the basis of a model of development of an underdeveloped country.
'Big data' is now readily available to economic historians, thanks to the digitisation of primary sources, collaborative research linking different data sets, and the publication of databases on the internet. Key economic indicators, such as the consumer price index, can be tracked over long periods, and qualitative information, such as land use, can be converted to a quantitative form. In order to fully exploit these innovations it is necessary to use sophisticated statistical techniques to reveal the patterns hidden in datasets, and this book shows how this can be done. A distinguished group of economic historians have teamed up with younger researchers to pilot the application of new techniques to 'big data'. Topics addressed in this volume include prices and the standard of living, money supply, credit markets, land values and land use, transport, technological innovation, and business networks. The research spans the medieval, early modern and modern periods. Research methods include simultaneous equation systems, stochastic trends and discrete choice modelling. This book is essential reading for doctoral and post-doctoral researchers in business, economic and social history. The case studies will also appeal to historical geographers and applied econometricians.
This book shows how our lives are shaped not only by the choices we make, but by the choices we have. From dating, school and university applications to the job market, understand the most important decisions you'll ever make with insights from a Nobel Prize-winner. Who Gets What and Why is a piquantly written, mind-expanding exploration of the markets that matter most to many of us. If you've ever sought a job or hired someone, applied to university or guided your child into a good school, asked someone out on a date or been asked out, you have participated in a matching market. They are everywhere around us and account for some of the biggest technological successes of the decade, like Uber and Airbnb. Matching markets can even be the gatekeeper of life itself, guiding how desperately ill patients receive scarce organs for transplants. Alvin E. Roth shared the 2012 Nobel Prize in economics for his pioneering research into market design - the principles that govern all kinds of markets where money isn't the only factor in determining who gets what. His book reveals what factors make these markets work well - or badly - and shows us all how to recognise a good match and make smarter, more confident decisions.
This title provides a comprehensive, critical coverage of the progress and development of mathematical modelling within urban and regional economics over four decades.
Portfolio theory and much of asset pricing, as well as many empirical applications, depend on the use of multivariate probability distributions to describe asset returns. Traditionally, this has meant the multivariate normal (or Gaussian) distribution. More recently, theoretical and empirical work in financial economics has employed the multivariate Student (and other) distributions which are members of the elliptically symmetric class. There is also a growing body of work which is based on skew-elliptical distributions. These probability models all exhibit the property that the marginal distributions differ only by location and scale parameters or are restrictive in other respects. Very often, such models are not supported by the empirical evidence that the marginal distributions of asset returns can differ markedly. Copula theory is a branch of statistics which provides powerful methods to overcome these shortcomings. This book provides a synthesis of the latest research in the area of copulae as applied to finance and related subjects such as insurance. Multivariate non-Gaussian dependence is a fact of life for many problems in financial econometrics. This book describes the state of the art in tools required to deal with these observed features of financial data. This book was originally published as a special issue of the European Journal of Finance.
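By way of illustration only, and not as material from the book, the following minimal Python sketch shows the basic copula construction described above: a Gaussian copula supplies the dependence structure while heavy-tailed Student-t marginals are chosen freely. The correlation of 0.6 and the 4 degrees of freedom are arbitrary assumptions for the example.

```python
# Illustrative sketch only: a Gaussian copula joining two Student-t marginals.
# The correlation (0.6) and degrees of freedom (4) are arbitrary example values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rho = 0.6                                     # assumed dependence parameter
cov = np.array([[1.0, rho], [rho, 1.0]])

# Step 1: draw from a bivariate normal with the chosen correlation.
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=10_000)

# Step 2: map each margin to uniforms via the standard normal CDF
# (this is the Gaussian copula).
u = stats.norm.cdf(z)

# Step 3: apply the inverse CDF of the desired marginals, here heavy-tailed
# Student-t "returns" with 4 degrees of freedom.
returns = stats.t.ppf(u, df=4)

# The joint dependence comes from the copula, while each margin is free to
# differ from the Gaussian; compare linear and rank correlation.
print("linear correlation:", np.corrcoef(returns.T)[0, 1])
rho_s, _ = stats.spearmanr(returns[:, 0], returns[:, 1])
print("rank correlation:  ", rho_s)
```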
In nonparametric and high-dimensional statistical models, the classical Gauss-Fisher-Le Cam theory of the optimality of maximum likelihood estimators and Bayesian posterior inference does not apply, and new foundations and ideas have been developed in the past several decades. This book gives a coherent account of the statistical theory in infinite-dimensional parameter spaces. The mathematical foundations include self-contained 'mini-courses' on the theory of Gaussian and empirical processes, approximation and wavelet theory, and the basic theory of function spaces. The theory of statistical inference in such models - hypothesis testing, estimation and confidence sets - is presented within the minimax paradigm of decision theory. This includes the basic theory of convolution kernel and projection estimation, but also Bayesian nonparametrics and nonparametric maximum likelihood estimation. In a final chapter the theory of adaptive inference in nonparametric models is developed, including Lepski's method, wavelet thresholding, and adaptive inference for self-similar functions. Winner of the 2017 PROSE Award for Mathematics.
In the twentieth century, Americans thought of the United States as a land of opportunity and equality. To what extent and for whom this was true was, of course, a matter of debate; nevertheless, especially during the Cold War, many Americans clung to the patriotic conviction that America was the land of the free. At the same time, another national ideal emerged that was far less contentious, that arguably came to subsume the ideals of freedom, opportunity, and equality, and that eventually embodied an unspoken consensus about what constitutes the good society in a postmodern setting. This was the ideal of choice, broadly understood as the proposition that the good society provides individuals with the power to shape the contours of their lives in ways that suit their personal interests, idiosyncrasies, and tastes. By the closing decades of the century, Americans were widely agreed that theirs was, or at least should be, the land of choice. In A Destiny of Choice?, David Blanke and David Steigerwald bring together important scholarship on the tension between two leading interpretations of modern American consumer culture. One interpretation holds that modern consumerism reflects the social, cultural, economic, and political changes that accompanied the country's transition from a local, producer economy dominated by limited choices and restricted credit to a national consumer marketplace based on the individual selection of mass-produced, mass-advertised, and mass-distributed goods. This debate is central to the economic difficulties seen in the United States today.
This book presents models and statistical methods for the analysis of recurrent event data. The authors provide broad, detailed coverage of the major approaches to analysis, while emphasizing the modeling assumptions that they are based on. More general intensity-based models are also considered, as well as simpler models that focus on rate or mean functions. Parametric, nonparametric and semiparametric methodologies are all covered, with procedures for estimation, testing and model checking.
This book constitutes the first serious attempt to explain the basics of econometrics and its applications in the clearest and simplest manner possible. Recognising the fact that a good level of mathematics is no longer a necessary prerequisite for economics/financial economics undergraduate and postgraduate programmes, it introduces this key subdivision of economics to an audience who might otherwise have been deterred by its complex nature.
Thoroughly classroom-tested, this introductory text covers all the topics that constitute a foundation for basic econometrics, with concise and intuitive explanations of technical material. Important proofs are shown in detail; however, the focus is on developing regression models and understanding the residuals.
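As a minimal illustration of the kind of exercise such a text builds toward (a sketch of my own, not taken from the book), the following Python snippet fits a simple regression by ordinary least squares and inspects the residuals; the data-generating values are invented.

```python
# Minimal sketch (not from the textbook): fitting a simple regression by OLS
# and inspecting the residuals. The data-generating values are invented.
import numpy as np

rng = np.random.default_rng(3)
n = 100
x = rng.uniform(0, 10, size=n)
y = 1.5 + 0.8 * x + rng.normal(scale=1.0, size=n)    # assumed "true" model

X = np.column_stack([np.ones(n), x])                 # design matrix with intercept
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)     # OLS estimates

residuals = y - X @ beta_hat
print("estimated intercept and slope:", beta_hat)
print("residual mean (should be ~0): ", residuals.mean())
```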
With a new author team contributing decades of practical experience, this fully updated and thoroughly classroom-tested second edition textbook prepares students and practitioners to create effective forecasting models and master the techniques of time series analysis. Taking a practical and example-driven approach, this textbook summarises the most critical decisions, techniques and steps involved in creating forecasting models for business and economics. Students are led through the process with an entirely new set of carefully developed theoretical and practical exercises. Chapters examine the key features of economic time series, univariate time series analysis, trends, seasonality, aberrant observations, conditional heteroskedasticity and ARCH models, non-linearity and multivariate time series, making this a complete practical guide. A companion website with downloadable datasets, exercises and lecture slides rounds out the full learning package.
This book covers the basics of processing and spectral analysis of monovariate discrete-time signals. The approach is practical, the aim being to acquaint the reader with the indications for and drawbacks of the various methods and to highlight possible misuses. The book is rich in original ideas, visualized in new and illuminating ways, and is structured so that parts can be skipped without loss of continuity. Many examples are included, based on synthetic data and real measurements from the fields of physics, biology, medicine, macroeconomics etc., and a complete set of MATLAB exercises requiring no previous experience of programming is provided. Prior advanced mathematical skills are not needed in order to understand the contents: a good command of basic mathematical analysis is sufficient. Where more advanced mathematical tools are necessary, they are included in an Appendix and presented in an easy-to-follow way. With this book, digital signal processing leaves the domain of engineering to address the needs of scientists and scholars in traditionally less quantitative disciplines, now facing increasing amounts of data.
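The book's exercises use MATLAB; purely as a loose analogue (a sketch of my own, not from the book), the Python snippet below estimates the spectrum of a synthetic noisy sinusoid with a raw periodogram and with Welch's averaged method. The sampling rate, signal frequency and noise level are invented for illustration.

```python
# Minimal sketch (not from the book): spectral estimation of a synthetic signal.
# Sampling rate, signal frequency and noise level are invented example values.
import numpy as np
from scipy import signal

fs = 200.0                          # assumed sampling frequency in Hz
t = np.arange(0, 10, 1 / fs)        # 10 seconds of data
x = np.sin(2 * np.pi * 12.5 * t) + 0.5 * np.random.default_rng(1).standard_normal(t.size)

# Raw periodogram: fine frequency resolution but a high-variance estimate.
f_per, p_per = signal.periodogram(x, fs=fs)

# Welch's method: averages periodograms of overlapping segments,
# trading resolution for a smoother, lower-variance estimate.
f_welch, p_welch = signal.welch(x, fs=fs, nperseg=512)

print("peak frequency (periodogram):", f_per[np.argmax(p_per)])
print("peak frequency (Welch):      ", f_welch[np.argmax(p_welch)])
```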
"Bayesian Econometrics" illustrates the scope and diversity of modern applications, reviews some recent advances, and highlights many desirable aspects of inference and computations. It begins with an historical overview by Arnold Zellner who describes key contributions to development and makes predictions for future directions. In the second paper, Giordani and Kohn makes suggestions for improving Markov chain Monte Carlo computational strategies. The remainder of the book is categorized according to microeconometric and time-series modeling. Models considered include an endogenous selection ordered probit model, a censored treatment-response model, equilibrium job search models and various other types. These are used to study a variety of applications for example dental insurance and care, educational attainment, voter opinions and the marketing share of various brands and an aggregate cross-section production function. Models and topics considered include the potential problem of improper posterior densities in a variety of dynamic models, selection and averaging for forecasting with vector autoregressions, a consumption capital-asset pricing model and various others. Applications involve U.S. macroeconomic variables, exchange rates, an investigation of purchasing power parity, data from London Metals Exchange, international automobile production data, and data from the Asian stock market.
Heavy tails - extreme events or values more common than expected - emerge everywhere: the economy, natural events, and social and information networks are just a few examples. Yet after decades of progress, they are still treated as mysterious, surprising, and even controversial, primarily because the necessary mathematical models and statistical methods are not widely known. This book, for the first time, provides a rigorous introduction to heavy-tailed distributions accessible to anyone who knows elementary probability. It tackles and tames the zoo of terminology for models and properties, demystifying topics such as the generalized central limit theorem and regular variation. It tracks the natural emergence of heavy-tailed distributions from a wide variety of general processes, building intuition. And it reveals the controversy surrounding heavy tails to be the result of flawed statistics, then equips readers to identify and estimate with confidence. Over 100 exercises complete this engaging package.
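As a rough illustration of the kind of estimation problem involved (a sketch of my own, not drawn from the book), the snippet below simulates Pareto-distributed data and applies the Hill estimator of the tail index; the true index and the cutoff k are arbitrary choices.

```python
# Illustrative sketch only: Hill estimator of the tail index of Pareto samples.
# The true tail index (alpha = 2.5) and the cutoff k are arbitrary choices.
import numpy as np

rng = np.random.default_rng(42)
alpha = 2.5
n = 100_000

# Pareto(alpha) samples with minimum value 1 via inverse-transform sampling;
# the variance would be infinite for alpha <= 2.
x = (1.0 - rng.random(n)) ** (-1.0 / alpha)

def hill_estimator(sample, k):
    """Hill estimator of the tail index from the k largest order statistics."""
    tail = np.sort(sample)[-k:]            # the k largest observations, ascending
    logs = np.log(tail / tail[0])          # log-exceedances over the k-th largest
    return 1.0 / np.mean(logs[1:])         # reciprocal of the mean log-exceedance

print("true alpha:", alpha, "Hill estimate:", hill_estimator(x, k=1_000))
```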
Upon the backdrop of impressive progress made by the Indian economy during the last two decades after the large-scale economic reforms in the early 1990s, this book evaluates the performance of the economy on some income and non-income dimensions of development at the national, state and sectoral levels. It examines regional economic growth and inequality in income originating from agriculture, industry and services. In view of the importance of the agricultural sector, despite its declining share in gross domestic product, it evaluates the performance of agricultural production and the impact of agricultural reforms on spatial integration of food grain markets. It studies rural poverty, analyzing the trend in employment, the trickle-down process and the inclusiveness of growth in rural India. It also evaluates the impact of microfinance, as an instrument of financial inclusion, on the socio-economic conditions of rural households. Lastly, it examines the relative performance of fifteen major states of India in terms of education, health and human development. An important feature of the book is that it approaches these issues by rigorously applying advanced econometric methods, focusing primarily on their regional disparities during the post-reform period vis-a-vis the pre-reform period. It offers important results to guide policies for future development.
Building on the strength of the first edition, Quantitative Methods for Business and Economics provides a simple introduction to the mathematical and statistical techniques needed in business. This book is accessible and easy to use, with the emphasis clearly on how to apply quantitative techniques to business situations. It includes numerous real world applications and many opportunities for student interaction. It is clearly focused on business, management and economics students taking a single module in Quantitative Methods.
This volume of Advances in Econometrics contains articles that examine key topics in the modeling and estimation of dynamic stochastic general equilibrium (DSGE) models. Because DSGE models combine micro- and macroeconomic theory with formal econometric modeling and inference, over the past decade they have become an established framework for analyzing a variety of issues in empirical macroeconomics. The research articles make contributions in several key areas in DSGE modeling and estimation. In particular, papers cover the modeling and role of expectations, the study of optimal monetary policy in two-country models, and the problem of non-invertibility. Other interesting areas of inquiry include the analysis of parameter identification in new open economy macroeconomic models and the modeling of trend inflation shocks. The second part of the volume is devoted to articles that offer innovations in econometric methodology. These papers advance new techniques for addressing major inferential problems and include discussion and applications of Laplace-type, frequency domain, empirical likelihood and method of moments estimators.
Across the social sciences there has been increasing focus on reproducibility, i.e., the ability to examine a study's data and methods to ensure accuracy by reproducing the study. Reproducible Econometrics Using R combines an overview of key issues and methods with an introduction to how to use them using open source software (R) and recently developed tools (R Markdown and bookdown) that allow the reader to engage in reproducible econometric research. Jeffrey S. Racine provides a step-by-step approach, and covers five sets of topics: i) linear time series models, ii) robust inference, iii) robust estimation, iv) model uncertainty, and v) advanced topics. The time series material highlights the difference between time series analysis, which focuses on forecasting, versus cross-sectional analysis, where the focus is typically on model parameters that have economic interpretations. For the time series material, the reader begins with a discussion of random walks, white noise, and non-stationarity. The reader is next exposed to the pitfalls of using standard inferential procedures that are popular in cross-sectional settings when modelling time series data, and is introduced to alternative procedures that form the basis for linear time series analysis. For the robust inference material, the reader is introduced to the potential advantages of bootstrapping and jackknifing versus the use of asymptotic theory, and a range of numerical approaches are presented. For the robust estimation material, the reader is presented with a discussion of issues surrounding outliers in data and methods for addressing their presence. Finally, the model uncertainty material outlines two dominant approaches for dealing with model uncertainty, namely model selection and model averaging. Throughout the book there is an emphasis on the benefits of using R and other open source tools for ensuring reproducibility. The advanced material covers machine learning methods (support vector machines that are useful for classification) and nonparametric kernel regression, which provides the reader with more advanced methods for confronting model uncertainty. The book is well suited for advanced undergraduate and graduate students alike. Assignments, exams, slides, and a solution manual are available for instructors.
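The book itself works in R with R Markdown; purely as a language-neutral illustration of the bootstrap idea mentioned above (this Python sketch is not from the book), the snippet below computes a percentile bootstrap confidence interval for a sample mean. The data, sample size and number of replications are invented.

```python
# Minimal sketch (the book works in R; this Python example is only an analogue):
# a nonparametric bootstrap confidence interval for a sample mean.
import numpy as np

rng = np.random.default_rng(7)
data = rng.exponential(scale=2.0, size=200)      # skewed toy sample, invented here

B = 5_000                                        # number of bootstrap replications
boot_means = np.empty(B)
for b in range(B):
    resample = rng.choice(data, size=data.size, replace=True)
    boot_means[b] = resample.mean()

# Percentile interval: no reliance on an asymptotic normal approximation.
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"sample mean = {data.mean():.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
```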
Sociological theories of crime include: theories of strain, which blame crime on personal stressors; theories of social learning, which blame crime on its social rewards and see crime more as an institution in conflict with other institutions rather than as individual deviance; and theories of control, which look at crime as natural and rewarding, and explore the formation of institutions that control crime. Theorists of corruption generally agree that corruption is an expression of the Patron-Client relationship in which a person with access to resources trades resources with kin and members of the community in exchange for loyalty. Some approaches to modeling crime and corruption do not involve an explicit simulation: rule based systems; Bayesian networks; game theoretic approaches, often based on rational choice theory; and Neoclassical Econometrics, a rational choice-based approach. Simulation-based approaches take into account greater complexities of interacting parts of social phenomena. These include fuzzy cognitive maps and fuzzy rule sets that may incorporate feedback; and agent-based simulation, which can go a step farther by computing new social structures not previously identified in theory. The latter include cognitive agent models, in which agents learn how to perceive their environment and act upon the perceptions of their individual experiences; and reactive agent simulation, which, while less capable than cognitive-agent simulation, is adequate for testing a policy's effects with existing societal structures. For example, NNL is a cognitive agent model based on the REPAST Simphony toolkit.
How do groups form, how do institutions come into being, and when do moral norms and practices emerge? This volume explores how game-theoretic approaches can be extended to consider broader questions that cross scales of organization, from individuals to cooperatives to societies. Game theory's strategic formulation of central problems in the analysis of social interactions is used to develop multi-level theories that examine the interplay between individuals and the collectives they form. The concept of cooperation is examined at a higher level than that usually addressed by game theory, especially focusing on the formation of groups and the role of social norms in maintaining their integrity, with positive and negative implications. The authors suggest that conventional analyses need to be broadened to explain how heuristics, like concepts of fairness, arise and become formalized into the ethical principles embraced by a society.
Financial models are an inescapable feature of modern financial markets. Yet over-reliance on these models, and the failure to test them properly, is now widely recognized as one of the main causes of the financial crisis of 2007-2011. Since this crisis, there has been an increase in the amount of scrutiny and testing applied to such models, and validation has become an essential part of model risk management at financial institutions. The book covers all of the major risk areas that a financial institution is exposed to and uses models for, including market risk, interest rate risk, retail credit risk, wholesale credit risk, compliance risk, and investment management. The book discusses current practices and pitfalls that model risk users need to be aware of and identifies areas where validation can be advanced in the future. In doing so, it provides the first unified framework for validating risk management models.
Economists are regularly confronted with results of quantitative economics research. Econometrics: Theory and Applications with EViews provides a broad introduction to quantitative economic methods, for example how models arise, their underlying assumptions and how estimates of parameters or other economic quantities are computed. The author combines econometric theory with practice by demonstrating its use with the software package EViews through extensive use of screen shots. The emphasis is on understanding how to select the right method of analysis for a given situation, and how to actually apply the theoretical methodology correctly. The EViews software package is available from 'Quantitative Micro Software'. Written for any undergraduate or postgraduate course in Econometrics.
Score your highest in econometrics? Easy. Econometrics can prove challenging for many students unfamiliar with the terms and concepts discussed in a typical econometrics course. "Econometrics For Dummies" eliminates that confusion with easy-to-understand explanations of important topics in the study of economics. "Econometrics For Dummies" breaks down this complex subject and provides you with an easy-to-follow course supplement to further refine your understanding of how econometrics works and how it can be applied in real-world situations. An excellent resource for anyone participating in a college or graduate level econometrics course, it provides you with an easy-to-follow introduction to the techniques and applications of econometrics and helps you score high on exam day. If you're seeking a degree in economics and looking for a plain-English guide to this often-intimidating course, "Econometrics For Dummies" has you covered.
A ground-breaking book that reveals why our human biases affect the way we receive and interpret information.
You may like...
Financial and Macroeconomic… - Francis X. Diebold, Kamil Yilmaz (Hardcover, R3,567)
Pricing Decisions in the Euro Area - How… - Silvia Fabiani, Claire Loupias, … (Hardcover, R2,160)
Agent-Based Modeling and Network… - Akira Namatame, Shu-Heng Chen (Hardcover, R2,970)
Ranked Set Sampling - 65 Years Improving… - Carlos N. Bouza-Herrera, Amer Ibrahim Falah Al-Omari (Paperback)
Design and Analysis of Time Series… - Richard McCleary, David McDowall, … (Hardcover, R3,286)
Introductory Econometrics - A Modern… - Jeffrey Wooldridge (Hardcover)