Macroeconometric models, in many ways the flagships of the economist's profession in the 1960s, came under increasing attack from both theoretical economists and practitioners in the late 1970s. Critics referred to their lack of microeconomic theoretical foundations, ad hoc models of expectations, lack of identification, neglect of dynamics and non-stationarity, and poor forecasting properties. By the start of the 1990s, the status of macroeconometric models had declined markedly, and they had fallen completely out of favour with academic economists. Nevertheless, unlike the dinosaurs to which they have often been likened, macroeconometric models have never completely disappeared from the scene. This book describes how and why the discipline of macroeconometric modelling continues to play a role in economic policymaking by adapting to changing demands, in response, for instance, to new policy regimes like inflation targeting. Model builders have adopted new insights from economic theory and taken advantage of the methodological and conceptual advances within time series econometrics over the last twenty years. The modelling of wages and prices takes a central place in the book as the authors interpret and evaluate the last forty years of international research experience in the light of the Norwegian 'main course' model of inflation in a small open economy. The preferred model is a dynamic model of incomplete competition, which is evaluated against alternatives as diverse as the Phillips curve, Nickell-Layard wage curves, the New Keynesian Phillips curve, and monetary inflation models on data from the Euro area, the UK, and Norway. The wage-price core model is built into a small econometric model for Norway to analyse the transmission mechanism and to evaluate monetary policy rules. The final chapter explores the main sources of forecast failure likely to occur in a practical modelling situation, using the large-scale model RIMINI and the inflation models of earlier chapters as case studies.
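A convenient reference point for the model comparisons mentioned above is a stylized wage Phillips curve; in generic textbook form (a stock specification, not the authors' preferred incomplete-competition model),

    \Delta w_t = \alpha + \beta \, \Delta p_t - \gamma \, u_t + \varepsilon_t ,

where w_t and p_t are log wages and prices, u_t is the unemployment rate, and \varepsilon_t is an error term. The alternatives evaluated in the book differ chiefly in how such equations restrict the long-run relationship between wages, prices, and unemployment.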
There is no book currently available that gives a comprehensive treatment of the design, construction, and use of index numbers. However, there is a pressing need for one in view of the increasing and more sophisticated employment of index numbers in the whole range of applied economics and specifically in discussions of macroeconomic policy. In this book, R. G. D. Allen meets this need in simple and consistent terms and with comprehensive coverage. The text begins with an elementary survey of the index-number problem before turning to more detailed treatments of the theory and practice of index numbers. The binary case in which one time period is compared with another is first developed and illustrated with numerous examples. This is to prepare the ground for the central part of the text on runs of index numbers. Particular attention is paid both to fixed-weighted and to chain forms as used in a wide range of published index numbers taken mainly from British official sources. This work deals with some further problems in the construction of index numbers, problems which are both troublesome and largely unresolved. These include the use of sampling techniques in index-number design and the theoretical and practical treatment of quality changes. It is also devoted to a number of detailed and specific applications of index-number techniques to problems ranging from national-income accounting, through the measurement of inequality of incomes and international comparisons of real incomes, to the use of index numbers of stock-market prices. Aimed primarily at students of economics, whatever their age and range of interests, this work will also be of use to those who handle index numbers professionally. "R. G. D. Allen" (1906-1983) was Professor Emeritus at the University of London. He was also once president of the Royal Statistical Society and Treasurer of the British Academy where he was a fellow. He is the author of "Basic Mathematics," "Mathematical Analysis for Economists," "Mathematical Economics" and "Macroeconomic Theory."
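The binary comparisons at the core of the text reduce to simple weighted aggregates. Here is a minimal sketch (with invented prices and quantities, not an example from the book) of the Laspeyres and Paasche forms, whose period-to-period links are multiplied together to produce a chain index:

    # Base period 0 and current period 1; all figures are invented.
    p0, q0 = [1.00, 2.50, 0.80], [10, 4, 25]
    p1, q1 = [1.10, 2.40, 1.00], [9, 5, 24]

    # Laspeyres: current prices weighted by base-period quantities.
    laspeyres = sum(p * q for p, q in zip(p1, q0)) / sum(p * q for p, q in zip(p0, q0))
    # Paasche: current prices weighted by current-period quantities.
    paasche = sum(p * q for p, q in zip(p1, q1)) / sum(p * q for p, q in zip(p0, q1))
    print(f"Laspeyres: {laspeyres:.4f}, Paasche: {paasche:.4f}")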
Experimental methods in economics respond to circumstances that are not completely dictated by accepted theory or outstanding problems. While the field of economics makes sharp distinctions and produces precise theory, the work of experimental economics sometimes appears blurred and may produce results that vary from strong support to little or partial support of the relevant theory.
This textbook addresses postgraduate students in applied mathematics, probability, and statistics, as well as computer scientists, biologists, physicists and economists, who are seeking a rigorous introduction to applied stochastic processes. Pursuing a pedagogic approach, the content follows a path of increasing complexity, from the simplest random sequences to advanced stochastic processes. Illustrations are provided from many applied fields, together with connections to ergodic theory, information theory, reliability and insurance. The main content is also complemented by a wealth of examples and exercises with solutions.
Showcasing fuzzy set theory, this book highlights the enormous potential of fuzzy logic in helping to analyse the complexity of a wide range of socio-economic patterns and behaviour. The contributions to this volume explore the most up-to-date fuzzy-set methods for the measurement of socio-economic phenomena in a multidimensional and/or dynamic perspective. Thus far, fuzzy-set theory has primarily been utilised in the social sciences in the field of poverty measurement. These chapters examine the latest work in this area, while also exploring further applications including social exclusion, the labour market, educational mismatch, sustainability, quality of life and violence against women. The authors demonstrate that real-world situations are often characterised by imprecision, uncertainty and vagueness, which cannot be properly described by classical set theory, with its simple true-false binary logic. By contrast, fuzzy-set theory has been shown to be a powerful tool for describing the multidimensionality and complexity of social phenomena. This book will be of significant interest to economists, statisticians and sociologists utilising quantitative methods to explore socio-economic phenomena.
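As a toy illustration of the contrast drawn above between crisp and fuzzy sets (not an example from the book, and with invented thresholds), a fuzzy membership function can grade poverty status continuously between two income levels instead of applying a single cut-off:

    def poverty_membership(income, z_low=10_000, z_high=20_000):
        """Degree of membership in the 'poor' set, in [0, 1].

        Below z_low: fully poor (1); above z_high: not poor (0);
        in between: linearly declining membership. The thresholds
        are hypothetical, chosen only for illustration.
        """
        if income <= z_low:
            return 1.0
        if income >= z_high:
            return 0.0
        return (z_high - income) / (z_high - z_low)

    print(poverty_membership(15_000))  # 0.5: partially poor

A crisp set would force the 15,000 case to be classified as simply poor or not poor; the fuzzy membership degree retains the in-between information that multidimensional measures build on.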
The two-volume book studies the economic and industrial development of Japan and China in modern times and draws distinctions between the different paths of industrialization and economic modernization taken in the two countries, based on statistical materials, quantitative analysis and multivariate statistical analysis. The first volume analyses the relationship between technological innovation and economic development in Japan before World War II and sheds light on technological innovation in the Japanese context, with particular emphasis on the importance of the patent system. The second volume studies the basic conditions for and overall course of industrial development, chiefly during the period of the Republic of China (1912-1949), taking a comparative perspective and bringing the case of modern Japan into the discussion. The book will appeal to academics and general readers interested in economic development and the modern economic history of East Asia, development economics, as well as industrial and technological history.
Bringing together leading-edge research and innovative energy markets econometrics, this book collects the author's most important recent contributions in energy economics. In particular, the book:
* applies recent advances in the field of applied econometrics to investigate a number of issues regarding energy markets, including the theory of storage and the efficient markets hypothesis
* presents the basic stylized facts on energy price movements using correlation analysis, causality tests, integration theory, cointegration theory, as well as recently developed procedures for testing for shared and codependent cycles
* uses recent advances in the financial econometrics literature to model time-varying returns and volatility in energy prices and to test for causal relationships between energy prices and their volatilities
* explores the functioning of electricity markets and applies conventional models of time series analysis to investigate a number of issues regarding wholesale power prices in the western North American markets
* applies tools from statistics and dynamical systems theory to test for nonlinear dynamics and deterministic chaos in a number of North American hydrocarbon markets (those of ethane, propane, normal butane, iso-butane, naphtha, crude oil, and natural gas)
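As a minimal sketch of one technique named above (not the author's code), an Engle-Granger cointegration test between two energy price series can be run with statsmodels; the series here are simulated stand-ins sharing a common stochastic trend:

    import numpy as np
    from statsmodels.tsa.stattools import coint

    rng = np.random.default_rng(0)
    # Simulated stand-ins for two cointegrated energy price series.
    trend = np.cumsum(rng.normal(size=500))
    crude = trend + rng.normal(scale=0.5, size=500)
    natgas = 0.8 * trend + rng.normal(scale=0.5, size=500)

    t_stat, p_value, _ = coint(crude, natgas)
    print(f"Engle-Granger t-statistic: {t_stat:.2f}, p-value: {p_value:.3f}")

A small p-value is evidence that the two series share a long-run equilibrium relationship, which is what cointegration theory formalizes.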
A fascinating and comprehensive history, this book explores the most important transformation in twentieth-century economics: the creation of econometrics. Containing fresh archival material that has not been published before, and taking Ragnar Frisch as the narrator, Francisco Louca discusses both the key events - the establishment of the Econometric Society, the Cowles Commission and the journal Econometrica - and the major players - economists like Wesley Mitchell, mathematicians like John von Neumann and statisticians like Karl Pearson - who shaped the development of econometrics. He discusses the evolution of their thought, detailing the debates, the quarrels and the interrogations that crystallized their work, and even offers a conclusion of sorts, suggesting that some of the more influential thinkers abandoned econometrics or became critical of its development. International in scope and appeal, The Years of High Econometrics is an excellent accompaniment for students taking courses on probability, econometric methods and the history of economic thought.
This new textbook by Urs Birchler and Monika Butler is an introduction to the study of how information affects economic relations. The authors provide a narrative treatment of the more formal concepts of Information Economics, using easy-to-understand and lively illustrations from film and literature, and nutshell examples. The book also comes with a supporting website (www.alicebob.info), maintained by the authors.
The book first discusses in depth various aspects of the well-known inconsistency that arises when explanatory variables in a linear regression model are measured with error. Despite this inconsistency, the region where the true regression coefficients lie can sometimes be characterized in a useful way, especially when bounds are known on the measurement error variance, but also when such information is absent. Wage discrimination with imperfect productivity measurement is discussed as an important special case.
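For a single regressor measured with classical error, the inconsistency in question is the standard attenuation result; in generic notation (a textbook statement, not necessarily the book's),

    \operatorname{plim} \hat{\beta}_{\mathrm{OLS}} = \beta \cdot \frac{\sigma_x^2}{\sigma_x^2 + \sigma_u^2},

where \sigma_x^2 is the variance of the true regressor and \sigma_u^2 is the measurement-error variance. A known upper bound on \sigma_u^2 therefore translates directly into an interval for the true \beta, which is the kind of characterization the book develops.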
Examining the crucial topic of race relations, this book explores the economic and social environments that play a significant role in determining economic outcomes, and why racial disparities persist. With contributions from a range of international authors, including Edward Wolff and Catherine Weinberger, the book compares how various racial groups fare and how they are affected in different ways by economic and social institutions. It is an invaluable resource for researchers and academics across a number of disciplines, including political economy, ethnic and multicultural studies, Asian studies, and sociology.
"Applied Econometrics for Health Economists" introduces readers to the appropriate econometric techniques for use with different forms of survey data, known collectively as microeconometrics. The book provides a complete illustration of the steps involved in doing microeconometric research. The only study to deal with practical analysis of qualitative and categorical variables, it also emphasises applied work, illustrating the use of relevant computer software applied to large-scale survey datasets. This is a comprehensive reference guide - it contains a glossary of terms, a technical appendix, software appendix, references, and suggestions for further reading. It is concise and easy to read - technical details are avoided in the main text and key terms are highlighted. It is essential reading for health economists as well as undergraduate and postgraduate students of health economics. "Given the extensive use of individual-level survey data in health economics, it is important to understand the econometric techniques available to applied researchers. Moreover, it is just as important to be aware of their limitations and pitfalls. The purpose of this book is to introduce readers to the appropriate econometric techniques for use with different forms of survey data - known collectively as microeconometrics." - Andrew Jones, in the Preface.
Tackling the cybersecurity challenge is a matter of survival for society at large. Cyber attacks are rapidly increasing in sophistication and magnitude, and in their destructive potential. New threats emerge regularly, the last few years having seen a ransomware boom and distributed denial-of-service attacks leveraging the Internet of Things. For organisations, the use of cybersecurity risk management is essential in order to manage these threats. Yet current frameworks have drawbacks which can lead to the suboptimal allocation of cybersecurity resources. Cyber insurance has been touted as part of the solution - based on the idea that insurers can incentivize companies to improve their cybersecurity by offering premium discounts - but cyber insurance levels remain limited. This is because companies have difficulty determining which cyber insurance products to purchase, and insurance companies struggle to accurately assess cyber risk and thus develop cyber insurance products. To deal with these challenges, this volume presents new models for cybersecurity risk management, partly based on the use of cyber insurance. It contains a set of mathematical models for cybersecurity risk management, including (i) a model to assist companies in determining their optimal budget allocation between security products and cyber insurance and (ii) a model to assist insurers in designing cyber insurance products. The models use adversarial risk analysis to account for the behavior of threat actors (as well as the behavior of companies and insurers). To inform these models, the authors draw on psychological and behavioural economics studies of decision-making by individuals regarding cybersecurity and cyber insurance, as well as on organizational decision-making studies involving cybersecurity and cyber insurance. The book's theoretical and methodological findings will appeal to researchers across a wide range of cybersecurity-related disciplines, including risk and decision analysis, analytics, technology management, actuarial sciences, behavioural sciences, and economics. The practical findings will help cybersecurity professionals and insurers enhance cybersecurity and cyber insurance, thus benefiting society as a whole. This book grew out of a two-year European Union-funded project under Horizon 2020, called CYBECO (Supporting Cyber Insurance from a Behavioral Choice Perspective).
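The following is a deliberately toy sketch, not the book's model, of the kind of trade-off described in (i): splitting a fixed budget between security spending (which lowers breach probability) and an insurance premium (which buys coverage), then choosing the split with the lowest expected total cost. All numbers and functional forms are invented for illustration only.

    import numpy as np

    BUDGET = 100_000          # total annual budget (hypothetical)
    LOSS = 2_000_000          # loss if a breach occurs (hypothetical)
    BASE_PROB = 0.10          # breach probability with zero security spend
    COVER_PER_PREMIUM = 10    # payout purchased per unit of premium (hypothetical)

    def expected_cost(security_spend):
        premium = BUDGET - security_spend
        # Toy assumption: spending reduces breach probability exponentially.
        p_breach = BASE_PROB * np.exp(-security_spend / 50_000)
        coverage = min(COVER_PER_PREMIUM * premium, LOSS)
        return BUDGET + p_breach * (LOSS - coverage)

    grid = np.linspace(0, BUDGET, 1001)
    best = grid[np.argmin([expected_cost(s) for s in grid])]
    print(f"Toy-optimal security spend: {best:,.0f} of {BUDGET:,}")

The book's models go much further, using adversarial risk analysis rather than a fixed breach probability; the sketch only illustrates why the budget split is a genuine optimization problem.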
The goal of Portfolio Rebalancing is to provide mathematical and empirical analysis of the effects of portfolio rebalancing on portfolio returns and risks. The mathematical analysis answers the question of when and why fixed-weight portfolios might outperform buy-and-hold portfolios based on volatilities and returns. The empirical analysis, aided by mathematical insights, examines the effects of portfolio rebalancing in capital markets for asset allocation portfolios and portfolios of stocks, bonds, and commodities.
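As a quick illustration of the comparison the book formalizes (a simulation sketch under invented parameters, not the book's analysis), one can simulate two assets and compare a monthly-rebalanced 60/40 portfolio with buy-and-hold:

    import numpy as np

    rng = np.random.default_rng(42)
    months, w = 240, np.array([0.6, 0.4])
    mu = np.array([0.06, 0.03]) / 12           # hypothetical monthly drifts
    sigma = np.array([0.15, 0.05]) / 12**0.5   # hypothetical monthly vols

    returns = rng.normal(mu, sigma, size=(months, 2))
    growth = np.cumprod(1 + returns, axis=0)

    # Buy-and-hold: initial 60/40 dollar split, never touched again.
    buy_and_hold = (w * growth[-1]).sum()

    # Fixed-weight: reset the dollar holdings to 60/40 every month.
    value, holdings = 1.0, w.copy()
    for r in returns:
        value = (holdings * (1 + r)).sum()
        holdings = value * w
    print(f"rebalanced: {value:.3f}  buy-and-hold: {buy_and_hold:.3f}")

Which strategy wins depends on the volatilities and return paths drawn, which is precisely the dependence the book's mathematical analysis characterizes.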
The quantitative modeling of complex systems of interacting risks is a fairly recent development in the financial and insurance industries. Over the past decades, there has been tremendous innovation and development in the actuarial field. In addition to undertaking mortality and longevity risks in traditional life and annuity products, insurers have faced unprecedented financial risks since the introduction of equity-linked insurance in the 1960s. As the industry moves into the new territory of managing many intertwined financial and insurance risks, non-traditional problems and challenges arise, presenting great opportunities for technology development. Today's computational power and technology make it possible for the life insurance industry to develop highly sophisticated models, which were impossible just a decade ago. Nonetheless, as more industrial practices and regulations move towards dependence on stochastic models, the demand for computational power continues to grow. While the industry continues to rely heavily on hardware innovations, trying to make brute force methods faster and more palatable, we are approaching a crossroads about how to proceed. An Introduction to Computational Risk Management of Equity-Linked Insurance provides a resource for students and entry-level professionals to understand the fundamentals of industrial modeling practice, but also gives a glimpse of software methodologies for modeling and computational efficiency. Features:
* Provides a comprehensive and self-contained introduction to quantitative risk management of equity-linked insurance with exercises and programming samples
* Includes a collection of mathematical formulations of risk management problems presenting opportunities and challenges to applied mathematicians
* Summarizes state-of-the-art computational techniques for risk management professionals
* Bridges the gap between the latest developments in finance and actuarial literature and the practice of risk management for investment-combined life insurance
* Gives a comprehensive review of both Monte Carlo simulation methods and non-simulation numerical methods
Runhuan Feng is an Associate Professor of Mathematics and the Director of Actuarial Science at the University of Illinois at Urbana-Champaign. He is a Fellow of the Society of Actuaries and a Chartered Enterprise Risk Analyst. He is a Helen Corley Petit Professorial Scholar and the State Farm Companies Foundation Scholar in Actuarial Science. Runhuan received a Ph.D. degree in Actuarial Science from the University of Waterloo, Canada. Prior to joining Illinois, he held a tenure-track position at the University of Wisconsin-Milwaukee, where he was named a Research Fellow. Runhuan has received numerous grants and research contracts from the Actuarial Foundation and the Society of Actuaries. He has published a series of papers in top-tier actuarial and applied probability journals on stochastic analytic approaches in risk theory and quantitative risk management of equity-linked insurance. In recent years, he has dedicated his efforts to developing computational methods for managing market innovations in areas of investment-combined insurance and retirement planning.
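As a minimal sketch of the Monte Carlo methods the book reviews (with invented contract parameters, and not code from the book), the value of a simple guaranteed minimum maturity benefit can be estimated by simulating the underlying fund and discounting the guarantee shortfall:

    import numpy as np

    rng = np.random.default_rng(1)
    S0, G, T = 100.0, 100.0, 10.0    # fund value, guarantee, maturity (hypothetical)
    r, sigma, n = 0.03, 0.2, 100_000

    # Terminal fund values under risk-neutral geometric Brownian motion.
    Z = rng.standard_normal(n)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

    # GMMB payoff: the insurer covers the shortfall below the guarantee.
    payoff = np.maximum(G - ST, 0.0)
    value = np.exp(-r * T) * payoff.mean()
    print(f"Monte Carlo GMMB value: {value:.2f}")

Brute-force simulation of this kind is exactly what the book's non-simulation numerical methods aim to accelerate or replace.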
Since the publication of the first edition over 30 years ago, the literature related to Pareto distributions has flourished to encompass computer-based inference methods. Pareto Distributions, Second Edition provides broad, up-to-date coverage of the Pareto model and its extensions. This edition expands several chapters to accommodate recent results and reflect the increased use of more computer-intensive inference procedures. New to the Second Edition:
* New material on multivariate inequality
* Recent ways of handling the problems of inference for Pareto models and their generalizations and extensions
* New discussions of bivariate and multivariate income and survival models
This book continues to provide researchers with a useful resource for understanding the statistical aspects of Pareto and Pareto-like distributions. It covers income models and properties of Pareto distributions, measures of inequality for studying income distributions, inference procedures for Pareto distributions, and various multivariate Pareto distributions existing in the literature.
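As a small illustration of the kind of inference the book covers (a sketch, not the book's procedures), the shape parameter of a classical Pareto sample can be estimated by maximum likelihood, which also yields the implied Gini index 1/(2*alpha - 1):

    import numpy as np
    from scipy.stats import pareto

    rng = np.random.default_rng(7)
    alpha_true, x_min = 2.5, 1.0
    sample = pareto.rvs(b=alpha_true, scale=x_min, size=5000, random_state=rng)

    # MLE for the classical Pareto: alpha_hat = n / sum(log(x_i / x_min)).
    xm_hat = sample.min()
    alpha_hat = len(sample) / np.log(sample / xm_hat).sum()
    gini_hat = 1.0 / (2.0 * alpha_hat - 1.0)   # Gini index of a Pareto(I) law
    print(f"alpha_hat = {alpha_hat:.2f}, implied Gini = {gini_hat:.3f}")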
This book brings together the latest research in the areas of market microstructure and high-frequency finance, along with new econometric methods to address critical practical issues in these areas of research. Thirteen chapters, each of which makes a valuable and significant contribution to the existing literature, have been brought together, spanning a wide range of topics including information asymmetry and the information content in limit order books, high-frequency return distribution models, multivariate volatility forecasting, analysis of individual trading behaviour, the analysis of liquidity, price discovery across markets, market microstructure models and the information content of order flow. These issues are central both to the rapidly expanding practice of high-frequency trading in financial markets and to the further development of the academic literature in this area. The volume will therefore be of immediate interest to practitioners and academics. This book was originally published as a special issue of the European Journal of Finance.
Estimate and Interpret Results from Ordered Regression Models
Ordered Regression Models: Parallel, Partial, and Non-Parallel Alternatives presents regression models for ordinal outcomes, which are variables that have ordered categories but unknown spacing between the categories. The book provides comprehensive coverage of the three major classes of ordered regression models (cumulative, stage, and adjacent) as well as variations based on the application of the parallel regression assumption. The authors first introduce the three "parallel" ordered regression models before covering unconstrained partial, constrained partial, and nonparallel models. They then review existing tests for the parallel regression assumption, propose new variations of several tests, and discuss important practical concerns related to tests of the parallel regression assumption. The book also describes extensions of ordered regression models, including heterogeneous choice models, multilevel ordered models, and the Bayesian approach to ordered regression models. Some chapters include brief examples using Stata and R. This book offers a conceptual framework for understanding ordered regression models based on the probability of interest and the application of the parallel regression assumption. It demonstrates the usefulness of numerous modeling alternatives, showing you how to select the most appropriate model given the type of ordinal outcome and restrictiveness of the parallel assumption for each variable.
Web Resource: More detailed examples are available on a supplementary website. The site also contains JAGS, R, and Stata code to estimate the models, along with syntax to reproduce the results.
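The book's own examples use Stata and R; purely to illustrate a cumulative (proportional-odds) model of the kind described, here is a hedged Python sketch using statsmodels' OrderedModel on simulated data:

    import numpy as np
    import pandas as pd
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    rng = np.random.default_rng(3)
    n = 1000
    x = rng.normal(size=n)
    # Latent index with logistic noise, cut into three ordered categories.
    latent = 1.2 * x + rng.logistic(size=n)
    y = pd.cut(latent, bins=[-np.inf, -1, 1, np.inf], labels=[0, 1, 2])

    model = OrderedModel(y.astype(int), x.reshape(-1, 1), distr="logit")
    result = model.fit(method="bfgs", disp=False)
    print(result.params)   # one slope plus two estimated thresholds

The single slope shared across both category thresholds is the parallel regression assumption; the partial and nonparallel alternatives covered in the book relax exactly that constraint.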
Extreme Value Modeling and Risk Analysis: Methods and Applications presents a broad overview of statistical modeling of extreme events along with the most recent methodologies and various applications. The book brings together background material and advanced topics, eliminating the need to sort through the massive amount of literature on the subject. After reviewing univariate extreme value analysis and multivariate extremes, the book explains univariate extreme value mixture modeling, threshold selection in extreme value analysis, and threshold modeling of non-stationary extremes. It presents new results for block-maxima of vine copulas, develops time series of extremes with applications from climatology, describes max-autoregressive and moving maxima models for extremes, and discusses spatial extremes and max-stable processes. The book then covers simulation and conditional simulation of max-stable processes; inference methodologies, such as composite likelihood, Bayesian inference, and approximate Bayesian computation; and inferences about extreme quantiles and extreme dependence. It also explores novel applications of extreme value modeling, including financial investments, insurance and financial risk management, weather and climate disasters, clinical trials, and sports statistics. Risk analyses related to extreme events require the combined expertise of statisticians and domain experts in climatology, hydrology, finance, insurance, sports, and other fields. This book connects statistical/mathematical research with critical decision and risk assessment/management applications to stimulate more collaboration between these statisticians and specialists.
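As a minimal sketch of the threshold modelling discussed above (not code from the book, with simulated stand-in data), exceedances over a high threshold can be fit with a generalized Pareto distribution and used to estimate an extreme quantile:

    import numpy as np
    from scipy.stats import genpareto

    rng = np.random.default_rng(11)
    losses = rng.standard_t(df=4, size=10_000)   # heavy-tailed stand-in data

    u = np.quantile(losses, 0.95)                # high threshold (a common choice)
    exceedances = losses[losses > u] - u

    # Fit the GPD to exceedances, pinning the location at zero.
    shape, loc, scale = genpareto.fit(exceedances, floc=0)

    # Peaks-over-threshold quantile: P(X > u) is about 0.05 here, so the
    # 99.9% quantile corresponds to the GPD quantile at 1 - 0.001/0.05.
    p = 0.999
    q_extreme = u + genpareto.ppf(1 - (1 - p) / 0.05, shape, scale=scale)
    print(f"shape={shape:.2f}, scale={scale:.2f}, 99.9% quantile ~ {q_extreme:.2f}")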
Economic evaluation has become an essential component of clinical trial design to show that new treatments and technologies offer value to payers in various healthcare systems. Although many books exist that address the theoretical or practical aspects of cost-effectiveness analysis, this book differentiates itself from the competition by detailing how to apply health economic evaluation techniques in a clinical trial context, from both academic and pharmaceutical/commercial perspectives. It also includes a special chapter for clinical trials in cancer. Design & Analysis of Clinical Trials for Economic Evaluation & Reimbursement is not just about performing cost-effectiveness analyses. It also emphasizes the strategic importance of economic evaluation and offers guidance and advice on the complex factors at play before, during, and after an economic evaluation. Filled with detailed examples, the book bridges the gap between applications of economic evaluation in industry (mainly pharmaceutical) and what students may learn in university courses. It provides readers with access to SAS and STATA code. In addition, Windows-based software for sample size and value of information analysis is available free of charge, making it a valuable resource for students considering a career in this field or for those who simply wish to know more about applying economic evaluation techniques. The book includes coverage of trial design, case report form design, quality of life measures, sample sizes, submissions to regulatory authorities for reimbursement, Markov models, cohort models, and decision trees. Examples and case studies are provided at the end of each chapter. Presenting first-hand insights into how economic evaluations are performed from a drug development perspective, the book supplies readers with the foundation required to succeed in an environment where clinical trials and cost-effectiveness of new treatments are central. It also includes thought-provoking exercises for use in classroom and seminar discussions.
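At the heart of such evaluations is a simple summary quantity, the incremental cost-effectiveness ratio (ICER); a minimal sketch with invented trial arms, not data from the book:

    # Hypothetical per-patient means from a two-arm trial (invented numbers).
    cost_new, cost_std = 12_500.0, 9_000.0   # mean costs per arm
    qaly_new, qaly_std = 6.2, 5.8            # mean QALYs per arm

    # ICER: extra cost per extra QALY gained by the new treatment.
    icer = (cost_new - cost_std) / (qaly_new - qaly_std)
    print(f"ICER = {icer:,.0f} per QALY")    # compare against a payer threshold

A payer would compare this ratio with its willingness-to-pay threshold per QALY; the uncertainty around it is what the book's sample size and value-of-information methods address.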
A fair question to ask of an advocate of subjective Bayesianism (which the author is) is "how would you model uncertainty?" In this book, the author writes about how he has done it using real problems from the past, and offers additional comments about the context in which he was working.
Proven Methods for Big Data Analysis
As big data has become standard in many application areas, challenges have arisen related to methodology and software development, including how to discover meaningful patterns in the vast amounts of data. Addressing these problems, Applied Biclustering Methods for Big and High-Dimensional Data Using R shows how to apply biclustering methods to find local patterns in a big data matrix. The book presents an overview of data analysis using biclustering methods from a practical point of view. Real case studies in drug discovery, genetics, marketing research, biology, toxicity, and sports illustrate the use of several biclustering methods. References to technical details of the methods are provided for readers who wish to investigate the full theoretical background. All the methods are accompanied with R examples that show how to conduct the analyses. The examples, software, and other materials are available on a supplementary website.
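The book's analyses are conducted in R; purely as a language-neutral illustration of the idea of finding local patterns in a data matrix (not one of the book's methods), here is a hedged scikit-learn sketch on synthetic data:

    from sklearn.cluster import SpectralCoclustering
    from sklearn.datasets import make_biclusters

    # Synthetic data matrix (rows x columns) with planted biclusters.
    data, rows, cols = make_biclusters(shape=(300, 40), n_clusters=4,
                                       noise=5, random_state=0)

    model = SpectralCoclustering(n_clusters=4, random_state=0)
    model.fit(data)

    # Each row and column is assigned to one bicluster (a local pattern).
    print(model.row_labels_[:10])
    print(model.column_labels_[:10])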
This book explores how econometric modelling can be used to provide valuable insight into international housing markets. Initially describing the role of econometric modelling in real estate market research and how it has developed in recent years, the book goes on to compare and contrast the impact of various macroeconomic factors on developed and developing housing markets. Explaining the similarities and differences in the impact of financial crises on housing markets around the world, the author's econometric analysis of housing markets across the world provides a broad and nuanced perspective on the impact of both international financial markets and the local macroeconomy on housing markets. With discussion of countries such as China, Germany, the UK, the US, and South Africa, the lessons learned will be of interest to scholars of real estate economics around the world.
This book contains the most complete set of estimates of Chinese national income and its components based on the System of National Accounts. It points out some fundamental issues concerning the estimation of China's national income, and it is intended for students of China studies around the world.
Pathwise Estimation and Inference for Diffusion Market Models discusses contemporary techniques for inferring, from options and bond prices, the market participants' aggregate view on important financial parameters such as implied volatility, discount rate, future interest rate, and the uncertainty thereof. The focus is on pathwise inference methods that are applicable to a sole path of the observed prices and do not require the observation of an ensemble of such paths. The book is pitched at the level of senior undergraduate students undertaking research at honours year, and postgraduate candidates undertaking Master's or PhD degrees by research. From a research perspective, it reaches out to academic researchers from backgrounds as diverse as mathematics and probability, econometrics and statistics, and computational mathematics and optimization, whose interests lie in the analysis and modelling of financial market data from a multi-disciplinary approach. Additionally, the book is aimed at financial market practitioners in capital-market-facing businesses who seek to keep abreast of, and draw inspiration from, novel approaches in market data analysis. The first two chapters contain introductory material on stochastic analysis and the classical diffusion stock market models. The remaining chapters discuss more specialized stock and bond market models and special methods of pathwise inference for market parameters under different models. The final chapter describes applications of numerical methods of inference of bond market parameters to the forecasting of the short rate. Nikolai Dokuchaev is an associate professor in Mathematics and Statistics at Curtin University. His research interests include mathematical and statistical finance, stochastic analysis, PDEs, control, and signal processing. Lin Yee Hin is a practitioner in the capital-market-facing industry. His research interests include econometrics, non-parametric regression, and scientific computing.
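As a minimal illustration of pathwise estimation from a single observed path (a sketch, not the authors' methods), the volatility of a diffusion can be estimated from the realized quadratic variation of one simulated price trajectory:

    import numpy as np

    rng = np.random.default_rng(5)
    sigma_true, T, n = 0.25, 1.0, 252
    dt = T / n

    # One simulated log-price path standing in for a sole observed path.
    increments = (-0.5 * sigma_true**2) * dt \
        + sigma_true * np.sqrt(dt) * rng.standard_normal(n)
    log_price = np.concatenate([[0.0], np.cumsum(increments)])

    # Pathwise estimator: realized quadratic variation over [0, T].
    r = np.diff(log_price)
    sigma_hat = np.sqrt((r ** 2).sum() / T)
    print(f"true sigma = {sigma_true}, pathwise estimate = {sigma_hat:.3f}")

No ensemble of paths is needed: the quadratic variation of a single trajectory identifies the diffusion coefficient, which is the spirit of the pathwise methods the book develops.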