This monograph examines the domain of classical political economy using the methodologies developed in recent years both by the new discipline of econophysics and by computing science. This approach is used to re-examine the classical subdivisions of political economy: production, exchange, distribution and finance. The book begins by examining the most basic feature of economic life, production, and asks what it is about physical laws that allows production to take place. How is it that human labour is able to modify the world? It looks at the role that information has played in the process of mass production and the extent to which human labour still remains a key resource. The Ricardian labour theory of value is re-examined in the light of econophysics, presenting agent-based models in which the Ricardian theory of value appears as an emergent property. The authors present models giving rise to the class distribution of income and the long-term evolution of profit rates in market economies. Money is analysed using tools drawn both from computer science and from the recent Chartalist school of financial theory. Combining techniques from three areas, classical political economy, theoretical computer science and econophysics, to produce models that deepen our understanding of economic reality, this title will be of interest to doctoral and research students as well as scientists working in the field of econophysics.
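As a rough illustration of the agent-based econophysics approach the blurb describes, the following Python sketch simulates a generic random-exchange model (made-up parameters, not the authors' models): agents repeatedly transfer small random amounts of money in pairs, and the stationary distribution of money across agents is approximately exponential (Boltzmann-Gibbs), a standard baseline in econophysics models of income distribution.

```python
import numpy as np

# Toy econophysics exchange model (illustrative only, not the authors' model):
# agents meet in random pairs and one transfers a small random amount of money
# to the other. The stationary distribution of money is approximately
# exponential, so the standard deviation approaches the mean.
rng = np.random.default_rng(0)
n_agents, n_steps, m0 = 1000, 200_000, 100.0
money = np.full(n_agents, m0)

for _ in range(n_steps):
    i, j = rng.integers(n_agents, size=2)
    if i == j:
        continue
    amount = rng.uniform(0, 10.0)
    if money[i] >= amount:          # no debt allowed in this simple variant
        money[i] -= amount
        money[j] += amount

print("mean money:", money.mean())  # conserved at ~100
print("std  money:", money.std())   # approaches the mean for an exponential law
```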
The impact of globalization of financial markets is a highly debated topic, particularly in recent months when the issue of globalization and contagion of financial distress has become a focus of intense policy debate. The papers in this volume provide an up-to-date overview of the key issues in this debate. While most of the contributions were prepared after the initial outbreak of the current global turmoil and financial crisis, they identify the relative strengths of the risk diversification and risk transmission processes and examine the empirical evidence to date. The book considers the relative roles of banks, nonbank financial institutions and capital markets in both risk diversification and risk transmission. It then evaluates the current status of crisis resolution in a global context, and speculates where to go from here in terms of understanding, resolution, prevention and public policy.
Introduction to Functional Data Analysis provides a concise textbook introduction to the field. It explains how to analyze functional data, both at exploratory and inferential levels. It also provides a systematic and accessible exposition of the methodology and the required mathematical framework. The book can be used as a textbook for a semester-long course on FDA for advanced undergraduate or MS statistics majors, as well as for MS and PhD students in other disciplines, including applied mathematics, environmental science, public health, medical research, geophysical sciences and economics. It can also be used for self-study and as a reference for researchers in those fields who wish to acquire a solid understanding of FDA methodology and practical guidance for its implementation. Each chapter contains plentiful examples of relevant R code and theoretical and data-analytic problems. The material of the book can be roughly divided into four parts of approximately equal length: 1) basic concepts and techniques of FDA, 2) functional regression models, 3) sparse and dependent functional data, and 4) introduction to the Hilbert space framework of FDA. The book assumes advanced undergraduate background in calculus, linear algebra, distributional probability theory, foundations of statistical inference, and some familiarity with R programming. Other required statistics background is provided in scalar settings before the related functional concepts are developed. Most chapters end with references to more advanced research for those who wish to gain a more in-depth understanding of a specific topic.
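The book's own examples are in R; purely as an illustration of one of the basic FDA steps mentioned above, the Python sketch below (simulated curves, made-up basis size) smooths discretely observed functional data with a small Fourier basis by least squares and computes the sample mean function.

```python
import numpy as np

# Illustrative sketch (the book itself uses R): smooth noisy, discretely
# observed curves with a small Fourier basis by least squares, then compute
# the sample mean function -- one of the most basic steps in FDA.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 101)                       # common observation grid
n_curves, n_basis = 20, 7

# Fourier basis on [0, 1]: constant plus sine/cosine pairs
B = [np.ones_like(t)]
for k in range(1, (n_basis - 1) // 2 + 1):
    B += [np.sin(2 * np.pi * k * t), np.cos(2 * np.pi * k * t)]
B = np.column_stack(B)                           # shape (101, n_basis)

# Simulated functional data: smooth signal plus noise
true = np.sin(2 * np.pi * t)
Y = true + 0.3 * rng.standard_normal((n_curves, t.size))

coef, *_ = np.linalg.lstsq(B, Y.T, rcond=None)   # basis coefficients per curve
smooth = (B @ coef).T                            # smoothed curves, (n_curves, 101)
mean_fn = smooth.mean(axis=0)                    # estimated mean function
print("max |mean_fn - true|:", np.abs(mean_fn - true).max())
```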
Big data is presenting challenges to cybersecurity. For example, the Internet of Things (IoT) will reportedly soon generate a staggering 400 zettabytes (ZB) of data a year. Self-driving cars are predicted to churn out 4000 GB of data per hour of driving. Big data analytics, as an emerging analytical technology, offers the capability to collect, store, process, and visualize these vast amounts of data. Big Data Analytics in Cybersecurity examines security challenges surrounding big data and provides actionable insights that can be used to improve the current practices of network operators and administrators. Applying big data analytics in cybersecurity is critical. By exploiting data from the networks and computers, analysts can discover useful network information from data. Decision makers can make more informed decisions by using this analysis, including what actions need to be performed, and improvement recommendations to policies, guidelines, procedures, tools, and other aspects of the network processes. Bringing together experts from academia, government laboratories, and industry, the book provides insight to both new and more experienced security professionals, as well as data analytics professionals who have varying levels of cybersecurity expertise. It covers a wide range of topics in cybersecurity, including network forensics, threat analysis, vulnerability assessment, visualization, and cyber training. In addition, emerging security domains such as the IoT, cloud computing, fog computing, mobile computing, and cyber-social networks are examined. The book first focuses on how big data analytics can be used in different aspects of cybersecurity including network forensics, root-cause analysis, and security training. Next it discusses big data challenges and solutions in such emerging cybersecurity domains as fog computing, IoT, and mobile app security. The book concludes by presenting the tools and datasets for future cybersecurity research.
Tourism demand is the foundation on which all tourism-related business decisions ultimately rest. Governments and companies such as airlines, tour operators, hotels, cruise ship lines, and recreation facility providers are interested in the demand for their products by tourists. The success of many businesses depends largely or totally on the state of tourism demand, and ultimate management failure is quite often due to the failure to meet market demand. This book introduces students, researchers and practitioners to the modern developments in advanced econometric methodology within the context of tourism demand analysis, and illustrates these developments with actual tourism applications. The concepts and computations of modern advanced econometric modelling methodologies are introduced at a level that is accessible to specialists and non-specialists alike. The methodologies introduced include general-to-specific modelling, cointegration, vector autoregression, time-varying parameter modelling, panel data analysis and the almost ideal demand system (AIDS). In order to help the reader understand the various methodologies, extensive tourism demand examples are provided throughout the volume.
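As a minimal illustration of one of the methodologies listed above, the sketch below (simulated data; the variable names arrivals, income and rel_price are hypothetical) fits a small vector autoregression with statsmodels and runs a Granger-causality test. It is not an example from the book.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Illustrative VAR of the kind used in tourism demand analysis: tourist
# arrivals modelled jointly with income and relative prices (all simulated).
rng = np.random.default_rng(2)
n = 200
data = pd.DataFrame({
    "arrivals":  np.cumsum(rng.normal(0.2, 1.0, n)),
    "income":    np.cumsum(rng.normal(0.1, 0.5, n)),
    "rel_price": rng.normal(0.0, 1.0, n),
})
growth = data.diff().dropna()               # work with differences for stationarity

model = VAR(growth)
result = model.fit(maxlags=4, ic="aic")     # lag order chosen by AIC
print(result.summary())
print(result.test_causality("arrivals", ["income"], kind="f").summary())
```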
A highly readable, logically presented, unique guide to asset allocation strategies and technical analysis, this work covers numerous investment alternatives including mutual funds and fixed income securities. Aby and Vaughn provide a comprehensive examination of point and figure charting and vertical bar analysis, combined with an approach that both improves timing and emphasizes the minimization of errors in data interpretation and investment decision making. The authors discuss ways to estimate price targets and provide unique forecasting methods for fixed-income and aggregate equity markets, using an intermarket perspective. This is an important and useful resource for professionals and other knowledgeable investors. Throughout the book, Aby and Vaughn challenge conventional and accepted academic thinking. Through emphasis on smaller, more obscure capitalization issues, they reduce complex concepts to a highly readable framework supported by comprehensive coverage of a large number of investment options. Major topics featured include the illustration and application of critical concepts underlying vertical bar chart analysis; extensive coverage of contemporary strategies that improve timing and challenge past criticisms of point and figure charting; a unique approach utilizing point and figure charts to reveal how mutual fund selection can be improved; and intermarket technical analysis, a method through which movements in bond prices and yields are predicted.
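As an illustration of the charting technique discussed above, here is a simplified Python sketch of 3-box-reversal point and figure column construction (made-up box size and simulated prices; this is a generic simplification, not the authors' procedure).

```python
import numpy as np

# Simplified 3-box-reversal point-and-figure construction: an X column extends
# while price keeps making new highs and reverses into an O column only after
# a move of `reversal` boxes in the opposite direction (and vice versa).
def point_and_figure(prices, box=1.0, reversal=3):
    cols = []                      # list of (direction, column low, column high)
    direction = None
    hi = lo = prices[0]
    for p in prices[1:]:
        if direction is None:
            if p >= prices[0] + box:
                direction, hi, lo = "X", p, prices[0]
            elif p <= prices[0] - box:
                direction, hi, lo = "O", prices[0], p
        elif direction == "X":
            if p > hi:
                hi = p                              # extend the X column
            elif p <= hi - reversal * box:
                cols.append(("X", lo, hi))
                direction, lo = "O", p              # reverse into an O column
        else:
            if p < lo:
                lo = p                              # extend the O column
            elif p >= lo + reversal * box:
                cols.append(("O", lo, hi))
                direction, hi = "X", p              # reverse into an X column
    cols.append((direction or "X", lo, hi))
    return cols

prices = 100 + np.cumsum(np.random.default_rng(3).normal(0, 1.5, 250))
for d, lo, hi in point_and_figure(prices):
    print(f"{d} column: {lo:6.1f} -> {hi:6.1f}")
```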
How we pay is so fundamental that it underpins everything – from trade to taxation, stocks and savings to salaries, pensions and pocket money. Rich or poor, criminal, communist or capitalist, we all rely on the same payments system, day in, day out. It sits between us and not just economic meltdown, but a total breakdown in law and order. Why then do we know so little about how that system really works? Leibbrandt and de Terán shine a light on the hidden workings of the humble payment – and reveal both how our payment habits are determined by history and where we might go next. From national customs to warring nation states, geopolitics will shape the future of payments every bit as much as technology. Challenging our understanding about where financial power really lies, The Pay Off shows us that the most important thing about money is the way we move it.
Macroeconometric models, in many ways the flagships of the economist's profession in the 1960s, came under increasing attack from both theoretical economists and practitioners in the late 1970s. Critics referred to their lack of microeconomic theoretical foundations, ad hoc models of expectations, lack of identification, neglect of dynamics and non-stationarity, and poor forecasting properties. By the start of the 1990s, the status of macroeconometric models had declined markedly, and they had fallen completely out of favour within academic economics. Nevertheless, unlike the dinosaurs to which they have often been likened, macroeconometric models have never completely disappeared from the scene. This book describes how and why the discipline of macroeconometric modelling continues to play a role for economic policymaking by adapting to changing demands, in response, for instance, to new policy regimes like inflation targeting. Model builders have adopted new insights from economic theory and taken advantage of the methodological and conceptual advances within time series econometrics over the last twenty years. The modelling of wages and prices takes a central part in the book as the authors interpret and evaluate the last forty years of international research experience in the light of the Norwegian 'main course' model of inflation in a small open economy. The preferred model is a dynamic model of incomplete competition, which is evaluated against alternatives as diverse as the Phillips curve, Nickell-Layard wage curves, the New Keynesian Phillips curve, and monetary inflation models on data from the Euro area, the UK, and Norway. The wage price core model is built into a small econometric model for Norway to analyse the transmission mechanism and to evaluate monetary policy rules. The final chapter explores the main sources of forecast failure likely to occur in a practical modelling situation, using the large-scale model RIMINI and the inflation models of earlier chapters as case studies.
There is no book currently available that gives a comprehensive treatment of the design, construction, and use of index numbers. However, there is a pressing need for one in view of the increasing and more sophisticated employment of index numbers in the whole range of applied economics and specifically in discussions of macroeconomic policy. In this book, R. G. D. Allen meets this need in simple and consistent terms and with comprehensive coverage. The text begins with an elementary survey of the index-number problem before turning to more detailed treatments of the theory and practice of index numbers. The binary case in which one time period is compared with another is first developed and illustrated with numerous examples. This is to prepare the ground for the central part of the text on runs of index numbers. Particular attention is paid both to fixed-weighted and to chain forms as used in a wide range of published index numbers taken mainly from British official sources. This work deals with some further problems in the construction of index numbers, problems which are both troublesome and largely unresolved. These include the use of sampling techniques in index-number design and the theoretical and practical treatment of quality changes. It is also devoted to a number of detailed and specific applications of index-number techniques to problems ranging from national-income accounting, through the measurement of inequality of incomes and international comparisons of real incomes, to the use of index numbers of stock-market prices. Aimed primarily at students of economics, whatever their age and range of interests, this work will also be of use to those who handle index numbers professionally. R. G. D. Allen (1906-1983) was Professor Emeritus at the University of London. He was also once president of the Royal Statistical Society and Treasurer of the British Academy, where he was a fellow. He is the author of "Basic Mathematics," "Mathematical Analysis for Economists," "Mathematical Economics" and "Macroeconomic Theory."
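To make the distinction between fixed-weighted and chain forms concrete, the small Python sketch below (made-up prices and quantities, not the book's examples) compares a fixed-base Laspeyres index with a chained Laspeyres index over three periods.

```python
# Illustrative sketch with hypothetical numbers: a fixed-weight (Laspeyres)
# price index versus a chained Laspeyres index for three goods over three
# periods. The fixed-weight form always uses period-0 quantities; the chained
# form links successive binary comparisons.
p = [[1.00, 2.00, 5.00],      # prices in periods 0..2 for goods A, B, C
     [1.10, 2.10, 5.50],
     [1.20, 2.30, 5.40]]
q = [[100, 50, 20],           # quantities in periods 0..2
     [ 95, 55, 22],
     [ 90, 60, 25]]

def laspeyres(p_t, p_0, q_0):
    """Binary Laspeyres comparison of period t with a base period 0."""
    num = sum(pt * qq for pt, qq in zip(p_t, q_0))
    den = sum(p0 * qq for p0, qq in zip(p_0, q_0))
    return num / den

# Fixed-weight index: every period compared with period 0 using q[0]
fixed = [100 * laspeyres(p[t], p[0], q[0]) for t in range(3)]

# Chained index: link period-on-period comparisons, each using the previous
# period's quantities
chain = [100.0]
for t in range(1, 3):
    chain.append(chain[-1] * laspeyres(p[t], p[t - 1], q[t - 1]))

print("fixed-weight:", [round(x, 1) for x in fixed])
print("chained:     ", [round(x, 1) for x in chain])
```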
This textbook addresses postgraduate students in applied mathematics, probability, and statistics, as well as computer scientists, biologists, physicists and economists, who are seeking a rigorous introduction to applied stochastic processes. Pursuing a pedagogic approach, the content follows a path of increasing complexity, from the simplest random sequences to the advanced stochastic processes. Illustrations are provided from many applied fields, together with connections to ergodic theory, information theory, reliability and insurance. The main content is also complemented by a wealth of examples and exercises with solutions.
Showcasing fuzzy set theory, this book highlights the enormous potential of fuzzy logic in helping to analyse the complexity of a wide range of socio-economic patterns and behaviour. The contributions to this volume explore the most up-to-date fuzzy-set methods for the measurement of socio-economic phenomena in a multidimensional and/or dynamic perspective. Thus far, fuzzy-set theory has primarily been utilised in the social sciences in the field of poverty measurement. These chapters examine the latest work in this area, while also exploring further applications including social exclusion, the labour market, educational mismatch, sustainability, quality of life and violence against women. The authors demonstrate that real-world situations are often characterised by imprecision, uncertainty and vagueness, which cannot be properly described by classical set theory, which uses a simple true-false binary logic. By contrast, fuzzy-set theory has been shown to be a powerful tool for describing the multidimensionality and complexity of social phenomena. This book will be of significant interest to economists, statisticians and sociologists utilising quantitative methods to explore socio-economic phenomena.
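A minimal illustration of the contrast the blurb draws between crisp and fuzzy sets (the poverty-line thresholds below are hypothetical, not from the book): a crisp poverty indicator returns 0 or 1 at a single cut-off, while a fuzzy membership function assigns a degree of membership between two thresholds.

```python
import numpy as np

# Crisp poverty indicator versus a fuzzy membership function (toy thresholds).
def crisp_poor(income, line=1000.0):
    # classical set: either poor (1) or not poor (0)
    return (income < line).astype(float)

def fuzzy_poor(income, low=800.0, high=1400.0):
    # fuzzy set: membership 1 below `low`, 0 above `high`, linear in between
    return np.clip((high - income) / (high - low), 0.0, 1.0)

income = np.array([500.0, 900.0, 1100.0, 1300.0, 2000.0])
print("crisp membership:", crisp_poor(income))
print("fuzzy membership:", fuzzy_poor(income).round(2))
```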
The two-volume book studies the economic and industrial development of Japan and China in modern times and draws distinctions between the different paths of industrialization and economic modernization taken in the two countries, based on statistical materials, quantitative analysis and multivariate statistical analysis. The first volume analyses the relationship between technological innovation and economic development in Japan before World War II and sheds light on technological innovation in the Japanese context with particular emphasis on the importance of the patent system. The second volume studies the basic conditions and overall economic development of industrial development, chiefly during the period of the Republic of China (1912-1949), taking a comparative perspective and bringing the case of modern Japan into the discussion. The book will appeal to academics and general readers interested in economic development and the modern economic history of East Asia, development economics, as well as industrial and technological history.
Bringing together leading-edge research and innovative energy markets econometrics, this book collects the author's most important recent contributions in energy economics. In particular, the book:
* applies recent advances in the field of applied econometrics to investigate a number of issues regarding energy markets, including the theory of storage and the efficient markets hypothesis;
* presents the basic stylized facts on energy price movements using correlation analysis, causality tests, integration theory, cointegration theory, as well as recently developed procedures for testing for shared and codependent cycles (a brief cointegration-test sketch follows this list);
* uses recent advances in the financial econometrics literature to model time-varying returns and volatility in energy prices and to test for causal relationships between energy prices and their volatilities;
* explores the functioning of electricity markets and applies conventional models of time series analysis to investigate a number of issues regarding wholesale power prices in the western North American markets;
* applies tools from statistics and dynamical systems theory to test for nonlinear dynamics and deterministic chaos in a number of North American hydrocarbon markets (those of ethane, propane, normal butane, iso-butane, naphtha, crude oil, and natural gas).
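As an illustration of the unit-root and cointegration tools listed above (simulated price series with made-up names, not the book's data), the following Python sketch runs an augmented Dickey-Fuller test on each series and an Engle-Granger cointegration test between them using statsmodels.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, coint

# Two simulated, cointegrated "energy price" series: a random walk and a
# series that tracks it plus stationary noise.
rng = np.random.default_rng(4)
n = 500
crude = np.cumsum(rng.normal(0, 1, n)) + 60          # random-walk "crude" price
heating = 0.8 * crude + rng.normal(0, 1, n) + 5      # linked "heating oil" price

for name, series in [("crude", crude), ("heating", heating)]:
    stat, pval, *_ = adfuller(series)
    print(f"ADF {name}: stat={stat:.2f}, p-value={pval:.3f}")   # expect non-rejection

stat, pval, _ = coint(crude, heating)
print(f"Engle-Granger cointegration: stat={stat:.2f}, p-value={pval:.3f}")
```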
A fascinating and comprehensive history, this book explores the most important transformation in twentieth-century economics: the creation of econometrics. Containing fresh archival material that has not been published before, and taking Ragnar Frisch as the narrator, Francisco Louca discusses both the key events - the establishment of the Econometric Society, the Cowles Commission and the journal Econometrica - and the major players - economists like Wesley Mitchell, mathematicians like John von Neumann and statisticians like Karl Pearson - who shaped the development of econometrics. He discusses the evolution of their thought, detailing the debates, the quarrels and the interrogations that crystallized their work, and even offers a conclusion of sorts, suggesting that some of the more influential thinkers abandoned econometrics or became critical of its development. International in scope and appeal, The Years of High Econometrics is an excellent accompaniment for students taking courses on probability, econometric methods and the history of economic thought.
This new textbook by Urs Birchler and Monika Butler is an introduction to the study of how information affects economic relations. The authors provide a narrative treatment of the more formal concepts of Information Economics, using easy-to-understand and lively illustrations from film and literature and nutshell examples. The book also comes with a supporting website (www.alicebob.info), maintained by the authors.
Examining the crucial topic of race relations, this book explores the economic and social environments that play a significant role in determining economic outcomes and why racial disparities persist. With contributions from a range of international contributors, including Edward Wolff and Catherine Weinberger, the book compares how various racial groups fare and are affected in different ways by economic and social institutions.
This is an invaluable resource for researchers and academics across a number of disciplines including political economy, ethnic and multicultural studies, Asian studies, and sociology.
The book first discusses in depth various aspects of the well-known inconsistency that arises when explanatory variables in a linear regression model are measured with error. Despite this inconsistency, the region in which the true regression coefficients lie can sometimes be characterized in a useful way, especially when bounds are known on the measurement error variance, but also when such information is absent. Wage discrimination with imperfect productivity measurement is discussed as an important special case.
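A small simulation makes the inconsistency concrete (illustrative only, not an example from the book): with classical measurement error in a single regressor, the OLS slope is attenuated by the factor var(x) / (var(x) + var(error)), so a known bound on the error variance bounds the region in which the true coefficient can lie.

```python
import numpy as np

# Attenuation bias under classical measurement error in one regressor.
rng = np.random.default_rng(5)
n, beta = 100_000, 2.0
x = rng.normal(0, 1, n)                      # true regressor
y = beta * x + rng.normal(0, 1, n)
x_obs = x + rng.normal(0, 1, n)              # observed with error, variance 1

slope_true = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
slope_err = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)
lam = np.var(x, ddof=1) / (np.var(x, ddof=1) + 1.0)   # attenuation factor (~0.5 here)

print(f"OLS slope, error-free x  : {slope_true:.3f}")  # ~2.0
print(f"OLS slope, mismeasured x : {slope_err:.3f}")   # ~2.0 * 0.5 = 1.0
print(f"corrected slope          : {slope_err / lam:.3f}")
```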
"Applied Econometrics for Health Economists" introduces readers to the appropriate econometric techniques for use with different forms of survey data, known collectively as microeconometrics. The book provides a complete illustration of the steps involved in doing microeconometric research. The only study to deal with practical analysis of qualitative and categorical variables, it also emphasises applied work, illustrating the use of relevant computer software applied to large-scale survey datasets. This is a comprehensive reference guide - it contains a glossary of terms, a technical appendix, software appendix, references, and suggestions for further reading. It is concise and easy to read - technical details are avoided in the main text and key terms are highlighted. It is essential reading for health economists as well as undergraduate and postgraduate students of health economics. "Given the extensive use of individual-level survey data in health economics, it is important to understand the econometric techniques available to applied researchers. Moreover, it is just as important to be aware of their limitations and pitfalls. The purpose of this book is to introduce readers to the appropriate econometric techniques for use with different forms of survey data - known collectively as microeconometrics." - Andrew Jones, in the Preface.
Tackling the cybersecurity challenge is a matter of survival for society at large. Cyber attacks are rapidly increasing in sophistication and magnitude, and in their destructive potential. New threats emerge regularly, the last few years having seen a ransomware boom and distributed denial-of-service attacks leveraging the Internet of Things. For organisations, the use of cybersecurity risk management is essential in order to manage these threats. Yet current frameworks have drawbacks which can lead to the suboptimal allocation of cybersecurity resources. Cyber insurance has been touted as part of the solution - based on the idea that insurers can incentivize companies to improve their cybersecurity by offering premium discounts - but cyber insurance levels remain limited. This is because companies have difficulty determining which cyber insurance products to purchase, and insurance companies struggle to accurately assess cyber risk and thus develop cyber insurance products. To deal with these challenges, this volume presents new models for cybersecurity risk management, partly based on the use of cyber insurance. It contains a set of mathematical models for cybersecurity risk management, including (i) a model to assist companies in determining their optimal budget allocation between security products and cyber insurance and (ii) a model to assist insurers in designing cyber insurance products. The models use adversarial risk analysis to account for the behavior of threat actors (as well as the behavior of companies and insurers). To inform these models, we draw on psychological and behavioural economics studies of decision-making by individuals regarding cybersecurity and cyber insurance. We also draw on organizational decision-making studies involving cybersecurity and cyber insurance. Its theoretical and methodological findings will appeal to researchers across a wide range of cybersecurity-related disciplines including risk and decision analysis, analytics, technology management, actuarial sciences, behavioural sciences, and economics. The practical findings will help cybersecurity professionals and insurers enhance cybersecurity and cyber insurance, thus benefiting society as a whole. This book grew out of a two-year European Union-funded project under Horizon 2020, called CYBECO (Supporting Cyber Insurance from a Behavioral Choice Perspective).
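The toy Python sketch below loosely illustrates the kind of budget-allocation question described in point (i) above; it is not one of the CYBECO models, and every number (loss size, breach probability, premium loading, risk-aversion weight) is an assumption. It grid-searches over security spend and insurance coverage to minimise a mean-variance cost criterion.

```python
import numpy as np

# Toy illustration only, not the CYBECO models: pick security spend s and
# insurance coverage share c to minimise expected cost plus a risk penalty.
loss = 1_000_000.0           # loss if a breach occurs (assumed)
p0, k = 0.10, 0.02           # baseline breach probability, effectiveness of spend
loading = 1.3                # insurer's premium loading factor (assumed)

def total_cost(s, c, risk_aversion=1e-6):
    p = p0 * np.exp(-k * s / 1000.0)                    # breach prob. after spending s
    premium = loading * p * c * loss                     # loaded premium for coverage c
    retained_mean = p * (1 - c) * loss                   # expected uninsured loss
    retained_var = p * (1 - p) * ((1 - c) * loss) ** 2   # variance of uninsured loss
    return s + premium + retained_mean + risk_aversion * retained_var

spend_grid = np.linspace(0, 200_000, 201)
cover_grid = np.linspace(0, 1, 101)
costs = np.array([[total_cost(s, c) for c in cover_grid] for s in spend_grid])
i, j = np.unravel_index(costs.argmin(), costs.shape)
print(f"illustrative optimum: spend ~ {spend_grid[i]:,.0f}, "
      f"coverage ~ {cover_grid[j]:.0%}, cost ~ {costs[i, j]:,.0f}")
```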
The goal of Portfolio Rebalancing is to provide mathematical and empirical analysis of the effects of portfolio rebalancing on portfolio returns and risks. The mathematical analysis answers the question of when and why fixed-weight portfolios might outperform buy-and-hold portfolios based on volatilities and returns. The empirical analysis, aided by mathematical insights, examines the effects of portfolio rebalancing in capital markets for asset allocation portfolios and portfolios of stocks, bonds, and commodities.
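A simple simulation illustrates the comparison the book formalises (hypothetical return parameters, not the book's analysis): a 60/40 fixed-weight portfolio rebalanced monthly versus a buy-and-hold portfolio whose weights drift with returns.

```python
import numpy as np

# Fixed-weight (monthly rebalanced) 60/40 portfolio versus buy-and-hold,
# over simulated monthly stock and bond returns (made-up parameters).
rng = np.random.default_rng(7)
months = 12 * 30
stock = rng.normal(0.005, 0.045, months)      # monthly stock returns
bond = rng.normal(0.003, 0.015, months)       # monthly bond returns

w = np.array([0.6, 0.4])
rebal_value = 1.0
hold = w.copy()                               # buy-and-hold dollar positions
for rs, rb in zip(stock, bond):
    growth = np.array([1 + rs, 1 + rb])
    rebal_value *= w @ growth                 # reset to 60/40 every month
    hold *= growth                            # positions drift for buy-and-hold

print(f"fixed-weight (rebalanced): {rebal_value:.2f}")
print(f"buy-and-hold             : {hold.sum():.2f}")
```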
The quantitative modeling of complex systems of interacting risks is a fairly recent development in the financial and insurance industries. Over the past decades, there has been tremendous innovation and development in the actuarial field. In addition to undertaking mortality and longevity risks in traditional life and annuity products, insurers have faced unprecedented financial risks since the introduction of equity-linked insurance in the 1960s. As the industry moves into the new territory of managing many intertwined financial and insurance risks, non-traditional problems and challenges arise, presenting great opportunities for technology development. Today's computational power and technology make it possible for the life insurance industry to develop highly sophisticated models, which were impossible just a decade ago. Nonetheless, as more industrial practices and regulations move towards dependence on stochastic models, the demand for computational power continues to grow. While the industry continues to rely heavily on hardware innovations, trying to make brute-force methods faster and more palatable, we are approaching a crossroads about how to proceed. An Introduction to Computational Risk Management of Equity-Linked Insurance provides a resource for students and entry-level professionals not only to understand the fundamentals of industrial modeling practice, but also to gain a glimpse of software methodologies for modeling and computational efficiency. Features:
* Provides a comprehensive and self-contained introduction to quantitative risk management of equity-linked insurance, with exercises and programming samples
* Includes a collection of mathematical formulations of risk management problems, presenting opportunities and challenges to applied mathematicians
* Summarizes state-of-the-art computational techniques for risk management professionals
* Bridges the gap between the latest developments in the finance and actuarial literature and the practice of risk management for investment-combined life insurance
* Gives a comprehensive review of both Monte Carlo simulation methods and non-simulation numerical methods
Runhuan Feng is an Associate Professor of Mathematics and the Director of Actuarial Science at the University of Illinois at Urbana-Champaign. He is a Fellow of the Society of Actuaries and a Chartered Enterprise Risk Analyst. He is a Helen Corley Petit Professorial Scholar and the State Farm Companies Foundation Scholar in Actuarial Science. Runhuan received a Ph.D. degree in Actuarial Science from the University of Waterloo, Canada. Prior to joining Illinois, he held a tenure-track position at the University of Wisconsin-Milwaukee, where he was named a Research Fellow. Runhuan has received numerous grants and research contracts from the Actuarial Foundation and the Society of Actuaries. He has published a series of papers in top-tier actuarial and applied probability journals on stochastic analytic approaches in risk theory and quantitative risk management of equity-linked insurance. In recent years, he has dedicated his efforts to developing computational methods for managing market innovations in the areas of investment-combined insurance and retirement planning.
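As a minimal illustration of the Monte Carlo side of the methods surveyed above (hypothetical contract and market parameters, not the book's examples), the sketch below values a guaranteed minimum maturity benefit on an equity-linked policy under a lognormal fund model, with an assumed survival probability to maturity.

```python
import numpy as np

# Monte Carlo valuation of a guaranteed minimum maturity benefit (GMMB):
# the insurer pays max(G - F_T, 0) at maturity if the policyholder survives,
# where F_T is the fund value under a lognormal (Black-Scholes) model.
rng = np.random.default_rng(8)
n_paths = 200_000
S0, G, T, r, sigma = 100.0, 100.0, 10.0, 0.03, 0.20
survival_prob = 0.85                      # assumed survival probability to T

z = rng.standard_normal(n_paths)
F_T = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
payoff = np.maximum(G - F_T, 0.0)         # put-like guarantee payoff

price = survival_prob * np.exp(-r * T) * payoff.mean()
stderr = survival_prob * np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(n_paths)
print(f"GMMB value ~ {price:.2f} (MC std err {stderr:.2f})")
```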
Statistics for Finance develops students' professional skills in statistics with applications in finance. Developed from the authors' courses at the Technical University of Denmark and Lund University, the text bridges the gap between classical, rigorous treatments of financial mathematics that rarely connect concepts to data and books on econometrics and time series analysis that do not cover specific problems related to option valuation. The book discusses applications of financial derivatives pertaining to risk assessment and elimination. The authors cover various statistical and mathematical techniques, including linear and nonlinear time series analysis, stochastic calculus models, stochastic differential equations, Ito's formula, the Black-Scholes model, the generalized method-of-moments, and the Kalman filter. They explain how these tools are used to price financial derivatives, identify interest rate models, value bonds, estimate parameters, and much more. This textbook will help students understand and manage empirical research in financial engineering. It includes examples of how the statistical tools can be used to improve value-at-risk calculations and to address other issues. In addition, end-of-chapter exercises develop students' financial reasoning skills.
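One of the models the text covers is Black-Scholes; as a small self-contained illustration (hypothetical inputs, not the book's examples), the sketch below prices a European call with the closed-form formula.

```python
import numpy as np
from scipy.stats import norm

# Black-Scholes price of a European call option.
def black_scholes_call(S, K, T, r, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

print(f"call price: {black_scholes_call(S=100, K=105, T=0.5, r=0.02, sigma=0.25):.2f}")
```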
Since the publication of the first edition over 30 years ago, the literature related to Pareto distributions has flourished to encompass computer-based inference methods. Pareto Distributions, Second Edition provides broad, up-to-date coverage of the Pareto model and its extensions. This edition expands several chapters to accommodate recent results and reflect the increased use of more computer-intensive inference procedures. New to the Second Edition:
* New material on multivariate inequality
* Recent ways of handling the problems of inference for Pareto models and their generalizations and extensions
* New discussions of bivariate and multivariate income and survival models
This book continues to provide researchers with a useful resource for understanding the statistical aspects of Pareto and Pareto-like distributions. It covers income models and properties of Pareto distributions, measures of inequality for studying income distributions, inference procedures for Pareto distributions, and various multivariate Pareto distributions existing in the literature.
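A brief illustration of Pareto inference and an inequality measure of the kind the book covers (simulated incomes and a made-up threshold, not the book's examples): the maximum likelihood estimate of the tail index alpha above a known threshold, and the implied Gini coefficient 1/(2*alpha - 1).

```python
import numpy as np

# Maximum likelihood estimation of the Pareto tail index and the implied Gini.
rng = np.random.default_rng(9)
x_m, alpha_true = 10_000.0, 2.5
incomes = x_m * (1 - rng.random(50_000)) ** (-1 / alpha_true)   # Pareto(x_m, alpha) draws

alpha_hat = len(incomes) / np.log(incomes / x_m).sum()          # MLE (Hill) estimator
gini_hat = 1 / (2 * alpha_hat - 1)                              # Gini for a Pareto law
print(f"alpha_hat = {alpha_hat:.3f}, implied Gini = {gini_hat:.3f}")
```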
This book brings together the latest research in the areas of market microstructure and high-frequency finance, along with new econometric methods to address critical practical issues in these areas of research. Thirteen chapters, each of which makes a valuable and significant contribution to the existing literature, have been brought together, spanning a wide range of topics including information asymmetry and the information content in limit order books, high-frequency return distribution models, multivariate volatility forecasting, analysis of individual trading behaviour, the analysis of liquidity, price discovery across markets, market microstructure models and the information content of order flow. These issues are central both to the rapidly expanding practice of high-frequency trading in financial markets and to the further development of the academic literature in this area. The volume will therefore be of immediate interest to practitioners and academics. This book was originally published as a special issue of the European Journal of Finance.