This book bridges the gap between economic theory and spatial econometric techniques. It is accessible to those with only a basic statistical background and no prior knowledge of spatial econometric methods, and it motivates the reader throughout with examples and analysis. The volume offers a rigorous treatment of the basic spatial linear model and discusses the violations of the classical regression assumptions that occur when dealing with spatial data.
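For orientation, one standard form of the basic spatial linear model treated in such texts is the spatial autoregressive (lag) specification; the notation below is the field's conventional one rather than anything taken from this particular book:

```latex
% Spatial autoregressive (SAR) lag model, a standard form of the
% basic spatial linear model:
%   y: n x 1 outcomes, W: n x n spatial weights matrix,
%   X: n x k regressors, rho: spatial dependence parameter.
y = \rho W y + X\beta + \varepsilon,
\qquad \varepsilon \sim N(0, \sigma^2 I_n)
```

The spatially lagged term \(\rho W y\) is exactly what breaks the classical assumption of independent observations that ordinary least squares relies on.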
Metrology is the study of measurement science. Although classical economists have emphasized the importance of measurement per se, the majority of economics-based writings on the topic have taken the form of government reports related to the activities of specific national metrology laboratories. This book is the first systematic study of measurement activity at a national metrology laboratory, and the laboratory studied is the U.S. National Institute of Standards and Technology (NIST) within the U.S. Department of Commerce. The primary objective of the book is to emphasize for academic and policy audiences the economic importance of measurement not only as an area of study but also as a tool for sustaining technological advancement as an element of economic growth. Toward this goal, the book offers an overview of the economic benefits and consequences of measurement standards; an argument for public sector support of measurement standards; a historical perspective of the measurement activities at NIST; an empirical analysis of one particular measurement activity at NIST, namely calibration testing; and a roadmap for future research on the economics of metrology.
Computational Finance Using C and C#: Derivatives and Valuation, Second Edition provides derivatives pricing information for equity derivatives, interest rate derivatives, foreign exchange derivatives, and credit derivatives. By providing free access to code from a variety of computer languages, such as Visual Basic/Excel, C++, C, and C#, it gives readers stand-alone examples that they can explore before delving into creating their own applications. It is written for readers with backgrounds in basic calculus, linear algebra, and probability. Strong on mathematical theory, this second edition helps empower readers to solve their own problems. *Features new programming problems, examples, and exercises for each chapter. *Includes freely accessible source code in languages such as C, C++, VBA, C#, and Excel. *Includes a new chapter on the history of finance, which also covers the 2008 credit crisis and the use of mortgage-backed securities, CDSs and CDOs. *Emphasizes mathematical theory.
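The book's own examples are in C and C#; as a language-neutral taste of the kind of closed-form pricing such texts cover, here is a minimal Black-Scholes European call price in R (the function and parameter names are illustrative, not the book's):

```r
# Black-Scholes price of a European call option.
# s0: spot, k: strike, r: risk-free rate, sigma: volatility, tau: time to expiry (years)
bs_call <- function(s0, k, r, sigma, tau) {
  d1 <- (log(s0 / k) + (r + 0.5 * sigma^2) * tau) / (sigma * sqrt(tau))
  d2 <- d1 - sigma * sqrt(tau)
  s0 * pnorm(d1) - k * exp(-r * tau) * pnorm(d2)
}

bs_call(s0 = 100, k = 105, r = 0.05, sigma = 0.2, tau = 1)  # ~8.02
```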
Features: *Accessible to readers with a basic background in probability and statistics. *Covers fundamental concepts of experimental design and cause-effect relationships. *Introduces classical ANOVA models, including contrasts and multiple testing. *Provides an example-based introduction to mixed models. *Features basic concepts of split-plot and incomplete block designs. *R code available for all steps. *Supplementary website with additional resources and updates.
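As a taste of the R workflow such a design-of-experiments text walks through (the data and variable names below are invented for illustration, not taken from the book):

```r
# One-way ANOVA on a toy experiment: 3 treatments, 10 replicates each.
set.seed(1)
d <- data.frame(
  treatment = factor(rep(c("A", "B", "C"), each = 10)),
  yield     = rnorm(30, mean = rep(c(10, 12, 11), each = 10), sd = 1)
)

fit <- aov(yield ~ treatment, data = d)
summary(fit)      # overall F-test for a treatment effect
TukeyHSD(fit)     # pairwise contrasts with multiplicity adjustment
```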
This is a very useful and timely book, as demand forecasting has become a crucial tool that provides destinations with important information on which policies are created and implemented. This is especially important given the complexities arising in the aftermath of the COVID-19 pandemic. *It looks at novel and recent developments in this field, including judgement and scenario forecasting. *Offers a comprehensive approach to tourism econometrics, looking at a variety of aspects. *The authors are experts in this field and of the highest academic calibre.
This book surveys big data tools used in macroeconomic forecasting and addresses related econometric issues, including how to capture dynamic relationships among variables; how to select parsimonious models; how to deal with model uncertainty, instability, non-stationarity, and mixed frequency data; and how to evaluate forecasts, among others. Each chapter is self-contained with references, and provides solid background information, while also reviewing the latest advances in the field. Accordingly, the book offers a valuable resource for researchers, professional forecasters, and students of quantitative economics.
In 1956, Solow proposed a neoclassical growth model in opposition or as an alternative to Keynesian growth models. The Solow model of economic growth provided foundations for models embedded in the new theory of economic growth, known as the theory of endogenous growth, such as the renowned growth models developed by Paul M. Romer and Robert E. Lucas in the 1980s and 90s. The augmentations of the Solow model described in this book, excepting the Phelps golden rules of capital accumulation and the Mankiw-Romer-Weil and Nonneman-Vanhoudt models, were developed by the authors over the last two decades. The book identifies six spheres of interest in modern macroeconomic theory: the impact of fiscal and monetary policy on growth; the effect of different returns to scale on production; the influence of mobility of factors of production among different countries on their development; the effect of population dynamics on growth; the periodicity of investment rates and their influence on growth; and the effect of exogenous shocks in the form of an epidemic. For each of these issues, the authors construct and analyze an appropriate growth model that focuses on the description of the specific macroeconomic problem. This book not only continues the neoclassical tradition of thought in economics focused on quantitative economic change but also, and to a significant extent, discusses alternative approaches to certain questions of economic growth, utilizing conclusions that can be drawn from the Solow model. It is a useful tool in analyzing contemporary issues related to growth.
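For reference, the core of the baseline Solow model that the book's augmentations build on is the capital accumulation equation, given here in standard notation rather than the authors' own:

```latex
% k: capital per effective worker, s: saving rate, f(k): intensive production function,
% n: population growth, g: technological progress, delta: depreciation.
\dot{k} = s f(k) - (n + g + \delta)\,k,
\qquad \text{steady state: } s f(k^*) = (n + g + \delta)\,k^*
```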
Operations Research methods are used in almost every field of modern life, including industry, economics, and medicine. The authors have compiled the latest advancements in these methods into this volume, which gathers some of the best of these new approaches and offers a direct shortcut to what you may be searching for. This book provides useful applications of the new developments in OR written by leading scientists from international universities. Another volume about exciting applications of Operations Research is planned in the near future. We hope you enjoy and benefit from this series!
This trusted textbook returns in its 4th edition with even more exercises to help consolidate understanding - and a companion website featuring additional materials, including a solutions manual for instructors. Offering a unique blend of theory and practical application, it provides ideal preparation for doing applied econometric work as it takes students from a basic level up to an advanced understanding in an intuitive, step-by-step fashion. Clear presentation of economic tests and methods of estimation is paired with practical guidance on using several types of software packages. Using real world data throughout, the authors place emphasis upon the interpretation of results, and the conclusions to be drawn from them in econometric work. This book will be essential reading for economics undergraduate and master's students taking a course in applied econometrics. Its practical nature makes it ideal for modules requiring a research project. New to this Edition: - Additional practical exercises throughout to help consolidate understanding - A freshly-updated companion website featuring a new solutions manual for instructors
The role of franchising in industry evolution is explored in this book, both in terms of the emergence of franchising and its impact on industry structure. Examining literature and statistical information, the first section provides an overview of franchising. The Role of Franchising on Industry Evolution then focuses on two core elements: the emergence of franchising and the contextual drivers prompting its adoption, and the impact of franchising on industry-level structural changes. Through two industry case studies, the author demonstrates how franchising has the ability to fundamentally transform an industry's structure from one of fragmentation to one of consolidation.
This book discusses developments in trade theories, including: new-new trade models that account for firm-level trade flows; trade growth accounting using inverse gravity models (including distortions in gravity models); the impact of trade liberalization under the aegis of regional and multilateral liberalization efforts, analysed using partial and general equilibrium methods; methodologies for constructing ad valorem equivalents of non-tariff barriers; and volatility spillover effects of financial and exchange rate markets. The main purpose of the book is to guide researchers working in the area of international trade, especially those focused on empirical analysis of trade policy issues, by updating their knowledge of trade theory, empirical methods, and their applications. The book will prove useful for policy makers, academicians, and researchers.
This book describes the activation functions frequently used in deep neural networks. For this purpose, 37 activation functions are explained both mathematically and visually, and are given with their LaTeX implementations, owing to their common use in scientific articles.
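Two of the most common examples of such functions, written in the LaTeX form the book refers to:

```latex
% Rectified linear unit and logistic sigmoid:
\operatorname{ReLU}(x) = \max(0, x),
\qquad
\sigma(x) = \frac{1}{1 + e^{-x}} \quad \text{(logistic sigmoid)}
```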
Based on economic knowledge and logical reasoning, this book proposes a solution to economic recessions and offers a route for societal change to end capitalism. The author starts with a brief review of the history of economics, and then questions and rejects the trend of recent decades that has seen econometrics replace economic theory. By reviewing the different schools of economic thought and by examining the limitations of existing theories to business cycles and economic growth, the author forms a new theory to explain cyclic economic growth. According to this theory, economic recessions result from innovation scarcity, which in turn results from the flawed design of the patent system. The author suggests a new design for the patent system and envisions that the new design would bring about large economic and societal changes. Under this new patent system, the synergy of the patent and capital markets would ensure that economic recessions could be avoided and that the economy would grow at the highest speed.
In the modern world, data is a vital asset for any organization, regardless of industry or size. The world is built upon data. However, data without knowledge is useless. The aim of this book, briefly, is to introduce new approaches that can be used to shape and forecast the future by combining the two disciplines of Statistics and Economics. Readers of Modeling and Advanced Techniques in Modern Economics can find valuable information from a diverse group of experts on topics such as finance, econometric models, stochastic financial models and machine learning, and the application of models to financial and macroeconomic data.
Tackling the cybersecurity challenge is a matter of survival for society at large. Cyber attacks are rapidly increasing in sophistication and magnitude, and in their destructive potential. New threats emerge regularly, the last few years having seen a ransomware boom and distributed denial-of-service attacks leveraging the Internet of Things. For organisations, the use of cybersecurity risk management is essential in order to manage these threats. Yet current frameworks have drawbacks which can lead to the suboptimal allocation of cybersecurity resources. Cyber insurance has been touted as part of the solution, based on the idea that insurers can incentivize companies to improve their cybersecurity by offering premium discounts, but cyber insurance levels remain limited. This is because companies have difficulty determining which cyber insurance products to purchase, and insurance companies struggle to accurately assess cyber risk and thus develop cyber insurance products. To deal with these challenges, this volume presents new models for cybersecurity risk management, partly based on the use of cyber insurance. It contains a set of mathematical models for cybersecurity risk management, including (i) a model to assist companies in determining their optimal budget allocation between security products and cyber insurance and (ii) a model to assist insurers in designing cyber insurance products. The models use adversarial risk analysis to account for the behavior of threat actors (as well as the behavior of companies and insurers). To inform these models, the authors draw on psychological and behavioural economics studies of decision-making by individuals regarding cybersecurity and cyber insurance, as well as on organizational decision-making studies involving cybersecurity and cyber insurance. The book's theoretical and methodological findings will appeal to researchers across a wide range of cybersecurity-related disciplines, including risk and decision analysis, analytics, technology management, actuarial sciences, behavioural sciences, and economics. The practical findings will help cybersecurity professionals and insurers enhance cybersecurity and cyber insurance, thus benefiting society as a whole. This book grew out of a two-year European Union-funded project under Horizon 2020, called CYBECO (Supporting Cyber Insurance from a Behavioral Choice Perspective).
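The volume's actual models use adversarial risk analysis; purely to illustrate the budget-allocation idea, here is a deliberately crude R sketch (the breach-probability function and all numbers are invented) that compares expected annual cost across security spends, with and without insurance:

```r
# Toy illustration: pick the security spend minimizing expected annual cost.
# breach_prob is an invented decreasing function of security spend.
breach_prob <- function(spend) 0.3 * exp(-spend / 50)

loss     <- 1000                      # cost of a breach (invented units)
premium  <- 40                        # flat cyber insurance premium
coverage <- 0.8                       # fraction of loss reimbursed
spend    <- seq(0, 200, by = 10)      # candidate security budgets

cost_uninsured <- spend + breach_prob(spend) * loss
cost_insured   <- spend + premium + breach_prob(spend) * loss * (1 - coverage)

spend[which.min(cost_uninsured)]  # optimal spend without insurance
spend[which.min(cost_insured)]    # optimal spend with insurance (lower here,
                                  # since less of the loss is at stake)
```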
Applied data-centric social sciences aim to develop both methodology and practical applications of various fields of sciences and businesses with rich data. Specifically, in the social sciences, a vast amount of data on human activities may be useful for understanding collective human nature. In this book, the author introduces several mathematical techniques for handling a huge volume of data and analyzing collective human behavior. The book is constructed from data-oriented investigation, with mathematical methods and expressions used for dealing with data for several specific problems. The fundamental philosophy underlying the book is that both mathematical and physical concepts are determined by the purposes of data analysis. This philosophy is demonstrated through exemplar studies from several fields in socio-economic systems. From a data-centric point of view, the author proposes a concept that may change people's minds and cause them to start thinking from the basis of data. Several goals underlie the chapters of the book. The first is to describe mathematical and statistical methods for data analysis, and toward that end the author delineates methods with actual data in each chapter. The second is to find a cyber-physical link between data and data-generating mechanisms, as data are always provided by some kind of data-generating process in the real world. The third goal is to provide an impetus for the concepts and methodology set forth in this book to be applied to socio-economic systems.
Doing Statistical Analysis looks at three kinds of statistical research questions - descriptive, associational, and inferential - and shows students how to conduct statistical analyses and interpret the results. Keeping equations to a minimum, it uses a conversational style and relatable examples such as football, COVID-19, and tourism, to aid understanding. Each chapter contains practice exercises, and a section showing students how to reproduce the statistical results in the book using Stata and SPSS. Digital supplements consist of data sets in Stata, SPSS, and Excel, and a test bank for instructors. Its accessible approach means this is the ideal textbook for undergraduate students across the social and behavioral sciences needing to build their confidence with statistical analysis.
Showcasing fuzzy set theory, this book highlights the enormous potential of fuzzy logic in helping to analyse the complexity of a wide range of socio-economic patterns and behaviour. The contributions to this volume explore the most up-to-date fuzzy-set methods for the measurement of socio-economic phenomena in a multidimensional and/or dynamic perspective. Thus far, fuzzy-set theory has primarily been utilised in the social sciences in the field of poverty measurement. These chapters examine the latest work in this area, while also exploring further applications including social exclusion, the labour market, educational mismatch, sustainability, quality of life and violence against women. The authors demonstrate that real-world situations are often characterised by imprecision, uncertainty and vagueness, which cannot be properly described by the classical set theory which uses a simple true-false binary logic. By contrast, fuzzy-set theory has been shown to be a powerful tool for describing the multidimensionality and complexity of social phenomena. This book will be of significant interest to economists, statisticians and sociologists utilising quantitative methods to explore socio-economic phenomena.
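A minimal sketch of the underlying idea (the membership function and income thresholds below are invented for illustration): rather than classifying a household as poor or not poor, fuzzy-set methods assign a degree of membership in [0, 1]:

```r
# Fuzzy membership in the set "poor" as a function of income:
# full membership below lo, no membership above hi, linear in between.
poverty_membership <- function(income, lo = 800, hi = 2000) {
  pmin(1, pmax(0, (hi - income) / (hi - lo)))
}

poverty_membership(c(500, 1400, 2500))
# 1.0 (fully poor), 0.5 (partially poor), 0.0 (not poor)
```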
This study examines the determinants of the current account, export market share, and exchange rates. The author identifies key determinants using Bayesian Model Averaging, which allows evaluation of the probability that each variable is in fact a determinant of the analysed competitiveness measure. The main implication of the results presented in the study is that increasing international competitiveness is a gradual process that requires institutional and technological changes rather than short-term adjustments in relative prices.
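A minimal, self-contained sketch of the Bayesian Model Averaging idea (not the author's implementation): enumerate candidate models, weight them by a BIC approximation to the posterior model probability, and sum the weights of the models containing each variable to get its posterior inclusion probability:

```r
# Toy BMA with BIC weights and posterior inclusion probabilities.
set.seed(42)
n <- 200
x1 <- rnorm(n); x2 <- rnorm(n); x3 <- rnorm(n)
y  <- 1 + 0.8 * x1 + rnorm(n)            # only x1 is a true determinant

vars    <- c("x1", "x2", "x3")
subsets <- expand.grid(rep(list(c(FALSE, TRUE)), 3))   # all 8 variable subsets
bic <- apply(subsets, 1, function(keep) {
  rhs <- if (any(keep)) paste(vars[keep], collapse = " + ") else "1"
  BIC(lm(as.formula(paste("y ~", rhs))))
})
w <- exp(-0.5 * (bic - min(bic)))
w <- w / sum(w)                           # approximate posterior model probabilities

pip <- sapply(seq_along(vars), function(j) sum(w[subsets[[j]]]))
setNames(pip, vars)                       # x1's inclusion probability should be near 1
```

The exp(-BIC/2) weighting is the usual large-sample approximation to posterior odds; full BMA implementations differ mainly in the priors placed over models and coefficients.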
This volume collects seven of Marc Nerlove's previously published, classic essays on panel data econometrics written over the past thirty-five years, together with a cogent essay on the history of the subject, which began with George Biddell Airy's monograph published in 1861. Since Professor Nerlove's 1966 Econometrica paper with Pietro Balestra, panel data and methods of econometric analysis appropriate to such data have become increasingly important in the discipline. The principal factors in the research environment affecting the future course of panel data econometrics are the phenomenal growth in the computational power available to the individual researcher at his or her desktop and the ready availability of data sets, both large and small, via the Internet. The best way to formulate statistical models for inference is shaped by the substantive problems at hand and by an understanding of the processes that generate the data used to resolve them. The essays illustrate both the role of the substantive context in shaping appropriate methods of inference and the increasing importance of computer-intensive methods.
This book addresses the functioning of financial markets, in particular the financial market model, and modelling. More specifically, the book provides a model of adaptive preference in the financial market, rather than a model of the adaptive financial market, which is mostly based on Popper's objective propensity for the singular, i.e., unrepeatable, event. As a result, the concept of preference, following Simon's theory of satisficing, is developed in a logical way with the goal of supplying a foundation for a robust theory of adaptive preference in financial market behavior. The book offers new insights into financial market logic and psychology: 1) advocating for the priority of behavior over information, in opposition to traditional financial market theories; 2) constructing the processes of (co)evolution between the financial market and adaptive preference using the concept of fetal reaction norms; 3) presenting a new typology of information in the financial market, aimed at proving point (1) above, as well as edifying an explicative mechanism of the evolutionary nature and behavior of the (real) financial market; 4) presenting sufficient and necessary principles or assumptions for developing a theory of adaptive preference in the financial market; and 5) proposing a new interpretation of the pair genotype-phenotype in the financial market model. The book's distinguishing feature is its research method, which is mainly logical rather than historical or empirical. As a result, the book is targeted at generating debate about the best and most scientifically beneficial method of approaching, analyzing, and modelling financial markets.
Predicting foreign exchange rates has presented a long-standing challenge for economists. However, the recent advances in computational techniques, statistical methods, newer datasets on emerging market currencies, etc., offer some hope. While we are still unable to beat a driftless random walk model, there has been serious progress in the field. This book provides an in-depth assessment of the use of novel statistical approaches and machine learning tools in predicting foreign exchange rate movement. First, it offers a historical account of how exchange rate regimes have evolved over time, which is critical to understanding turning points in a historical time series. It then presents an overview of the previous attempts at modeling exchange rates, and how different methods fared during this process. At the core sections of the book, the author examines the time series characteristics of exchange rates and how contemporary statistics and machine learning can be useful in improving predictive power, compared to previous methods used. Exchange rate determination is an active research area, and this book will appeal to graduate-level students of international economics, international finance, open economy macroeconomics, and management. The book is written in a clear, engaging, and straightforward way, and will greatly improve access to this much-needed knowledge in the field.
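The driftless random walk benchmark mentioned here is easy to state: tomorrow's forecast is simply today's rate. As a minimal sketch of the kind of out-of-sample comparison involved (on simulated data; real exchange rate series would be substituted in practice):

```r
# Compare a driftless random walk forecast with an AR(1) forecast by out-of-sample RMSE.
set.seed(7)
rate  <- cumsum(rnorm(500, sd = 0.01)) + 1.2   # simulated log exchange rate
train <- rate[1:400]; test <- rate[401:500]

# Random walk: the forecast of rate[t+1] is rate[t].
rw_pred <- rate[400:499]

# AR(1) on first differences, one-step-ahead (fit once on the training sample).
fit <- arima(diff(train), order = c(1, 0, 0))
phi <- coef(fit)["ar1"]; mu <- coef(fit)["intercept"]
d   <- diff(rate)                              # d[t] = rate[t+1] - rate[t]
ar_pred <- rate[400:499] + mu + phi * (d[399:498] - mu)

rmse <- function(p) sqrt(mean((test - p)^2))
c(random_walk = rmse(rw_pred), ar1 = rmse(ar_pred))
```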
Praise for the first edition: [This book] reflects the extensive experience and significant contributions of the author to non-linear and non-Gaussian modeling. ... [It] is a valuable book, especially with its broad and accessible introduction of models in the state-space framework. -Statistics in Medicine What distinguishes this book from comparable introductory texts is the use of state-space modeling. Along with this come a number of valuable tools for recursive filtering and smoothing, including the Kalman filter, as well as non-Gaussian and sequential Monte Carlo filters. -MAA Reviews Introduction to Time Series Modeling with Applications in R, Second Edition covers numerous stationary and nonstationary time series models and tools for estimating and utilizing them. The goal of this book is to enable readers to build their own models to understand, predict and master time series. The second edition makes it possible for readers to reproduce examples in this book by using the freely available R package TSSS to perform computations for their own real-world time series problems. This book employs the state-space model as a generic tool for time series modeling and presents the Kalman filter, the non-Gaussian filter and the particle filter as convenient tools for recursive estimation for state-space models. Further, it also takes a unified approach based on the entropy maximization principle and employs various methods of parameter estimation and model selection, including the least squares method, the maximum likelihood method, recursive estimation for state-space models and model selection by AIC. Along with the standard stationary time series models, such as the AR and ARMA models, the book also introduces nonstationary time series models such as the locally stationary AR model, the trend model, the seasonal adjustment model, the time-varying coefficient AR model and nonlinear non-Gaussian state-space models. About the Author: Genshiro Kitagawa is a project professor at the University of Tokyo, the former Director-General of the Institute of Statistical Mathematics, and the former President of the Research Organization of Information and Systems.
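The book's computations use the author's freely available TSSS package, whose interface differs from what follows; as a quick, package-free illustration of Kalman filtering and smoothing in the state-space framework, base R's StructTS fits a local level model:

```r
# Local level (random walk + noise) state-space model fitted via the Kalman filter.
fit <- StructTS(Nile, type = "level")  # Nile: built-in annual river flow series
fit$coef                               # estimated level and observation variances

smoothed <- tsSmooth(fit)              # Kalman smoother estimate of the level
plot(Nile, col = "grey")
lines(smoothed, lwd = 2)               # shows the well-known level drop around 1899
```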
This book has two components: stochastic dynamics and stochastic random combinatorial analysis. The first discusses evolving patterns of interactions of a large but finite number of agents of several types. Changes of agent types or their choices or decisions over time are formulated as jump Markov processes with suitably specified transition rates: optimisations by agents make these rates generally endogenous. Probabilistic equilibrium selection rules are also discussed, together with the distributions of relative sizes of the bases of attraction. As the number of agents approaches infinity, we recover deterministic macroeconomic relations of more conventional economic models. The second component analyses how agents form clusters of various sizes. This has applications for discussing sizes or shares of markets by various agents which involve some combinatorial analysis patterned after the population genetics literature. These are shown to be relevant to distributions of returns to assets, volatility of returns, and power laws. |
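A minimal sketch of the kind of jump Markov dynamics described in the first component (two agent types with invented, exogenous transition rates, whereas the book's rates are generally endogenous), simulated with the standard Gillespie algorithm:

```r
# Continuous-time jump process: n agents of types A/B switching at given rates.
set.seed(3)
n <- 100; nA <- 50                 # start with half of each type
rate_ab <- 0.02; rate_ba <- 0.01   # per-agent switching intensities (invented)

t <- 0; path <- data.frame(time = 0, nA = nA)
while (t < 100) {
  r1 <- nA * rate_ab               # total rate of A -> B jumps
  r2 <- (n - nA) * rate_ba         # total rate of B -> A jumps
  t  <- t + rexp(1, r1 + r2)       # exponential waiting time to the next jump
  nA <- nA + ifelse(runif(1) < r2 / (r1 + r2), 1, -1)
  path <- rbind(path, c(t, nA))
}
plot(path, type = "s", xlab = "time", ylab = "number of type-A agents")
```

The sample path fluctuates around the level where the two flows balance; as n grows, such fluctuations vanish and the deterministic macroeconomic relations the blurb mentions are recovered.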