Dynamic Programming in Economics is an outgrowth of a course intended for first-year PhD students and for researchers in macroeconomic dynamics. It can be used by students and researchers in mathematics as well as in economics. The purpose of Dynamic Programming in Economics is twofold: (a) to provide a rigorous, but not overly complicated, treatment of optimal growth models over an infinite discrete time horizon; (b) to train the reader in the use of optimal growth models and hence help them go further in their research. We are convinced that there is a place for a book that sits somewhere between the "minimum tool kit" and specialized monographs leading to the frontiers of research on optimal growth.
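For readers new to the framework, the optimal growth models the blurb refers to are typically cast as a Bellman equation. A standard textbook form of the one-sector problem (our notation, not necessarily the book's) is:

```latex
V(k) \;=\; \max_{0 \le c \le f(k)} \Big\{\, u(c) + \beta\, V\big(f(k) - c\big) \,\Big\},
\qquad 0 < \beta < 1,
```

where k is the capital stock, c is consumption, u is the utility function, f is the production function, and β is the discount factor; dynamic programming studies the value function V and the consumption policy c(k) that attains the maximum.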
The productivity of a business exerts an important influence on its financial performance. A similar influence exists for industries and economies: those with superior productivity performance thrive at the expense of others. Productivity performance helps explain the growth and demise of businesses and the relative prosperity of nations. Productivity Accounting: The Economics of Business Performance offers an in-depth analysis of variation in business performance, providing the reader with an analytical framework within which to account for this variation and its causes and consequences. The primary focus is the individual business, and the principal consequence of business productivity performance is business financial performance. Alternative measures of financial performance are considered, including profit, profitability, cost, unit cost, and return on assets. Combining analytical rigor with empirical illustrations, the analysis draws on wide-ranging literatures, both historical and current, from business and economics, and explains how businesses create value and distribute it.
This is a very useful and timely book, as demand forecasting has become a crucial tool that provides important information for destinations, on which policies are created and implemented. This is especially important given the complexities arising in the aftermath of the Covid-19 pandemic. * It looks at novel and recent developments in this field, including judgement and scenario forecasting. * It offers a comprehensive approach to tourism econometrics, looking at a variety of aspects. * The authors are experts in this field and of the highest academic calibre.
In 1956, Solow proposed a neoclassical growth model in opposition or as an alternative to Keynesian growth models. The Solow model of economic growth provided foundations for models embedded in the new theory of economic growth, known as the theory of endogenous growth, such as the renowned growth models developed by Paul M. Romer and Robert E. Lucas in the 1980s and 90s. The augmentations of the Solow model described in this book, excepting the Phelps golden rules of capital accumulation and the Mankiw-Romer-Weil and Nonneman-Vanhoudt models, were developed by the authors over the last two decades. The book identifies six spheres of interest in modern macroeconomic theory: the impact of fiscal and monetary policy on growth; the effect of different returns to scale on production; the influence of mobility of factors of production among different countries on their development; the effect of population dynamics on growth; the periodicity of investment rates and their influence on growth; and the effect of exogenous shocks in the form of an epidemic. For each of these issues, the authors construct and analyze an appropriate growth model that focuses on the description of the specific macroeconomic problem. This book not only continues the neoclassical tradition of thought in economics focused on quantitative economic change but also, and to a significant extent, discusses alternative approaches to certain questions of economic growth, utilizing conclusions that can be drawn from the Solow model. It is a useful tool in analyzing contemporary issues related to growth.
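As a reference point for the augmentations described above, the core of the Solow model is a single capital-accumulation equation. In standard per-worker notation (ours, not quoted from the book):

```latex
\dot{k}(t) \;=\; s\, f\big(k(t)\big) - (n + \delta)\, k(t),
```

where k is capital per worker, s the saving rate, n the population growth rate, and δ the depreciation rate; the steady state k* solves s f(k*) = (n + δ) k*, and the augmentations discussed in the book modify ingredients of this setup.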
This title investigates contemporary financial issues in the e-commerce market.
Generalized Method of Moments (GMM) has become one of the main statistical tools for the analysis of economic and financial data. This book is the first to provide an intuitive introduction to the method combined with a unified treatment of GMM statistical theory and a survey of recent important developments in the field. Providing a comprehensive treatment of GMM estimation and inference, it is designed as a resource for both the theory and practice of GMM: it discusses and proves formally all the main statistical results, and illustrates all inference techniques using empirical examples in macroeconomics and finance. Building from the instrumental variables estimator in static linear models, it presents the asymptotic statistical theory of GMM in nonlinear dynamic models. Within this framework it covers classical results on estimation and inference techniques, such as the overidentifying restrictions test and tests of structural stability, and reviews the finite sample performance of these inference methods. And it discusses in detail recent developments on covariance matrix estimation, the impact of model misspecification, moment selection, the use of the bootstrap, and weak instrument asymptotics.
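To fix ideas, the estimator around which the book's theory is organized minimizes a quadratic form in the sample moment conditions. In generic notation (ours):

```latex
\hat{\theta}_{GMM} \;=\; \arg\min_{\theta}\;
\bar{g}_T(\theta)'\, W_T\, \bar{g}_T(\theta),
\qquad
\bar{g}_T(\theta) \;=\; \frac{1}{T}\sum_{t=1}^{T} g(x_t, \theta),
```

where the moment functions g satisfy E[g(x_t, θ)] = 0 at the true parameter and W_T is a positive semi-definite weighting matrix; the overidentifying restrictions test mentioned in the blurb checks whether all moment conditions can hold simultaneously.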
Showcasing fuzzy set theory, this book highlights the enormous potential of fuzzy logic in helping to analyse the complexity of a wide range of socio-economic patterns and behaviour. The contributions to this volume explore the most up-to-date fuzzy-set methods for the measurement of socio-economic phenomena in a multidimensional and/or dynamic perspective. Thus far, fuzzy-set theory has primarily been utilised in the social sciences in the field of poverty measurement. These chapters examine the latest work in this area, while also exploring further applications including social exclusion, the labour market, educational mismatch, sustainability, quality of life and violence against women. The authors demonstrate that real-world situations are often characterised by imprecision, uncertainty and vagueness, which cannot be properly described by classical set theory, which uses simple true-false binary logic. By contrast, fuzzy-set theory has been shown to be a powerful tool for describing the multidimensionality and complexity of social phenomena. This book will be of significant interest to economists, statisticians and sociologists utilising quantitative methods to explore socio-economic phenomena.
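As a minimal illustration of the binary-versus-fuzzy contrast the authors draw, the sketch below scores income poverty first as a crisp 0/1 indicator and then as a graded membership in [0, 1]. The thresholds and the linear membership function are our illustrative assumptions, not taken from the book:

```python
def crisp_poor(income: float, line: float = 1000.0) -> int:
    """Classical set: a household is either poor (1) or not poor (0)."""
    return 1 if income < line else 0


def fuzzy_poor(income: float, lower: float = 800.0, upper: float = 1600.0) -> float:
    """Fuzzy set: membership in 'poor' declines linearly from 1
    (clearly poor) to 0 (clearly not poor) between two thresholds."""
    if income <= lower:
        return 1.0
    if income >= upper:
        return 0.0
    return (upper - income) / (upper - lower)


if __name__ == "__main__":
    for inc in (700, 1000, 1300, 1700):
        print(inc, crisp_poor(inc), round(fuzzy_poor(inc), 2))
```

Incomes near the poverty line receive intermediate membership grades instead of being forced into one of two boxes, which is exactly the imprecision the chapters exploit.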
Master key spreadsheet and business analytics skills with SPREADSHEET MODELING AND DECISION ANALYSIS: A PRACTICAL INTRODUCTION TO BUSINESS ANALYTICS, 9E, written by respected business analytics innovator Cliff Ragsdale. This edition's clear presentation, realistic examples, fascinating topics and valuable software provide everything you need to become proficient in today's most widely used business analytics techniques using the latest version of Excel® in Microsoft® Office 365 or Office 2019. Become skilled in the newest Excel functions as well as Analytic Solver® and Data Mining add-ins. This edition helps you develop both algebraic and spreadsheet modeling skills. Step-by-step instructions and annotated, full-color screen images make examples easy to follow and show you how to apply what you learn about descriptive, predictive and prescriptive analytics to real business situations. WebAssign online tools and author-created videos further strengthen understanding.
DEA is computational at its core, and this book will be one of several that we look to publish on the computational aspects of DEA. This book by Zhu and Cook deals with the micro aspects of handling and modeling data issues in DEA problems. DEA's use has grown with its capability to deal with the complex problems of the service industry and the public service domain that require modeling both qualitative and quantitative data. This handbook treatment deals with specific data problems, including: (1) imprecise data, (2) inaccurate data, (3) missing data, (4) qualitative data, (5) outliers, (6) undesirable outputs, (7) quality data, (8) statistical analysis, and (9) software and other data aspects of modeling complex DEA problems. In addition, the book demonstrates how to visualize DEA results when the data are more than 3-dimensional, and how to identify efficient units quickly and accurately.
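Since the blurb stresses that DEA is computational at its core, here is a minimal input-oriented CCR efficiency calculation, the textbook starting point for the models the handbook builds on. The formulation and toy data are our illustrative sketch, not code or data from the book:

```python
import numpy as np
from scipy.optimize import linprog


def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU `o`.
    X: (n, m) array of inputs, Y: (n, s) array of outputs; rows are DMUs.
    Decision variables: [theta, lambda_1, ..., lambda_n]."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                     # minimize theta
    # inputs: sum_j lambda_j x_ji <= theta * x_oi
    A_in = np.c_[-X[o].reshape(m, 1), X.T]
    # outputs: sum_j lambda_j y_jr >= y_or, written as <= constraints
    A_out = np.c_[np.zeros((s, 1)), -Y.T]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[o]], method="highs")
    return res.fun                                  # efficiency score in (0, 1]


# toy data: 4 DMUs, 2 inputs, 1 output (illustrative numbers only)
X = np.array([[2.0, 3.0], [4.0, 1.0], [4.0, 4.0], [5.0, 2.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0]])
for o in range(4):
    print(f"DMU {o}: efficiency = {ccr_efficiency(X, Y, o):.3f}")
```

Every data problem listed above (imprecision, missing values, outliers) complicates the X and Y matrices that enter this linear program, which is what motivates the handbook's chapters.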
Does game theory (the mathematical theory of strategic interaction) provide genuine explanations of human behaviour? Can game theory be used in economic consultancy or other normative contexts? Explaining Games: The Epistemic Programme in Game Theory, the first monograph on the philosophy of game theory, is a bold attempt to combine insights from epistemic logic and the philosophy of science to investigate the applicability of game theory in such fields as economics, philosophy and strategic consultancy. De Bruin proves new mathematical theorems about the beliefs, desires and rationality principles of individual human beings, and he explores in detail the logical form of game theory as it is used in explanatory and normative contexts. He argues that game theory reduces to rational choice theory if used as an explanatory device, and that game theory is nonsensical if used as a normative device. A provocative account of the history of game theory reveals that this is not bad news for all of game theory, though. Two central research programmes in game theory tried to find the ultimate characterisation of strategic interaction between rational agents. Yet, while the Nash Equilibrium Refinement Programme has fared badly owing to such research habits as overmathematisation, model-tinkering and introversion, the Epistemic Programme, De Bruin argues, has been rather successful in achieving this aim.
This volume provides practical solutions and introduces recent theoretical developments in risk management, pricing of credit derivatives, quantification of volatility and copula modeling. This third edition is devoted to modern risk analysis based on quantitative methods and textual analytics to meet the current challenges in banking and finance. It includes 14 new contributions and presents a comprehensive, state-of-the-art treatment of cutting-edge methods and topics, such as collateralized debt obligations, the high-frequency analysis of market liquidity, and realized volatility. The book is divided into three parts: Part 1 revisits important market risk issues, while Part 2 introduces novel concepts in credit risk and its management along with updated quantitative methods. The third part discusses the dynamics of risk management and includes risk analysis of energy markets and of cryptocurrencies. Digital assets, such as blockchain-based currencies, have become popular but are theoretically challenging when analyzed with conventional methods. Among others, the book introduces a modern text-mining method called dynamic topic modeling in detail and applies it to a Bitcoin message board. The unique synthesis of theory and practice supported by computational tools is reflected not only in the selection of topics, but also in the fine balance of scientific contributions on practical implementation and theoretical concepts. This link between theory and practice offers theoreticians insights into considerations of applicability and, vice versa, provides practitioners convenient access to new techniques in quantitative finance. Hence the book will appeal both to researchers, including master's and PhD students, and to practitioners, such as financial engineers. The results presented in the book are fully reproducible and all quantlets needed for calculations are provided on an accompanying website. The Quantlet platform (quantlet.de, quantlet.com, quantlet.org) is an integrated QuantNet environment consisting of different types of statistics-related documents and program codes. Its goal is to promote reproducibility and offer a platform for sharing validated knowledge native to the social web. QuantNet and the corresponding Data-Driven Documents-based visualization allow readers to reproduce the tables, pictures and calculations inside this Springer book.
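Of the topics listed, realized volatility is the simplest to make concrete: it aggregates squared intraday log returns into a daily volatility measure. A minimal sketch with simulated prices (our illustration, not a quantlet from the book's website):

```python
import numpy as np


def realized_vol(prices: np.ndarray) -> float:
    """Realized volatility for one day: the square root of the
    sum of squared intraday log returns."""
    r = np.diff(np.log(prices))
    return float(np.sqrt(np.sum(r ** 2)))


# illustrative: 78 five-minute prices over one simulated trading day
rng = np.random.default_rng(0)
prices = 100.0 * np.exp(np.cumsum(rng.normal(0.0, 0.001, size=78)))
print(f"daily realized volatility: {realized_vol(prices):.4f}")
```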
Predicting foreign exchange rates has presented a long-standing challenge for economists. However, the recent advances in computational techniques, statistical methods, newer datasets on emerging market currencies, etc., offer some hope. While we are still unable to beat a driftless random walk model, there has been serious progress in the field. This book provides an in-depth assessment of the use of novel statistical approaches and machine learning tools in predicting foreign exchange rate movement. First, it offers a historical account of how exchange rate regimes have evolved over time, which is critical to understanding turning points in a historical time series. It then presents an overview of the previous attempts at modeling exchange rates, and how different methods fared during this process. At the core sections of the book, the author examines the time series characteristics of exchange rates and how contemporary statistics and machine learning can be useful in improving predictive power, compared to previous methods used. Exchange rate determination is an active research area, and this book will appeal to graduate-level students of international economics, international finance, open economy macroeconomics, and management. The book is written in a clear, engaging, and straightforward way, and will greatly improve access to this much-needed knowledge in the field.
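The driftless random walk benchmark mentioned above is easy to state: the forecast of tomorrow's rate is simply today's rate, and any candidate model must beat its out-of-sample error. A minimal sketch with simulated data (our illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
# simulated log exchange rate: a driftless random walk
e = np.cumsum(rng.normal(0.0, 0.005, size=500))

# random-walk forecast: the next value equals the current value
rw_forecast = e[:-1]
rw_rmse = np.sqrt(np.mean((e[1:] - rw_forecast) ** 2))
print(f"random-walk benchmark RMSE: {rw_rmse:.5f}")
```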
JEAN-FRANÇOIS MERTENS This book presents a systematic exposition of the use of game theoretic methods in general equilibrium analysis. Clearly the first such use was by Arrow and Debreu, with the "birth" of general equilibrium theory itself, in using Nash's existence theorem (or a generalization) to prove the existence of a competitive equilibrium. But this use appeared possibly to be merely technical, borrowing some tools for proving a theorem. This book stresses the later contributions, where game theoretic concepts were used as such, to explain various aspects of the general equilibrium model. But clearly, each of those later approaches also provides per se a game theoretic proof of the existence of competitive equilibrium. Part A deals with the first such approach: the equality between the set of competitive equilibria of a perfectly competitive (i.e., every trader has negligible market power) economy and the core of the corresponding cooperative game.
The Who, What, and Where of America is designed to provide a sampling of key demographic information. It covers the United States, every state, each metropolitan statistical area, and all the counties and cities with a population of 20,000 or more.
Who: Age, Race and Ethnicity, and Household Structure
What: Education, Employment, and Income
Where: Migration, Housing, and Transportation
Each part is preceded by highlights and ranking tables that show how areas diverge from the national norm. These research aids are invaluable for understanding data from the American Community Survey (ACS) and for highlighting what it tells us about who we are, what we do, and where we live. Each topic is divided into four tables revealing the results of the data collected from different types of geographic areas in the United States, generally with populations greater than 20,000:
Table A. States
Table B. Counties
Table C. Metropolitan Areas
Table D. Cities
In this edition, you will find social and economic estimates on the ways American communities are changing with regard to the following:
Age and race
Health care coverage
Marital history
Educational attainment
Income and occupation
Commute time to work
Employment status
Home values and monthly costs
Veteran status
Size of home or rental unit
This title is the latest in the County and City Extra Series of publications from Bernan Press. Other titles include County and City Extra; County and City Extra: Special Decennial Census Edition; and Places, Towns, and Townships.
The Handbook of U.S. Labor Statistics is recognized as an authoritative resource on the U.S. labor force. It continues and enhances the Bureau of Labor Statistics' (BLS) discontinued publication, Labor Statistics. It allows the user to understand recent developments as well as to compare today's economy with past history. This edition examines the impact that COVID-19 had on the labor market throughout 2020. Specifically, it discusses the sharp decline in employment, the rise of telework, and information on how Americans used their stimulus payments. In addition, this edition includes a completely updated chapter on prices and inflation. The Handbook is a comprehensive reference providing an abundance of data on a variety of topics including:
Employment and unemployment
Earnings
Prices
Productivity
Consumer expenditures
Occupational safety and health
Union membership
Working poor
Recent trends in the labor force
And much more!
Features of the publication: In addition to over 215 tables that present practical data, the Handbook provides:
Introductory material for each chapter that contains highlights of salient data and figures that call attention to noteworthy trends in the data
Notes and definitions, which contain concise descriptions of the data sources, concepts, definitions, and methodology from which the data are derived
References to more comprehensive reports which provide additional data and more extensive descriptions of estimation methods, sampling, and reliability measures
Market Analysis for Real Estate is a comprehensive introduction to how real estate markets work and the analytical tools and techniques that can be used to identify and interpret market signals. The markets for space and varied property assets, including residential, office, retail, and industrial, are presented, analyzed, and integrated into a complete understanding of the role of real estate markets within the workings of contemporary urban economies. Unlike other books on market analysis, the economic and financial theory in this book is rigorous and well integrated with the specifics of the real estate market. Furthermore, the theory is thoroughly explained, since the book assumes no previous coursework in economics or finance on the part of the reader. The theoretical discussion is backed up with numerous real estate case study examples and problems, which are presented throughout the text to assist both student and teacher. Including discussion questions, exercises, several web links, and online slides, this textbook is suitable for use on a variety of degree programs in real estate, finance, business, planning, and economics at undergraduate and MSc/MBA level. It is also a useful primer for professionals in these disciplines.
Praise for the first edition:
"[This book] reflects the extensive experience and significant contributions of the author to non-linear and non-Gaussian modeling. ... [It] is a valuable book, especially with its broad and accessible introduction of models in the state-space framework." -Statistics in Medicine
"What distinguishes this book from comparable introductory texts is the use of state-space modeling. Along with this come a number of valuable tools for recursive filtering and smoothing, including the Kalman filter, as well as non-Gaussian and sequential Monte Carlo filters." -MAA Reviews
Introduction to Time Series Modeling with Applications in R, Second Edition covers numerous stationary and nonstationary time series models and tools for estimating and utilizing them. The goal of this book is to enable readers to build their own models to understand, predict and master time series. The second edition makes it possible for readers to reproduce examples in this book by using the freely available R package TSSS to perform computations for their own real-world time series problems. This book employs the state-space model as a generic tool for time series modeling and presents the Kalman filter, the non-Gaussian filter and the particle filter as convenient tools for recursive estimation for state-space models. Further, it also takes a unified approach based on the entropy maximization principle and employs various methods of parameter estimation and model selection, including the least squares method, the maximum likelihood method, recursive estimation for state-space models and model selection by AIC. Along with the standard stationary time series models, such as the AR and ARMA models, the book also introduces nonstationary time series models such as the locally stationary AR model, the trend model, the seasonal adjustment model, the time-varying coefficient AR model and nonlinear non-Gaussian state-space models. About the Author: Genshiro Kitagawa is a project professor at the University of Tokyo, the former Director-General of the Institute of Statistical Mathematics, and the former President of the Research Organization of Information and Systems.
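The local level model is the simplest state-space example of the kind the book develops, and its Kalman filter fits in a few lines. This scalar sketch is our illustration, not code from the book or from the TSSS package:

```python
import numpy as np


def kalman_local_level(y, sigma_eps2=1.0, sigma_eta2=0.1):
    """Kalman filter for the local level model
        y_t  = mu_t + eps_t,     eps_t ~ N(0, sigma_eps2)
        mu_t = mu_{t-1} + eta_t, eta_t ~ N(0, sigma_eta2)
    Returns the filtered state means."""
    mu, P = y[0], 1e6                 # near-diffuse initialization
    filtered = []
    for obs in y:
        P = P + sigma_eta2            # prediction step
        K = P / (P + sigma_eps2)      # Kalman gain
        mu = mu + K * (obs - mu)      # update step
        P = (1.0 - K) * P
        filtered.append(mu)
    return np.array(filtered)


rng = np.random.default_rng(42)
level = np.cumsum(rng.normal(0.0, 0.3, 200))   # true random-walk level
y = level + rng.normal(0.0, 1.0, 200)          # noisy observations
print(kalman_local_level(y)[-5:])
```

The non-Gaussian and particle filters covered in the book generalize exactly this recursion to models where the Gaussian update above is no longer available in closed form.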
Doing Statistical Analysis looks at three kinds of statistical research questions - descriptive, associational, and inferential - and shows students how to conduct statistical analyses and interpret the results. Keeping equations to a minimum, it uses a conversational style and relatable examples such as football, COVID-19, and tourism, to aid understanding. Each chapter contains practice exercises, and a section showing students how to reproduce the statistical results in the book using Stata and SPSS. Digital supplements consist of data sets in Stata, SPSS, and Excel, and a test bank for instructors. Its accessible approach means this is the ideal textbook for undergraduate students across the social and behavioral sciences needing to build their confidence with statistical analysis.
Students in both social and natural sciences often seek regression methods to explain the frequency of events, such as visits to a doctor, auto accidents, or new patents awarded. This book provides the most comprehensive and up-to-date account of models and methods to interpret such data. The authors have conducted research in the field for more than twenty-five years. In this book, they combine theory and practice to make sophisticated methods of analysis accessible to researchers and practitioners working with widely different types of data and software in areas such as applied statistics, econometrics, marketing, operations research, actuarial studies, demography, biostatistics, and quantitative social sciences. The book may be used as a reference work on count models or by students seeking an authoritative overview. Complementary material in the form of data sets, template programs, and bibliographic resources can be accessed on the Internet through the authors' homepages. This second edition is an expanded and updated version of the first, with new empirical examples and more than one hundred new references added. The new material includes new theoretical topics, an updated and expanded treatment of cross-section models, coverage of bootstrap-based and simulation-based inference, expanded treatment of time series, multivariate and panel data, expanded treatment of endogenous regressors, coverage of quantile count regression, and a new chapter on Bayesian methods.
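The natural starting point for the count models surveyed in the book is Poisson regression. A minimal fit on simulated data with statsmodels (our illustration; the book's own datasets and template programs are on the authors' homepages):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 500
x = rng.normal(size=n)
# counts drawn from a log-linear Poisson model: E[y|x] = exp(0.5 + 0.8 x)
y = rng.poisson(np.exp(0.5 + 0.8 * x))

X = sm.add_constant(x)                 # intercept plus regressor
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(fit.params)                      # estimates close to (0.5, 0.8)
```

Overdispersed counts, the usual failure mode of this baseline model, are what motivate the richer extensions of the kind the book treats.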
Machine learning (ML) is progressively reshaping the fields of quantitative finance and algorithmic trading. ML tools are increasingly adopted by hedge funds and asset managers, notably for alpha signal generation and stock selection. The technicality of the subject can make it hard for non-specialists to join the bandwagon, as the jargon and coding requirements may seem out of reach. Machine Learning for Factor Investing: R Version bridges this gap. It provides a comprehensive tour of modern ML-based investment strategies that rely on firm characteristics. The book covers a wide array of subjects which range from economic rationales to rigorous portfolio back-testing and encompass both data processing and model interpretability. Common supervised learning algorithms such as tree models and neural networks are explained in the context of style investing and the reader can also dig into more complex techniques like autoencoder asset returns, Bayesian additive trees, and causal models. All topics are illustrated with self-contained R code samples and snippets that are applied to a large public dataset that contains over 90 predictors. The material, along with the content of the book, is available online so that readers can reproduce and enhance the examples at their convenience. If you have even a basic knowledge of quantitative finance, this combination of theoretical concepts and practical illustrations will help you learn quickly and deepen your financial and technical expertise.
This is the first book that examines the diverse range of experimental methods currently being used in the social sciences, gathering contributions by working economists engaged in experimentation, as well as by a political scientist, psychologists and philosophers of the social sciences. Until the mid-twentieth century, most economists believed that experiments in the economic sciences were impossible. But that's hardly the case today, as evinced by the fact that Vernon Smith, an experimental economist, and Daniel Kahneman, a behavioral economist, won the Nobel Prize in Economics in 2002. However, the current use of experimental methods in economics is more diverse than is usually assumed. As the concept of experimentation underwent considerable abstraction throughout the twentieth century, the areas of the social sciences in which experiments are applied are expanding, creating renewed interest in, and multifaceted debates on, the way experimental methods are used. This book sheds new light on the diversity of experimental methodologies used in the social sciences. The topics covered include historical insights into the evolution of experimental methods; the necessary "performativity" of experiments, i.e., the dynamic interaction with the social contexts in which they are embedded; the application of causal inferences in the social sciences; a comparison of laboratory, field, and natural experiments; and the recent use of randomized controlled trials (RCTs) in development economics. Several chapters also deal with the latest heated debates, such as those concerning the use of the random lottery method in laboratory experiments.
Ranking of Multivariate Populations: A Permutation Approach with Applications presents a novel permutation-based nonparametric approach for ranking several multivariate populations. Using data collected from both experimental and observational studies, it covers some of the most useful designs widely applied in research and industry investigations, such as multivariate analysis of variance (MANOVA) and multivariate randomized complete block (MRCB) designs. The first section of the book introduces the topic of ranking multivariate populations by presenting the main theoretical ideas and an in-depth literature review. The second section discusses a large number of real case studies from four specific research areas: new product development in industry, perceived quality of the indoor environment, customer satisfaction, and cytological and histological analysis by image processing. A web-based nonparametric combination global ranking software is also described. Designed for practitioners and postgraduate students in statistics and the applied sciences, this application-oriented book offers a practical guide to the reliable global ranking of multivariate items, such as products, processes, and services, in terms of the performance of all investigated products/prototypes.
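The elementary building block of the permutation approach is the permutation test itself. The sketch below is a basic two-sample version on simulated data (our illustration; the book's global ranking method combines many such component tests via nonparametric combination):

```python
import numpy as np


def perm_test(a, b, n_perm=10_000, seed=0):
    """Two-sided permutation test for a difference in means."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([a, b])
    observed = a.mean() - b.mean()
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                       # random relabeling
        diff = pooled[:len(a)].mean() - pooled[len(a):].mean()
        if abs(diff) >= abs(observed):
            hits += 1
    return hits / n_perm                          # permutation p-value


rng = np.random.default_rng(1)
a = rng.normal(0.5, 1.0, 30)
b = rng.normal(0.0, 1.0, 30)
print(f"p-value: {perm_test(a, b):.4f}")
```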
This book addresses the functioning of financial markets, in particular the financial market model and modelling. More specifically, the book provides a model of adaptive preference in the financial market, rather than a model of the adaptive financial market, and is mostly based on Popper's objective propensity for the singular, i.e., unrepeatable, event. As a result, the concept of preference, following Simon's theory of satisficing, is developed in a logical way with the goal of supplying a foundation for a robust theory of adaptive preference in financial market behavior. The book offers new insights into financial market logic and psychology: 1) advocating for the priority of behavior over information, in opposition to traditional financial market theories; 2) constructing the processes of (co)evolution between financial market and adaptive preference using the concept of fetal reaction norms; 3) presenting a new typology of information in the financial market, aimed at proving point (1) above, as well as edifying an explicative mechanism of the evolutionary nature and behavior of the (real) financial market; 4) presenting sufficient and necessary principles or assumptions for developing a theory of adaptive preference in the financial market; and 5) proposing a new interpretation of the pair genotype-phenotype in the financial market model. The book's distinguishing feature is its research method, which is mainly logical rather than historical or empirical. As a result, the book is targeted at generating debate about the best and most scientifically beneficial method of approaching, analyzing, and modelling financial markets.
This book provides an introduction to the technical background of unit root testing, one of the most heavily researched areas in econometrics over the last twenty years. Starting from an elementary understanding of probability and time series, it develops the key concepts necessary to understand the structure of random walks and Brownian motion, and their role in tests for a unit root. The techniques are illustrated with worked examples, data and programs available on the book's website, which includes more numerical and theoretical examples.
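A quick way to see a unit root test in action is the augmented Dickey-Fuller test on two simulated series, one with a unit root and one without. This sketch uses statsmodels and is our illustration, not one of the examples from the book's website:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
rw = np.cumsum(rng.normal(size=500))     # random walk: has a unit root
ar = np.zeros(500)                       # stationary AR(1), coefficient 0.5
for t in range(1, 500):
    ar[t] = 0.5 * ar[t - 1] + rng.normal()

for name, series in (("random walk", rw), ("AR(0.5)", ar)):
    stat, pvalue, *_ = adfuller(series)
    print(f"{name}: ADF statistic = {stat:.2f}, p-value = {pvalue:.3f}")
```

The random walk should fail to reject the unit root null, while the stationary AR(1) should reject it comfortably.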
Mastering the basic concepts of mathematics is the key to understanding other subjects such as Economics, Finance, Statistics, and Accounting. Mathematics for Finance, Business and Economics is written informally for easy comprehension. Unlike traditional textbooks, it provides a combination of explanations, exploration and real-life applications of major concepts. Mathematics for Finance, Business and Economics discusses elementary mathematical operations, linear and non-linear functions and equations, differentiation and optimization, economic functions, summation, percentages and interest, arithmetic and geometric series, present and future values of annuities, matrices and Markov chains. Aided by the discussion of real-world problems and solutions, students across the business and economics disciplines will find this textbook perfect for gaining an understanding of a core plank of their studies.
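As one example of the kind of result covered, the present value of an ordinary annuity follows from summing a geometric series; in standard notation:

```latex
PV \;=\; C \sum_{t=1}^{n} \frac{1}{(1+r)^{t}}
   \;=\; C \,\frac{1 - (1+r)^{-n}}{r},
```

where C is the level payment per period, r the per-period interest rate, and n the number of payments. For example, C = 100, r = 0.05 and n = 10 give PV ≈ 772.17.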
You may like...
Statistics for Business & Economics… by James McClave, P Benson, … (Paperback), R2,304 / Discovery Miles 23 040
Financial and Macroeconomic… by Francis X. Diebold, Kamil Yilmaz (Hardcover), R3,524 / Discovery Miles 35 240
Introductory Econometrics - A Modern… by Jeffrey Wooldridge (Hardcover)
Handbook of Experimental Game Theory by C. M. Capra, Rachel T. A. Croson, … (Hardcover), R6,105 / Discovery Miles 61 050
Tax Policy and Uncertainty - Modelling… by Christopher Ball, John Creedy, … (Hardcover), R2,508 / Discovery Miles 25 080
Kwantitatiewe statistiese tegnieke by Swanepoel Swanepoel, Vivier Vivier, … (Book)
The Mahalanobis Growth Model - A… by Chetan Ghate, Pawan Gopalakrishnan, … (Hardcover), R1,860 / Discovery Miles 18 600