The contents of this volume comprise the proceedings of the conference "Equilibrium Theory and Applications," which presented recent developments in general equilibrium theory from the perspective of actual and potential applications. The conference was organized in honor of Jacques Drèze on the occasion of his sixtieth birthday. Held at C.O.R.E., it also marked the unanimous recognition, stressed by Gérard Debreu in his address, of Drèze's role as "the architect and builder" of the Center for Operations Research and Econometrics. An introductory address by Gérard Debreu comprises Part 1 of the volume. The rest of the volume is divided into four parts spanning the scope of the conference: Part 2 covers incomplete markets, increasing returns, and information; Part 3, equilibrium and dynamics; Part 4, employment, imperfect competition, and macroeconomics; and Part 5, applied general equilibrium models.
This collection of papers delivered at the Fifth International Symposium in Economic Theory and Econometrics in 1988 is devoted to the estimation and testing of models that impose relatively weak restrictions on the stochastic behaviour of data. Particularly in highly non-linear models, empirical results are very sensitive to the choice of the parametric form of the distribution of the observable variables, and often nonparametric and semiparametric models are a preferable alternative. Methods and applications that do not require strong parametric assumptions for their validity, that are based on kernels and on series expansions, and methods for independent and dependent observations are investigated and developed in these essays by renowned econometricians.
The 2008 credit crisis started with the failure of one large bank: Lehman Brothers. Since then the focus of both politicians and regulators has been on stabilising the economy and preventing future financial instability. At this juncture, we are at the last stage of future-proofing the financial sector by raising capital requirements and tightening financial regulation. Now the policy agenda needs to concentrate on transforming the banking sector into an engine for growth. Reviving competition in the banking sector after the state interventions of the past years is a key step in this process. This book introduces and explains a relatively new concept in competition measurement: the performance-conduct-structure (PCS) indicator. The key idea behind this measure is that a firm's efficiency is more highly rewarded in terms of market share and profit the stronger the competitive pressure is. The book begins by explaining the financial market's fundamental obstacles to competition, presenting a brief survey of the complex relationship between financial stability and competition. The theoretical contributions of Hay and Liu and of Boone provide the underpinning for the PCS indicator, while its application to banking and insurance illustrates its empirical qualities. Finally, this book presents a systematic comparison between the results of this approach and (all) existing methods as applied to 46 countries over the same sample period. This book presents a comprehensive overview of the knowns and unknowns of financial sector competition for commercial and central bankers, policy-makers, supervisors and academics alike.
This book provides an introduction to the use of statistical concepts and methods to model and analyze financial data. The ten chapters of the book fall naturally into three sections. Chapters 1 to 3 cover some basic concepts of finance, focusing on the properties of returns on an asset. Chapters 4 through 6 cover aspects of portfolio theory and the methods of estimation needed to implement that theory. The remainder of the book, Chapters 7 through 10, discusses several models for financial data, along with the implications of those models for portfolio theory and for understanding the properties of return data. The audience for the book is students majoring in Statistics and Economics as well as in quantitative fields such as Mathematics and Engineering. Readers are assumed to have some background in statistical methods along with courses in multivariate calculus and linear algebra.
This book is about learning from data using the Generalized Additive Models for Location, Scale and Shape (GAMLSS). GAMLSS extends the Generalized Linear Models (GLMs) and Generalized Additive Models (GAMs) to accommodate large, complex datasets, which are increasingly prevalent. In particular, the GAMLSS statistical framework enables flexible regression and smoothing models to be fitted to the data. The GAMLSS model assumes that the response variable has any parametric (continuous, discrete or mixed) distribution, which might be heavy- or light-tailed, and positively or negatively skewed. In addition, all the parameters of the distribution (location, scale, shape) can be modelled as linear or smooth functions of explanatory variables. Key features:
- Provides a broad overview of flexible regression and smoothing techniques for learning from data, while focusing on the practical application of the methodology using the GAMLSS software in R.
- Includes a comprehensive collection of real data examples, which reflect the range of problems addressed by GAMLSS models and provide a practical illustration of the process of using flexible GAMLSS models for statistical learning.
- Integrates R code into the text for ease of understanding and replication.
- Is supplemented by a website with code, data and extra materials.
This book aims to help readers understand how to learn from data encountered in many fields. It will be useful for practitioners and researchers who wish to understand and use the GAMLSS models to learn from data, and also for students who wish to learn GAMLSS through practical examples.
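The core GAMLSS idea, namely that every distribution parameter gets its own regression on the covariates, can be illustrated with a small sketch. The following Python snippet (an illustrative toy on simulated data, not the book's R software) fits a normal model in which both the mean and the log standard deviation are linear in a single covariate, by direct maximum likelihood:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
# Simulated data: both the location and the scale depend on x
y = (1.0 + 2.0 * x) + np.exp(-1.0 + 1.5 * x) * rng.standard_normal(200)

def neg_log_lik(theta):
    a, b, c, d = theta
    mu = a + b * x                # location as a linear function of x
    sigma = np.exp(c + d * x)     # scale modelled on the log scale, so it stays positive
    return np.sum(0.5 * np.log(2 * np.pi) + np.log(sigma)
                  + 0.5 * ((y - mu) / sigma) ** 2)

fit = minimize(neg_log_lik, x0=np.zeros(4), method="Nelder-Mead",
               options={"maxiter": 10000, "maxfev": 10000})
a, b, c, d = fit.x  # estimates should land near the true values (1.0, 2.0, -1.0, 1.5)
```

Real GAMLSS replaces the two linear predictors with smooth functions and allows distributions far beyond the normal, but the fitting principle is the same.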
This book presents estimates of the sources of economic growth in Canada. The experimental measures account for the reproducibility of capital inputs in an input-output framework and show that advances in technology are more important for economic growth than previously estimated. Traditional measures of multifactor productivity advance are also presented. Extensive comparisons relate the two approaches to each other and to labour productivity. The book will be of interest to macroeconomists studying economic growth, capital accumulation, technical advance, growth accounting, and input-output analysis.
This book addresses the disparities that arise when measuring and modeling societal behavior and progress across the social sciences. It looks at why and how different disciplines and even researchers can use the same data and yet come to different conclusions about equality of opportunity, economic and social mobility, poverty and polarization, and conflict and segregation. Because societal behavior and progress exist only in the context of other key aspects, modeling becomes exponentially more complex as more of these aspects are factored into considerations. The content of this book transcends disciplinary boundaries, providing valuable information on measuring and modeling to economists, sociologists, and political scientists who are interested in data-based analysis of pressing social issues.
Oil and gas industries apply several techniques for assessing and mitigating the risks that are inherent in their operations. In this context, the application of Bayesian Networks (BNs) to risk assessment offers a different probabilistic version of causal reasoning. Introducing the probabilistic nature of hazards, conditional probability and Bayesian thinking, the book discusses how the causes and effects of process hazards can be modelled using BNs and how large BNs can be developed from basic building blocks. The focus is on the development of BNs for typical industry equipment, including accident case studies and the use of BNs alongside other conventional risk assessment methods. Aimed at professionals in the oil and gas industry, safety engineering and risk assessment, this book:
- Brings together the basics of Bayesian theory and Bayesian Networks and applies them to process safety hazards and risk assessment in the oil and gas industry
- Presents a systematic sequence of steps for setting up a model, populating it with data and simulating it for practical cases
- Includes a comprehensive list of sources of failure data and tips on modelling and simulating large, complex networks
- Presents the modelling and simulation of loss of containment for actual oil and gas equipment, such as separators, storage tanks, pipelines and compressors, together with the associated risk assessments
- Discusses case studies that demonstrate the practicality of using Bayesian Networks in routine risk assessments
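The Bayesian reasoning underlying such networks can be shown in miniature. The snippet below (a toy two-node network with invented numbers, not an example from the book) uses Bayes' rule to invert a cause-effect relationship: given a sensor alarm, how likely is the leak that could have caused it?

```python
# Toy two-node Bayesian network: Leak -> Alarm
p_leak = 0.01                  # prior probability of a gas leak
p_alarm_given_leak = 0.95      # sensor sensitivity
p_alarm_given_no_leak = 0.02   # false-alarm rate

# Total probability of the alarm sounding (marginalising over the leak node)
p_alarm = (p_alarm_given_leak * p_leak
           + p_alarm_given_no_leak * (1 - p_leak))

# Bayes' rule: posterior probability of a leak given an alarm
p_leak_given_alarm = p_alarm_given_leak * p_leak / p_alarm
print(round(p_leak_given_alarm, 3))  # prints 0.324
```

Even a highly sensitive sensor yields a modest posterior when the hazard is rare, which is exactly the kind of insight that motivates building full BNs for process equipment.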
To fully function in today's global real estate industry, students and professionals increasingly need to understand how to implement essential and cutting-edge quantitative techniques. This book presents an easy-to-read guide to applying quantitative analysis in real estate aimed at non-cognate undergraduate and masters students, and meets the requirements of modern professional practice. Through case studies and examples illustrating applications using data sourced from dedicated real estate information providers and major firms in the industry, the book provides an introduction to the foundations underlying statistical data analysis, common data manipulations and understanding descriptive statistics, before gradually building up to more advanced quantitative analysis, modelling and forecasting of real estate markets. Our examples and case studies within the chapters have been specifically compiled for this book and explicitly designed to help the reader acquire a better understanding of the quantitative methods addressed in each chapter. Our objective is to equip readers with the skills needed to confidently carry out their own quantitative analysis and be able to interpret empirical results from academic work and practitioner studies in the field of real estate and in other asset classes. Both undergraduate and masters level students, as well as real estate analysts in the professions, will find this book to be essential reading.
Machine learning (ML) is progressively reshaping the fields of quantitative finance and algorithmic trading. ML tools are increasingly adopted by hedge funds and asset managers, notably for alpha signal generation and stock selection. The technicality of the subject can make it hard for non-specialists to join the bandwagon, as the jargon and coding requirements may seem out of reach. Machine Learning for Factor Investing: R Version bridges this gap. It provides a comprehensive tour of modern ML-based investment strategies that rely on firm characteristics. The book covers a wide array of subjects which range from economic rationales to rigorous portfolio back-testing and encompass both data processing and model interpretability. Common supervised learning algorithms such as tree models and neural networks are explained in the context of style investing and the reader can also dig into more complex techniques like autoencoder asset returns, Bayesian additive trees, and causal models. All topics are illustrated with self-contained R code samples and snippets that are applied to a large public dataset that contains over 90 predictors. The material, along with the content of the book, is available online so that readers can reproduce and enhance the examples at their convenience. If you have even a basic knowledge of quantitative finance, this combination of theoretical concepts and practical illustrations will help you learn quickly and deepen your financial and technical expertise.
This proceedings volume presents the latest scientific research and trends in experimental economics, with particular focus on neuroeconomics. Derived from the 2016 Computational Methods in Experimental Economics (CMEE) conference held in Szczecin, Poland, this book features research and analysis of novel computational methods in neuroeconomics. Neuroeconomics is an interdisciplinary field that combines neuroscience, psychology and economics to build a comprehensive theory of decision making. At its core, neuroeconomics analyzes the decision-making process not only in terms of external conditions or psychological aspects, but also from the neuronal point of view by examining the cerebral conditions of decision making. The application of IT enhances the possibilities of conducting such analyses. Such studies are now performed by software that provides interaction among all the participants and possibilities to register their reactions more accurately. This book examines some of these applications and methods. Featuring contributions on both theory and application, this book is of interest to researchers, students, academics and professionals interested in experimental economics, neuroeconomics and behavioral economics.
Financial crises often transmit across geographical borders and different asset classes. Modeling these interactions is empirically challenging, and many of the proposed methods give different results when applied to the same data sets. In this book the authors set out their work on a general framework for modeling the transmission of financial crises using latent factor models. They show how their framework encompasses a number of other empirical contagion models and why the results between the models differ. The book builds a framework which begins from considering contagion in the bond markets during 1997-1998 across a number of countries, and culminates in a model which encompasses multiple assets across multiple countries through over a decade of crisis events from East Asia in 1997-1998 to the subprime crisis during 2008. Program code to support implementation of similar models is available.
This book addresses one of the most important research activities in empirical macroeconomics. It provides a course of advanced but intuitive methods and tools for the spatial and temporal disaggregation of basic macroeconomic variables and for assessing the statistical uncertainty of the outcomes of disaggregation. The empirical analysis focuses mainly on GDP and its growth in the context of Poland. However, all of the methods discussed can be easily applied to other countries. The approach used in the book views spatial and temporal disaggregation as a special case of the estimation of missing observations, a topic in missing-data analysis. The book presents an econometric treatment of Seemingly Unrelated Regression Equations (SURE) models. The main advantage of the SURE specification in tackling this research problem is that it allows for heterogeneity in the parameters describing the relations between macroeconomic indicators. The book contains the model specification, as well as descriptions of the stochastic assumptions and the resulting procedures of estimation and testing. The method also addresses uncertainty in the estimates produced. All of the necessary tests and assumptions are presented in detail. The results are designed to serve as a source of invaluable information, making regional analyses more convenient and, more importantly, comparable. They create a solid basis for conclusions and recommendations concerning regional economic policy in Poland, particularly regarding the assessment of the economic situation. This is essential reading for academics, researchers, and economists with regional analysis as their field of expertise, as well as central bankers and policymakers.
Computational finance is increasingly important in the financial industry, as a necessary instrument for applying theoretical models to real-world challenges. Indeed, many models used in practice involve complex mathematical problems, for which an exact or a closed-form solution is not available. Consequently, we need to rely on computational techniques and specific numerical algorithms. This book combines theoretical concepts with practical implementation. Furthermore, the numerical solution of models is exploited, both to enhance the understanding of some mathematical and statistical notions, and to acquire sound programming skills in MATLAB (R), skills that also transfer readily to several other programming languages. The material assumes the reader has a relatively limited knowledge of mathematics, probability, and statistics. Hence, the book contains a short description of the fundamental tools needed to address the two main fields of quantitative finance: portfolio selection and derivatives pricing. Both fields are developed here, with a particular emphasis on portfolio selection, where the author includes an overview of recent approaches. The book gradually takes the reader from a basic to medium level of expertise by using examples and exercises to simplify the understanding of complex models in finance, giving readers the ability to place financial models in a computational setting. The book is ideal for courses focusing on quantitative finance, asset management, mathematical methods for economics and finance, investment banking, and corporate finance.
Market Analysis for Real Estate is a comprehensive introduction to how real estate markets work and the analytical tools and techniques that can be used to identify and interpret market signals. The markets for space and varied property assets, including residential, office, retail, and industrial, are presented, analyzed, and integrated into a complete understanding of the role of real estate markets within the workings of contemporary urban economies. Unlike other books on market analysis, the economic and financial theory in this book is rigorous and well integrated with the specifics of the real estate market. Furthermore, it is thoroughly explained as it assumes no previous coursework in economics or finance on the part of the reader. The theoretical discussion is backed up with numerous real estate case study examples and problems, which are presented throughout the text to assist both student and teacher. Including discussion questions, exercises, several web links, and online slides, this textbook is suitable for use on a variety of degree programs in real estate, finance, business, planning, and economics at undergraduate and MSc/MBA level. It is also a useful primer for professionals in these disciplines.
Praise for the first edition: [This book] reflects the extensive experience and significant contributions of the author to non-linear and non-Gaussian modeling. ... [It] is a valuable book, especially with its broad and accessible introduction of models in the state-space framework. -Statistics in Medicine What distinguishes this book from comparable introductory texts is the use of state-space modeling. Along with this come a number of valuable tools for recursive filtering and smoothing, including the Kalman filter, as well as non-Gaussian and sequential Monte Carlo filters. -MAA Reviews Introduction to Time Series Modeling with Applications in R, Second Edition covers numerous stationary and nonstationary time series models and tools for estimating and utilizing them. The goal of this book is to enable readers to build their own models to understand, predict and master time series. The second edition makes it possible for readers to reproduce examples in this book by using the freely available R package TSSS to perform computations for their own real-world time series problems. This book employs the state-space model as a generic tool for time series modeling and presents the Kalman filter, the non-Gaussian filter and the particle filter as convenient tools for recursive estimation for state-space models. Further, it also takes a unified approach based on the entropy maximization principle and employs various methods of parameter estimation and model selection, including the least squares method, the maximum likelihood method, recursive estimation for state-space models and model selection by AIC. Along with the standard stationary time series models, such as the AR and ARMA models, the book also introduces nonstationary time series models such as the locally stationary AR model, the trend model, the seasonal adjustment model, the time-varying coefficient AR model and nonlinear non-Gaussian state-space models. 
About the Author: Genshiro Kitagawa is a project professor at the University of Tokyo, the former Director-General of the Institute of Statistical Mathematics, and the former President of the Research Organization of Information and Systems.
In this book, Andrew Harvey sets out to provide a unified and comprehensive theory of structural time series models. Unlike the traditional ARIMA models, structural time series models consist explicitly of unobserved components, such as trends and seasonals, which have a direct interpretation. As a result the model selection methodology associated with structural models is much closer to econometric methodology. The link with econometrics is made even closer by the natural way in which the models can be extended to include explanatory variables and to cope with multivariate time series. From the technical point of view, state space models and the Kalman filter play a key role in the statistical treatment of structural time series models. The book includes a detailed treatment of the Kalman filter. This technique was originally developed in control engineering, but is becoming increasingly important in fields such as economics and operations research. This book is concerned primarily with modelling economic and social time series, and with addressing the special problems which the treatment of such series poses. The properties of the models and the methodological techniques used to select them are illustrated with various applications. These range from the modelling of trends and cycles in US macroeconomic time series to an evaluation of the effects of seat belt legislation in the UK.
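The Kalman filter recursion at the heart of this treatment can be sketched in a few lines. Below is a minimal Python implementation for the simplest structural model, the local level (random walk plus noise); the data and variance values are invented for illustration and are not taken from the book:

```python
import numpy as np

def local_level_filter(y, sigma_eps2, sigma_eta2, m0=0.0, p0=1e7):
    """Kalman filter for the local-level model:
    y_t = mu_t + eps_t,  mu_t = mu_{t-1} + eta_t."""
    m, p = m0, p0          # state mean and variance (diffuse prior via large p0)
    filtered = []
    for obs in y:
        # Prediction step: random-walk state, so the mean carries over
        p = p + sigma_eta2
        # Update step: blend the prediction with the new observation
        k = p / (p + sigma_eps2)       # Kalman gain
        m = m + k * (obs - m)
        p = (1 - k) * p
        filtered.append(m)
    return np.array(filtered)

y = np.array([4.8, 5.2, 5.1, 4.9, 5.0])
level = local_level_filter(y, sigma_eps2=1.0, sigma_eta2=0.1)
```

With a large initial variance the first filtered value essentially equals the first observation, and subsequent estimates shrink towards the underlying level as evidence accumulates; full structural models add trend, seasonal, and regression components to the same recursion.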
This volume of Advances in Econometrics focuses on recent developments in the use of structural econometric models in empirical economics. The papers in this volume are divided into three broad groups. The first part looks at recent developments in the estimation of dynamic discrete choice models. These include new estimation methods based on Euler equations, estimation using sieve approximation of high-dimensional state spaces, the identification of Markov dynamic games with persistent unobserved state variables, and the development of tests of monotone comparative statics in models of multiple equilibria. The second part looks at recent advances in the area of empirical matching models. The papers in this section look at developing estimators for matching models based on stability conditions, estimating matching surplus functions using generalized entropy functions, and solving for the fixed point in the Choo-Siow matching model using a contraction mapping formulation. While the issue of incomplete, or partial, identification of model parameters is touched upon in some of the foregoing chapters, two chapters focus on this issue: one in the context of testing for monotone comparative statics in models with multiple equilibria, and one on the estimation of supermodular games under the restriction that players' strategies be rationalizable. The last group of three papers looks at empirical applications using structural econometric models. Two applications apply matching models: one addresses endogenous matching in the loan spread equation, and the other endogenizes marriage in the collective model of intrahousehold allocation. A third application examines the market power of condominium developers in the Japanese housing market in the 1990s.
Master key spreadsheet and business analytics skills with SPREADSHEET MODELING AND DECISION ANALYSIS: A PRACTICAL INTRODUCTION TO BUSINESS ANALYTICS, 9E, written by respected business analytics innovator Cliff Ragsdale. This edition's clear presentation, realistic examples, fascinating topics and valuable software provide everything you need to become proficient in today's most widely used business analytics techniques using the latest version of Excel (R) in Microsoft (R) Office 365 or Office 2019. Become skilled in the newest Excel functions as well as Analytic Solver (R) and Data Mining add-ins. This edition helps you develop both algebraic and spreadsheet modeling skills. Step-by-step instructions and annotated, full-color screen images make examples easy to follow and show you how to apply what you learn about descriptive, predictive and prescriptive analytics to real business situations. WebAssign online tools and author-created videos further strengthen understanding.
The contents of this volume comprise the proceedings of the International Symposia in Economic Theory and Econometrics conference held in 1987 at the IC² (Innovation, Creativity, and Capital) Institute at the University of Texas at Austin. The essays present fundamental new research on the analysis of complicated outcomes in relatively simple macroeconomic models. The book covers econometric modelling and time series analysis techniques in five parts. Part I focuses on sunspot equilibria, the study of uncertainty generated by nonstochastic economic models. Part II examines the more traditional examples of deterministic chaos: bubbles, instability, and hyperinflation. Part III contains the most current literature dealing with empirical tests for chaos and strange attractors. Part IV deals with chaos and informational complexity. Part V, Nonlinear Econometric Modelling, includes tests for and applications of nonlinearity.
Do economics and statistics succeed in explaining human social behaviour? To answer this question, Leland Gerson Neuberg studies some pioneering controlled social experiments. Starting in the late 1960s, economists and statisticians sought to improve social policy formation with random assignment experiments such as those that provided income guarantees in the form of a negative income tax. This book explores anomalies in the conceptual basis of such experiments and in the foundations of statistics and economics more generally. Scientific inquiry always faces certain philosophical problems. Controlled experiments of human social behaviour, however, cannot avoid some methodological difficulties not evident in physical science experiments. Drawing upon several examples, the author argues that methodological anomalies prevent microeconomics and statistics from explaining human social behaviour as coherently as the physical sciences explain nature. He concludes that controlled social experiments are a frequently overrated tool for social policy improvement.
This work examines theoretical issues, as well as practical developments, in statistical inference related to econometric models and analysis. It offers discussions of such areas as the function of statistics in aggregation, income inequality, poverty, health, spatial econometrics, panel and survey data, bootstrapping, and time series.
Models for Dependent Time Series addresses the issues that arise and the methodology that can be applied when the dependence between time series is described and modeled. Whether you work in the economic, physical, or life sciences, the book shows you how to draw meaningful, applicable, and statistically valid conclusions from multivariate (or vector) time series data. The first four chapters discuss the two main pillars of the subject that have been developed over the last 60 years: vector autoregressive modeling and multivariate spectral analysis. These chapters provide the foundational material for the remaining chapters, which cover the construction of structural models and the extension of vector autoregressive modeling to high frequency, continuously recorded, and irregularly sampled series. The final chapter combines these approaches with spectral methods for identifying causal dependence between time series. Web Resource: A supplementary website provides the data sets used in the examples as well as documented MATLAB (R) functions and other code for analyzing the examples and producing the illustrations. The site also offers technical details on the estimation theory and methods and the implementation of the models.
The state-space approach provides a formal framework where any result or procedure developed for a basic model can be seamlessly applied to a standard formulation written in state-space form. Moreover, it can accommodate, with reasonable effort, nonstandard situations such as observation errors, aggregation constraints, or missing in-sample values. Exploring the advantages of this approach, State-Space Methods for Time Series Analysis: Theory, Applications and Software presents many computational procedures that can be applied to a previously specified linear model in state-space form. After discussing the formulation of the state-space model, the book illustrates the flexibility of the state-space representation and covers the main state estimation algorithms: filtering and smoothing. It then shows how to compute the Gaussian likelihood for unknown coefficients in the state-space matrices of a given model before introducing subspace methods and their application. It also discusses signal extraction, describes two algorithms to obtain the VARMAX matrices corresponding to any linear state-space model, and addresses several issues relating to the aggregation and disaggregation of time series. The book concludes with a cross-sectional extension to the classical state-space formulation in order to accommodate longitudinal or panel data. Missing data is a common occurrence here, and the book explains the imputation procedures necessary to treat missingness in both exogenous and endogenous variables. Web Resource: The authors' E4 MATLAB (R) toolbox offers all the computational procedures, administrative and analytical functions, and related materials for time series analysis. This flexible, powerful, and free software tool enables readers to replicate the practical examples in the text and apply the procedures to their own work.
High-Performance Computing for Big Data: Methodologies and Applications explores emerging high-performance architectures for data-intensive applications, novel efficient analytical strategies to boost data processing, and cutting-edge applications in diverse fields, such as machine learning, life science, neural networks, and neuromorphic engineering. The book is organized into two main sections. The first section covers Big Data architectures, including cloud computing systems and heterogeneous accelerators. It also covers emerging 3D IC design principles for memory architectures and devices. The second section of the book illustrates emerging and practical applications of Big Data across several domains, including bioinformatics, deep learning, and neuromorphic engineering. Features:
- Covers a wide range of Big Data architectures, including distributed systems like Hadoop/Spark
- Includes accelerator-based approaches for big data applications, such as GPU-based acceleration techniques and hardware acceleration with FPGAs/CGRAs/ASICs
- Presents emerging memory architectures and devices such as NVM and STT-RAM, along with 3D IC design principles
- Describes advanced algorithms for different big data application domains
- Illustrates novel analytics techniques for Big Data applications, together with scheduling, mapping, and partitioning methodologies
Featuring contributions from leading experts, this book presents state-of-the-art research on the methodologies and applications of high-performance computing for big data applications.
About the Editor: Dr. Chao Wang is an Associate Professor in the School of Computer Science at the University of Science and Technology of China. He is an Associate Editor of ACM Transactions on Design Automation of Electronic Systems (TODAES), Applied Soft Computing, Microprocessors and Microsystems, IET Computers & Digital Techniques, and the International Journal of Electronics. Dr. Wang was the recipient of a Youth Innovation Promotion Association (CAS) award, an ACM China Rising Star Honorable Mention (2016), and a best IP nomination at DATE 2015. He serves on the CCF Technical Committee on Computer Architecture and the CCF Task Force on Formal Methods. He is a Senior Member of IEEE, CCF, and ACM.