In this book, Andrew Harvey sets out to provide a unified and comprehensive theory of structural time series models. Unlike the traditional ARIMA models, structural time series models consist explicitly of unobserved components, such as trends and seasonals, which have a direct interpretation. As a result, the model selection methodology associated with structural models is much closer to econometric methodology. The link with econometrics is made even closer by the natural way in which the models can be extended to include explanatory variables and to cope with multivariate time series. From the technical point of view, state space models and the Kalman filter play a key role in the statistical treatment of structural time series models. The book includes a detailed treatment of the Kalman filter. This technique was originally developed in control engineering, but is becoming increasingly important in fields such as economics and operations research. This book is concerned primarily with modelling economic and social time series, and with addressing the special problems which the treatment of such series poses. The properties of the models and the methodological techniques used to select them are illustrated with various applications. These range from the modelling of trends and cycles in US macroeconomic time series to an evaluation of the effects of seat belt legislation in the UK.
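The simplest structural time series model of the kind the blurb describes is the local level model: a random-walk trend observed with noise. A minimal Kalman filter sketch for it (illustrative Python, not code from the book; the variances, seed, and diffuse initialization are assumptions):

```python
import numpy as np

def local_level_filter(y, sigma_eps2, sigma_eta2, a0=0.0, p0=1e7):
    """Kalman filter for the local level model:
       y_t = mu_t + eps_t,   mu_t = mu_{t-1} + eta_t."""
    n = len(y)
    a, p = a0, p0                # predicted state mean and variance (diffuse start)
    filtered = np.empty(n)
    for t in range(n):
        f = p + sigma_eps2       # one-step-ahead prediction-error variance
        k = p / f                # Kalman gain
        v = y[t] - a             # prediction error
        a = a + k * v            # filtered state estimate
        p = p * (1 - k)          # filtered state variance
        filtered[t] = a
        p = p + sigma_eta2       # predict forward through the random walk
    return filtered

rng = np.random.default_rng(0)
mu = np.cumsum(rng.normal(0, 0.5, 200))   # unobserved random-walk trend
y = mu + rng.normal(0, 1.0, 200)          # noisy observations
est = local_level_filter(y, sigma_eps2=1.0, sigma_eta2=0.25)
```

With the correct variances, the filtered series tracks the unobserved trend more closely than the raw observations do, which is the sense in which the unobserved component has a direct interpretation.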
This volume of Advances in Econometrics focuses on recent developments in the use of structural econometric models in empirical economics. The papers in this volume are divided into three broad groups. The first part looks at recent developments in the estimation of dynamic discrete choice models, including new estimation methods based on Euler equations, estimation using sieve approximations of high-dimensional state spaces, the identification of Markov dynamic games with persistent unobserved state variables, and the development of tests of monotone comparative statics in models of multiple equilibria. The second part looks at recent advances in the area of empirical matching models. The papers in this section develop estimators for matching models based on stability conditions, estimate matching surplus functions using generalized entropy functions, and solve for the fixed point in the Choo-Siow matching model using a contraction mapping formulation. While the issue of incomplete, or partial, identification of model parameters is touched upon in some of the foregoing chapters, two chapters focus on this issue: in the context of testing for monotone comparative statics in models with multiple equilibria, and in the estimation of supermodular games under the restriction that players' strategies be rationalizable. The last group of three papers looks at empirical applications using structural econometric models. Two applications apply matching models, to address endogenous matching in the loan spread equation and to endogenize marriage in the collective model of intrahousehold allocation. Another application looks at the market power of condominium developers in the Japanese housing market in the 1990s.
Master key spreadsheet and business analytics skills with SPREADSHEET MODELING AND DECISION ANALYSIS: A PRACTICAL INTRODUCTION TO BUSINESS ANALYTICS, 9E, written by respected business analytics innovator Cliff Ragsdale. This edition's clear presentation, realistic examples, fascinating topics and valuable software provide everything you need to become proficient in today's most widely used business analytics techniques using the latest version of Excel (R) in Microsoft (R) Office 365 or Office 2019. Become skilled in the newest Excel functions as well as Analytic Solver (R) and Data Mining add-ins. This edition helps you develop both algebraic and spreadsheet modeling skills. Step-by-step instructions and annotated, full-color screen images make examples easy to follow and show you how to apply what you learn about descriptive, predictive and prescriptive analytics to real business situations. WebAssign online tools and author-created videos further strengthen understanding.
The contents of this volume comprise the proceedings of the International Symposia in Economic Theory and Econometrics conference held in 1987 at the IC² (Innovation, Creativity, and Capital) Institute at the University of Texas at Austin. The essays present fundamental new research on the analysis of complicated outcomes in relatively simple macroeconomic models. The book covers econometric modelling and time series analysis techniques in five parts. Part I focuses on sunspot equilibria, the study of uncertainty generated by nonstochastic economic models. Part II examines the more traditional examples of deterministic chaos: bubbles, instability, and hyperinflation. Part III contains the most current literature dealing with empirical tests for chaos and strange attractors. Part IV deals with chaos and informational complexity. Part V, Nonlinear Econometric Modelling, includes tests for and applications of nonlinearity.
Complex-Valued Modeling in Economics and Finance outlines the theory, methodology, and techniques behind modeling economic processes using complex variables theory. The theory of functions of complex variables is widely used in many scientific fields, since work with complex variables can appropriately describe different complex real-life processes. Many economic indicators and factors reflecting the properties of the same object can be represented in the form of complex variables. By describing the relationship between various indicators using the functions of these variables, new economic and financial models can be created which are often more accurate than the models of real variables. This book pays critical attention to complex-valued production functions, stock market modeling, modeling the illegal economy, time series forecasting, complex autoregressive models, and economic dynamics modeling. Very little has been published on this topic and its applications within the fields of economics and finance, and this volume appeals to graduate-level students studying economics, academic researchers in economics and finance, and economists.
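To give a flavor of the general idea, ordinary least squares carries over directly to complex-valued variables. The sketch below is illustrative only (not code from the book; the coefficient values and noise level are made up) and uses NumPy, whose `lstsq` supports complex dtypes:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
# complex regressor and complex coefficients (illustrative values)
x = rng.normal(size=n) + 1j * rng.normal(size=n)
a_true, b_true = 2 - 1j, 0.5 + 3j
noise = 0.1 * (rng.normal(size=n) + 1j * rng.normal(size=n))
y = a_true * x + b_true + noise

# complex OLS: regress y on x and a constant
X = np.column_stack([x, np.ones(n, dtype=complex)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
a_hat, b_hat = coef
```

A single complex coefficient thus links two real indicators (real and imaginary parts) in one equation, which is the economy of notation the blurb alludes to.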
Do economics and statistics succeed in explaining human social behaviour? To answer this question, Leland Gerson Neuberg studies some pioneering controlled social experiments. Starting in the late 1960s, economists and statisticians sought to improve social policy formation with random assignment experiments such as those that provided income guarantees in the form of a negative income tax. This book explores anomalies in the conceptual basis of such experiments and in the foundations of statistics and economics more generally. Scientific inquiry always faces certain philosophical problems. Controlled experiments of human social behaviour, however, cannot avoid some methodological difficulties not evident in physical science experiments. Drawing upon several examples, the author argues that methodological anomalies prevent microeconomics and statistics from explaining human social behaviour as coherently as the physical sciences explain nature. He concludes that controlled social experiments are a frequently overrated tool for social policy improvement.
This work examines theoretical issues, as well as practical developments, in statistical inference related to econometric models and analysis. It offers discussions of such areas as the function of statistics in aggregation, income inequality, poverty, health, spatial econometrics, panel and survey data, bootstrapping, and time series.
Models for Dependent Time Series addresses the issues that arise and the methodology that can be applied when the dependence between time series is described and modeled. Whether you work in the economic, physical, or life sciences, the book shows you how to draw meaningful, applicable, and statistically valid conclusions from multivariate (or vector) time series data. The first four chapters discuss the two main pillars of the subject that have been developed over the last 60 years: vector autoregressive modeling and multivariate spectral analysis. These chapters provide the foundational material for the remaining chapters, which cover the construction of structural models and the extension of vector autoregressive modeling to high frequency, continuously recorded, and irregularly sampled series. The final chapter combines these approaches with spectral methods for identifying causal dependence between time series. Web Resource: A supplementary website provides the data sets used in the examples as well as documented MATLAB (R) functions and other code for analyzing the examples and producing the illustrations. The site also offers technical details on the estimation theory and methods and the implementation of the models.
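The first of the two pillars mentioned, vector autoregressive modeling, reduces to equation-by-equation least squares in its basic form. A minimal VAR(1) sketch (illustrative Python rather than the book's MATLAB; the coefficient matrix, noise level, and seed are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
A = np.array([[0.5, 0.1],
              [0.2, 0.3]])              # stable VAR(1) coefficient matrix
T = 1000
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.normal(0, 0.5, 2)   # y_t = A y_{t-1} + noise

# OLS: regress y_t on y_{t-1}; lstsq solves Y ~ X B, so A_hat = B'
Y, X = y[1:], y[:-1]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
```

The off-diagonal entries of the estimated matrix are what capture dependence between the component series, which is the core subject of the book.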
The state-space approach provides a formal framework where any result or procedure developed for a basic model can be seamlessly applied to a standard formulation written in state-space form. Moreover, it can accommodate, with reasonable effort, nonstandard situations such as observation errors, aggregation constraints, or missing in-sample values. Exploring the advantages of this approach, State-Space Methods for Time Series Analysis: Theory, Applications and Software presents many computational procedures that can be applied to a previously specified linear model in state-space form. After discussing the formulation of the state-space model, the book illustrates the flexibility of the state-space representation and covers the main state estimation algorithms: filtering and smoothing. It then shows how to compute the Gaussian likelihood for unknown coefficients in the state-space matrices of a given model before introducing subspace methods and their application. It also discusses signal extraction, describes two algorithms to obtain the VARMAX matrices corresponding to any linear state-space model, and addresses several issues relating to the aggregation and disaggregation of time series. The book concludes with a cross-sectional extension to the classical state-space formulation in order to accommodate longitudinal or panel data. Missing data is a common occurrence here, and the book explains imputation procedures necessary to treat missingness in both exogenous and endogenous variables. Web Resource: The authors' E4 MATLAB (R) toolbox offers all the computational procedures, administrative and analytical functions, and related materials for time series analysis. This flexible, powerful, and free software tool enables readers to replicate the practical examples in the text and apply the procedures to their own work.
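The Gaussian likelihood computation mentioned above follows from the prediction-error decomposition: the Kalman filter delivers, at each step, a prediction error and its variance, whose Gaussian densities multiply to give the likelihood. A sketch for the local level special case (illustrative Python, not the authors' E4 toolbox; variances, seed, and diffuse initialization are assumptions):

```python
import numpy as np

def local_level_loglik(y, sigma_eps2, sigma_eta2, a0=0.0, p0=1e7):
    """Gaussian log-likelihood of the local level model via the
    prediction-error decomposition of the Kalman filter."""
    a, p, ll = a0, p0, 0.0
    for yt in y:
        f = p + sigma_eps2                         # prediction-error variance
        v = yt - a                                 # prediction error
        ll += -0.5 * (np.log(2 * np.pi * f) + v * v / f)
        k = p / f                                  # Kalman gain
        a, p = a + k * v, p * (1 - k) + sigma_eta2  # update and predict
    return ll

rng = np.random.default_rng(3)
y = np.cumsum(rng.normal(0, 0.5, 300)) + rng.normal(0, 1.0, 300)
ll_true = local_level_loglik(y, 1.0, 0.25)   # near the true variances
ll_bad = local_level_loglik(y, 1.0, 25.0)    # grossly wrong signal variance
```

Maximizing this function over the unknown variances is exactly the "Gaussian likelihood for unknown coefficients in the state-space matrices" step the blurb describes.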
High-Performance Computing for Big Data: Methodologies and Applications explores emerging high-performance architectures for data-intensive applications, novel efficient analytical strategies to boost data processing, and cutting-edge applications in diverse fields, such as machine learning, life science, neural networks, and neuromorphic engineering. The book is organized into two main sections. The first section covers Big Data architectures, including cloud computing systems and heterogeneous accelerators. It also covers emerging 3D IC design principles for memory architectures and devices. The second section of the book illustrates emerging and practical applications of Big Data across several domains, including bioinformatics, deep learning, and neuromorphic engineering. Features: *Covers a wide range of Big Data architectures, including distributed systems like Hadoop/Spark; *Includes accelerator-based approaches for big data applications, such as GPU-based acceleration techniques, and hardware acceleration such as FPGA/CGRA/ASICs; *Presents emerging memory architectures and devices such as NVM and STT-RAM, and 3D IC design principles; *Describes advanced algorithms for different big data application domains; *Illustrates novel analytics techniques for Big Data applications, scheduling, mapping, and partitioning methodologies. Featuring contributions from leading experts, this book presents state-of-the-art research on the methodologies and applications of high-performance computing for big data applications. About the Editor: Dr. Chao Wang is an Associate Professor in the School of Computer Science at the University of Science and Technology of China. He is an Associate Editor of ACM Transactions on Design Automation of Electronic Systems (TODAES), Applied Soft Computing, Microprocessors and Microsystems, IET Computers & Digital Techniques, and the International Journal of Electronics. Dr. Chao Wang was the recipient of the Youth Innovation Promotion Association, CAS, an ACM China Rising Star Honorable Mention (2016), and a best IP nomination at DATE 2015. He serves on the CCF Technical Committee on Computer Architecture and the CCF Task Force on Formal Methods. He is a Senior Member of IEEE, a Senior Member of CCF, and a Senior Member of ACM.
This book presents recent developments on the theoretical, algorithmic, and application aspects of Big Data in Complex and Social Networks. The book consists of four parts, covering a wide range of topics. The first part of the book focuses on data storage and data processing. It explores how the efficient storage of data can fundamentally support intensive data access and queries, which enables sophisticated analysis. It also looks at how data processing and visualization help to communicate information clearly and efficiently. The second part of the book is devoted to the extraction of essential information and the prediction of web content. The book shows how Big Data analysis can be used to understand the interests, location, and search history of users and provide more accurate predictions of User Behavior. The latter two parts of the book cover the protection of privacy and security, and emergent applications of big data and social networks. It analyzes how to model rumor diffusion, identify misinformation from massive data, and design intervention strategies. Applications of big data and social networks in multilayer networks and multiparty systems are also covered in-depth.
Emphasizing the impact of computer software and computational technology on econometric theory and development, this text presents recent advances in the application of computerized tools to econometric techniques and practices, focusing on current innovations in Monte Carlo simulation, computer-aided testing, model selection, and Bayesian methodology for improved econometric analyses.
Model a Wide Range of Count Time Series: Handbook of Discrete-Valued Time Series presents state-of-the-art methods for modeling time series of counts and incorporates frequentist and Bayesian approaches for discrete-valued spatio-temporal data and multivariate data. While the book focuses on time series of counts, some of the techniques discussed can be applied to other types of discrete-valued time series, such as binary-valued or categorical time series. Explore a Balanced Treatment of Frequentist and Bayesian Perspectives: Accessible to graduate-level students who have taken an elementary class in statistical time series analysis, the book begins with the history and current methods for modeling and analyzing univariate count series. It next discusses diagnostics and applications before proceeding to binary and categorical time series. The book then provides a guide to modern methods for discrete-valued spatio-temporal data, illustrating how far modern applications have evolved from their roots. The book ends with a focus on multivariate and long-memory count series. Get Guidance from Masters in the Field: Written by a cohesive group of distinguished contributors, this handbook provides a unified account of the diverse techniques available for observation- and parameter-driven models. It covers likelihood and approximate likelihood methods, estimating equations, simulation methods, and a Bayesian approach for model fitting.
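A classic count-series model of the kind covered in such handbooks is the INAR(1) process, where counts evolve by binomial thinning plus Poisson arrivals; its lag-1 autocorrelation equals the thinning parameter, giving simple moment estimators. A sketch (illustrative only; the parameter values and seed are assumptions, and this is not code from the handbook):

```python
import numpy as np

rng = np.random.default_rng(4)
alpha, lam, T = 0.5, 2.0, 5000
y = np.empty(T, dtype=int)
y[0] = rng.poisson(lam / (1 - alpha))           # start near the stationary mean
for t in range(1, T):
    survivors = rng.binomial(y[t - 1], alpha)   # binomial thinning of last count
    y[t] = survivors + rng.poisson(lam)         # plus new Poisson arrivals

# moment estimators: lag-1 autocorrelation -> alpha, mean*(1-alpha) -> lambda
alpha_hat = np.corrcoef(y[:-1], y[1:])[0, 1]
lam_hat = y.mean() * (1 - alpha_hat)
```

The thinning operation keeps the series integer-valued, which is precisely why ordinary Gaussian ARMA machinery does not apply directly to counts.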
Financial, Macro and Micro Econometrics Using R, Volume 42, provides state-of-the-art information on important topics in econometrics, including multivariate GARCH, stochastic frontiers, fractional responses, specification testing and model selection, exogeneity testing, causal analysis and forecasting, GMM models, asset bubbles and crises, corporate investments, classification, forecasting, nonstandard problems, cointegration, financial market jumps and co-jumps, among other topics.
This is an essential how-to guide on the application of structural equation modeling (SEM) techniques with the AMOS software, focusing on the practical applications of both simple and advanced topics. Written in an easy-to-understand conversational style, the book covers everything from data collection and screening to confirmatory factor analysis, structural model analysis, mediation, moderation, and more advanced topics such as mixture modeling, censored data, and non-recursive models. Through step-by-step instructions, screenshots, and suggested guidelines for reporting, Collier cuts through abstract definitional perspectives to give insight on how to actually run analysis. Unlike other SEM books, the examples used will often start in SPSS and then transition to AMOS so that the reader can have full confidence in running the analysis from beginning to end. Best practices are also included on topics like how to determine if your SEM model is formative or reflective, making it not just an explanation of SEM topics, but a guide for researchers on how to develop a strong methodology while studying their respective phenomenon of interest. With a focus on practical applications of both basic and advanced topics, and with detailed worked-through examples throughout, this book is ideal for experienced researchers and beginners across the behavioral and social sciences.
This book covers recent advances in efficiency evaluations, most notably Data Envelopment Analysis (DEA) and Stochastic Frontier Analysis (SFA) methods. It introduces the underlying theories, shows how to make the relevant calculations and discusses applications. The aim is to make the reader aware of the pros and cons of the different methods and to show how to use these methods in both standard and non-standard cases. Several software packages have been developed to solve some of the most common DEA and SFA models. This book relies on R, a free, open source software environment for statistical computing and graphics. This enables the reader to solve not only standard problems, but also many other problem variants. Using R, one can focus on understanding the context and developing a good model. One is not restricted to predefined model variants and to a one-size-fits-all approach. To facilitate the use of R, the authors have developed an R package called Benchmarking, which implements the main methods within both DEA and SFA. The book uses mathematical formulations of models and assumptions, but it de-emphasizes formal proofs, in part by placing them in appendices or by referring to the original sources. Moreover, the book emphasizes the usage of the theories and the interpretations of the mathematical formulations. It includes a series of small examples, graphical illustrations, simple extensions and questions to think about. Also, it combines the formal models with less formal economic and organizational thinking. Last but not least it discusses some larger applications with significant practical impacts, including the design of benchmarking-based regulations of energy companies in different European countries, and the development of merger control programs for competition authorities.
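The basic DEA model is just a linear program per decision-making unit (DMU). The book works in R with its Benchmarking package; purely as an illustration of the same idea, here is an input-oriented constant-returns (CCR) sketch in Python using SciPy's linear programming routine (the data are made up; this is not the book's code):

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency: for each DMU o, minimize theta
    subject to X'lambda <= theta*x_o, Y'lambda >= y_o, lambda >= 0.
    X: (n, m) inputs, Y: (n, s) outputs for n DMUs."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # objective: minimize theta
    scores = []
    for o in range(n):
        # input rows:  sum_j lambda_j x_ij - theta x_io <= 0
        A_in = np.hstack([-X[o:o + 1].T, X.T])
        # output rows: -sum_j lambda_j y_rj <= -y_ro
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.concatenate([np.zeros(m), -Y[o]])
        bounds = [(None, None)] + [(0, None)] * n   # theta free, lambda >= 0
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        scores.append(res.x[0])
    return np.array(scores)

X = np.array([[2.0], [4.0], [8.0]])   # one input per DMU
Y = np.array([[1.0], [2.0], [2.0]])   # one output per DMU
scores = dea_ccr_input(X, Y)          # third DMU is inefficient
```

With one input and one output, the CCR score reduces to each DMU's output/input ratio relative to the best ratio, so the third DMU scores 0.5.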
This book brings together presentations of some of the fundamental new research that has begun to appear in the areas of dynamic structural modeling, nonlinear structural modeling, time series modeling, nonparametric inference, and chaotic attractor inference. The contents of this volume comprise the proceedings of the third conference in the series International Symposia in Economic Theory and Econometrics. This conference was held at the IC² (Innovation, Creativity and Capital) Institute at the University of Texas at Austin on May 22-23, 1986.
This volume is dedicated to two recent intensive areas of research in the econometrics of panel data, namely nonstationary panels and dynamic panels. It includes a comprehensive survey of the nonstationary panel literature, including panel unit root tests, spurious panel regressions, and panel cointegration.
An understanding of the behaviour of financial assets and the evolution of economies has never been as important as today. This book looks at these complex systems from the perspective of the physicist. So-called 'econophysics' and its application to finance has made great strides in recent years. Less emphasis has been placed on the broader subject of macroeconomics, and many economics students are still taught traditional neo-classical economics. The reader is given a general primer in statistical physics, probability theory, and the use of correlation functions. Much of the mathematics that is developed is frequently no longer included in undergraduate physics courses. The statistical physics of Boltzmann and Gibbs is one of the oldest disciplines within physics, and it can be argued that it was first applied to ensembles of molecules as opposed to social agents only by way of historical accident. The authors argue by analogy that the theory can be applied directly to economic systems comprising assemblies of interacting agents. The necessary tools and mathematics are developed in a clear and concise manner. The body of work, now termed econophysics, is then developed. The authors show where traditional methods break down and show how the probability distributions and correlation functions can be properly understood using high frequency data. Recent work by the physics community on risk and market crashes is discussed, together with new work on betting markets as well as studies of speculative peaks that occur in housing markets. The second half of the book continues the empirical approach, showing how, by analogy with thermodynamics, a self-consistent attack can be made on macroeconomics. This leads naturally to economic production functions being equated to entropy functions, a new concept for economists. Issues relating to non-equilibrium naturally arise during the development and application of this approach to economics.
These are discussed in the context of superstatistics and adiabatic processes. As a result it does seem ultimately possible to reconcile the approach with non-equilibrium systems, and the ideas are applied to study income and wealth distributions, which with their power law distribution functions have puzzled many researchers ever since Pareto discovered them over 100 years ago. This book takes a pedagogical approach to these topics and is aimed at final year undergraduate and beginning graduate or post-graduate students in physics, economics, and business. However, the experienced researcher and quant should also find much of interest.
This book aims to bring together studies using different data types (panel data, cross-sectional data, and time series data) and different methods (for example, panel regression, nonlinear time series, the chaos approach, deep learning, and machine learning techniques, among others), and to create a source for those interested in these topics and methods by addressing selected applied econometrics topics that have been developed in recent years. It also provides a common meeting ground for scientists who teach econometrics in Turkey and helps bring the authors' knowledge to interested readers. This book can also be useful as source material for "Applied Economics and Econometrics" courses in postgraduate education.
The book aims at perfecting the national governance system and improving national governance ability. It evaluates the balance sheets of the state and residents, non-financial corporations, financial institutions and the central bank, the central government, local government and external sectors - the goal being to provide a systematic analysis of the characteristics and trajectory of China's economic expansion and structural adjustment, as well as objective assessments of short and long-term economic operations, debt risks and financial risks with regard to the institutional and structural characteristics of economic development in market-oriented reform. It puts forward a preliminary analysis of China's national and sectoral balance sheets on the basis of scientific estimates of various kinds of data, analyzes from a new perspective the major issues that are currently troubling China - development sustainability, government transformation, local government debt, welfare reform, and the financial opening-up and stability - and explores corresponding policies, measures, and institutional arrangements.
Quants, physicists working on Wall Street as quantitative analysts, have been widely blamed for triggering financial crises with their complex mathematical models. Their formulas were meant to allow Wall Street to prosper without risk. But in this penetrating insider's look at the recent economic collapse, Emanuel Derman, former head quant at Goldman Sachs, explains the collision between mathematical modeling and economics and what makes financial models so dangerous. Though such models imitate the style of physics and employ the language of mathematics, theories in physics aim for a description of reality, but in finance, models can shoot only for a very limited approximation of reality. Derman uses his firsthand experience in financial theory and practice to explain the complicated tangles that have paralyzed the economy. Models.Behaving.Badly. exposes Wall Street's love affair with models, and shows us why nobody will ever be able to write a model that can encapsulate human behavior.
The Handbook of U.S. Labor Statistics is recognized as an authoritative resource on the U.S. labor force. It continues and enhances the Bureau of Labor Statistics' (BLS) discontinued publication, Labor Statistics. It allows the user to understand recent developments as well as to compare today's economy with past history. This edition includes new tables on occupational safety and health and income in the United States. The Handbook is a comprehensive reference providing an abundance of data on a variety of topics including: *Employment and unemployment; *Earnings; *Prices; *Productivity; *Consumer expenditures; *Occupational safety and health; *Union membership; *Working poor; *And much more! Features of the publication: In addition to over 215 tables that present practical data, the Handbook provides: *Introductory material for each chapter that contains highlights of salient data and figures that call attention to noteworthy trends in the data; *Notes and definitions, which contain concise descriptions of the data sources, concepts, definitions, and methodology from which the data are derived; *References to more comprehensive reports which provide additional data and more extensive descriptions of estimation methods, sampling, and reliability measures.
Over the last decade, dynamical systems theory and related nonlinear methods have had a major impact on the analysis of time series data from complex systems. Recent developments in mathematical methods of state-space reconstruction, time-delay embedding, and surrogate data analysis, coupled with readily accessible and powerful computational facilities used in gathering and processing massive quantities of high-frequency data, have provided theorists and practitioners unparalleled opportunities for exploratory data analysis, modelling, forecasting, and control.
You may like...
Introduction to Computational Economics… - Hans Fehr, Fabian Kindermann (Hardcover, R4,258)
Operations And Supply Chain Management - David Collier, James Evans (Hardcover)
The Handbook of Historical Economics - Alberto Bisin, Giovanni Federico (Paperback, R2,567)
Pricing Decisions in the Euro Area - How… - Silvia Fabiani, Claire Loupias, … (Hardcover, R2,160)
Introductory Econometrics - A Modern… - Jeffrey Wooldridge (Hardcover)
Operations and Supply Chain Management - James Evans, David Collier (Hardcover)