Co-integration, equilibrium and equilibrium correction are key concepts in modern applications of econometrics to real-world problems. This book provides direction and guidance through the now vast literature facing students and graduate economists. Econometric theory is linked to practical issues such as how to identify equilibrium relationships, how to deal with structural breaks associated with regime changes, and what to do when variables are of different orders of integration.
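As a quick illustration of the equilibrium-correction idea referred to above (a standard textbook formulation, not quoted from this book), a single-equation error-correction model for cointegrated variables y_t and x_t can be written as
\[ \Delta y_t = \gamma\,\Delta x_t + \alpha\,(y_{t-1} - \beta x_{t-1}) + \varepsilon_t , \]
where y_{t-1} - \beta x_{t-1} is the equilibrium (cointegrating) relationship and a negative \alpha pulls deviations from equilibrium back toward it.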
Data Stewardship for Open Science: Implementing FAIR Principles has been written with the intention of making scientists, funders, and innovators in all disciplines and stages of their professional activities broadly aware of the need, complexity, and challenges associated with open science, modern science communication, and data stewardship. The FAIR principles are used as a guide throughout the text, and this book should leave experimentalists consciously incompetent about data stewardship and motivated to respect data stewards as representatives of a new profession, while possibly motivating others to consider a career in the field. The ebook, available at no additional cost when you buy the paperback, will be updated every 6 months on average (provided that significant updates are needed or available). Readers will have the opportunity to contribute material towards these updates, and to develop their own data management plans, via the free Data Stewardship Wizard.
Time Series: A First Course with Bootstrap Starter provides an introductory course on time series analysis that satisfies the triptych of (i) mathematical completeness, (ii) computational illustration and implementation, and (iii) conciseness and accessibility to upper-level undergraduate and M.S. students. Basic theoretical results are presented in a mathematically convincing way, and the methods of data analysis are developed through examples and exercises worked in R. A student with a basic course in mathematical statistics will learn both how to analyze time series and how to interpret the results. The book provides the foundation of time series methods, including linear filters and a geometric approach to prediction. The important paradigm of ARMA models is studied in depth, as are frequency domain methods. Entropy and other information theoretic notions are introduced, with applications to time series modeling. The second half of the book focuses on statistical inference and the fitting of time series models, as well as computational facets of forecasting. Many time series of interest are nonlinear, in which case classical inference methods can fail, but bootstrap methods may come to the rescue. Distinctive features of the book are the emphasis on geometric notions and the frequency domain, the discussion of entropy maximization, and a thorough treatment of recent computer-intensive methods for time series such as subsampling and the bootstrap. There are more than 600 exercises, half of which involve R coding and/or data analysis. Supplements include a website with 12 key data sets and all R code for the book's examples, as well as the solutions to exercises.
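The bootstrap methods for time series mentioned above can be made concrete with a minimal sketch of the moving block bootstrap (our own illustration in Python, assuming numpy; the function name and defaults are ours, not the book's, and the series is assumed to be at least one block long):

import numpy as np

def moving_block_bootstrap(x, block_len=20, n_boot=500, rng=None):
    # Resample a series by concatenating randomly chosen overlapping blocks,
    # preserving short-range dependence within each block.
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    n = len(x)
    n_blocks = -(-n // block_len)                      # ceiling of n / block_len
    starts = rng.integers(0, n - block_len + 1, size=(n_boot, n_blocks))
    return np.stack([np.concatenate([x[s:s + block_len] for s in row])[:n]
                     for row in starts])

# Example: moving_block_bootstrap(series).mean(axis=1) gives a bootstrap
# distribution for the sample mean of `series` that respects serial dependence.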
Introduction to Statistical Decision Theory: Utility Theory and Causal Analysis provides the theoretical background to approach decision theory from a statistical perspective. It covers both traditional approaches, in terms of value theory and expected utility theory, and recent developments, in terms of causal inference. The book is specifically designed to appeal to students and researchers who intend to acquire a knowledge of statistical science based on decision theory. Features:
* Covers approaches for making decisions under certainty, risk, and uncertainty
* Illustrates expected utility theory and its extensions
* Describes approaches to elicit the utility function
* Reviews classical and Bayesian approaches to statistical inference based on decision theory
* Discusses the role of causal analysis in statistical decision theory
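As a one-line reminder of the expected-utility criterion the book builds on (standard notation, not drawn from the text itself), a decision a* is chosen to maximize expected utility under the decision maker's beliefs p(\theta) about the unknown state \theta:
\[ a^{*} = \arg\max_{a}\; \mathbb{E}_{\theta}\,[u(a,\theta)] = \arg\max_{a} \int u(a,\theta)\,p(\theta)\,d\theta . \]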
The advent of "Big Data" has brought with it a rapid diversification of data sources, requiring analysis that accounts for the fact that these data have often been generated and recorded for different reasons. Data integration involves combining data residing in different sources to enable statistical inference, or to generate new statistical data for purposes that cannot be served by each source on its own. This can yield significant gains for scientific as well as commercial investigations. However, valid analysis of such data should allow for the additional uncertainty due to entity ambiguity, whenever it is not possible to state with certainty that the integrated source is the target population of interest. Analysis of Integrated Data aims to provide a solid theoretical basis for this statistical analysis in three generic settings of entity ambiguity: statistical analysis of linked datasets that may contain linkage errors; datasets created by a data fusion process, where joint statistical information is simulated using the information in marginal data from non-overlapping sources; and estimation of target population size when target units are either partially or erroneously covered in each source. The book:
* Covers a range of topics under an overarching perspective of data integration
* Focuses on statistical uncertainty and inference issues arising from entity ambiguity
* Features state-of-the-art methods for analysis of integrated data
* Identifies the important themes that will define future research and teaching in the statistical analysis of integrated data
Analysis of Integrated Data is aimed primarily at researchers and methodologists interested in statistical methods for data from multiple sources, with a focus on data analysts in the social sciences, and in the public and private sectors.
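One concrete instance of the population-size problem described above is dual-system (capture-recapture) estimation; in its simplest textbook form (our illustration, not the book's notation), with n_1 and n_2 units observed in two sources and m units matched in both, the target population size is estimated as
\[ \hat{N} = \frac{n_1\, n_2}{m} , \]
which assumes independent coverage of the two sources and perfect matching; these are exactly the kinds of assumptions that linkage error and erroneous coverage, as discussed in the blurb, complicate.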
The beginning of the age of artificial intelligence and machine learning has created new challenges and opportunities for data analysts, statisticians, mathematicians, econometricians, computer scientists and many others. At the root of these techniques are algorithms and methods for clustering and classifying different types of large datasets, including time series data. Time Series Clustering and Classification includes relevant developments on observation-based, feature-based and model-based traditional and fuzzy clustering methods, feature-based and model-based classification methods, and machine learning methods. It presents a broad and self-contained overview of techniques for both researchers and students. Features:
* Provides an overview of the methods and applications of pattern recognition of time series
* Covers a wide range of techniques, including unsupervised and supervised approaches
* Includes a range of real examples from medicine, finance, environmental science, and more
* R and MATLAB code, and relevant data sets, are available on a supplementary website
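To make the feature-based clustering idea concrete, here is a minimal sketch (our own illustration in Python, assuming numpy and scikit-learn; it is not code from the book's website): each series is summarized by its first few sample autocorrelations, and the resulting feature vectors are clustered with k-means.

import numpy as np
from sklearn.cluster import KMeans

def acf_features(x, n_lags=10):
    # Summarize a series by its first n_lags sample autocorrelations.
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = float(np.dot(x, x))
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, n_lags + 1)])

def cluster_series(series, n_clusters=3, n_lags=10):
    # series: iterable of 1-D arrays (lengths may differ); returns one cluster label per series.
    feats = np.vstack([acf_features(s, n_lags) for s in series])
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(feats)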
The composition of portfolios is one of the most fundamental and important methods in financial engineering, used to control the risk of investments. This book provides a comprehensive overview of statistical inference for portfolios and their various applications. A variety of asset processes are introduced, including non-Gaussian stationary processes, nonlinear processes, and non-stationary processes, and the book provides a framework for statistical inference using local asymptotic normality (LAN). The approach is generalized for portfolio estimation, so that many important problems can be covered. This book can primarily be used as a reference by researchers from statistics, mathematics, finance, econometrics, and genomics. It can also be used as a textbook by senior undergraduate and graduate students in these fields.
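For orientation, the classical starting point that this kind of inference refines is the global minimum-variance portfolio (a standard result, not specific to this book): with covariance matrix \Sigma of asset returns and a vector of ones \mathbf{1},
\[ w_{\min} = \frac{\Sigma^{-1}\mathbf{1}}{\mathbf{1}^{\top}\Sigma^{-1}\mathbf{1}} , \]
so the statistical problem is essentially one of estimating functionals of \Sigma (and of the mean vector, for mean-variance portfolios) under realistic, possibly non-Gaussian and dependent, return processes.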
Principles of Copula Theory explores the state of the art on copulas and provides you with the foundation to use copulas in a variety of applications. Throughout the book, historical remarks and further readings highlight active research in the field, including new results, streamlined presentations, and new proofs of old results. After covering the essentials of copula theory, the book addresses the issue of modeling dependence among components of a random vector using copulas. It then presents copulas from the point of view of measure theory, compares methods for the approximation of copulas, and discusses the Markov product for 2-copulas. The authors also examine selected families of copulas that possess appealing features from both theoretical and applied viewpoints. The book concludes with in-depth discussions on two generalizations of copulas: quasi- and semi-copulas. Although copulas are not the solution to all stochastic problems, they are an indispensable tool for understanding several problems about stochastic dependence. This book gives you the solid and formal mathematical background to apply copulas to a range of mathematical areas, such as probability, real analysis, measure theory, and algebraic structures.
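As a one-line reminder of what a copula does (Sklar's theorem, standard background rather than a quotation from this book): any joint distribution function H of a pair (X, Y) with marginals F and G can be written as
\[ H(x,y) = C\big(F(x),\,G(y)\big) \]
for some copula C, which is unique when the marginals are continuous; the copula C thus isolates the dependence structure from the marginal behaviour.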
Winner of the 2017 De Groot Prize awarded by the International Society for Bayesian Analysis (ISBA). A relatively new area of research, adversarial risk analysis (ARA) informs decision making when there are intelligent opponents and uncertain outcomes. Adversarial Risk Analysis develops methods for allocating defensive or offensive resources against intelligent adversaries. Many examples throughout illustrate the application of the ARA approach to a variety of games and strategic situations. The book:
* Focuses on ARA, a recent subfield of decision analysis
* Compares ideas from decision theory and game theory
* Uses multi-agent influence diagrams (MAIDs) throughout to help readers visualize complex information structures
* Applies the ARA approach to simultaneous games, auctions, sequential games, and defend-attack games
* Contains an extended case study based on a real application in railway security, which provides a blueprint for how to perform ARA in similar security situations
* Includes exercises at the end of most chapters, with selected solutions at the back of the book
The book shows decision makers how to build Bayesian models for the strategic calculation of their opponents, enabling decision makers to maximize their expected utility or minimize their expected loss. This new approach to risk analysis asserts that analysts should use Bayesian thinking to describe their beliefs about an opponent's goals, resources, optimism, and type of strategic calculation, such as minimax and level-k thinking. Within that framework, analysts then solve the problem from the perspective of the opponent while placing subjective probability distributions on all unknown quantities. This produces a distribution over the actions of the opponent and enables analysts to maximize their expected utilities.
The book has been tested and refined through years of classroom teaching experience. With an abundance of examples, problems, and fully worked out solutions, the text introduces the financial theory and relevant mathematical methods in a mathematically rigorous yet engaging way. This textbook provides complete coverage of discrete-time financial models that form the cornerstones of financial derivative pricing theory. Unlike similar texts in the field, this one presents multiple problem-solving approaches, linking related comprehensive techniques for pricing different types of financial derivatives. Key features:
* In-depth coverage of discrete-time theory and methodology
* Numerous, fully worked out examples and exercises in every chapter
* Mathematically rigorous and consistent, yet bridging various basic and more advanced concepts
* Judicious balance of financial theory and mathematical and computational methods
Guide to Material. This revision contains:
* Almost 200 pages of new material in all chapters
* A new chapter on elementary probability theory
* An expanded set of solved problems and additional exercises
* Answers to all exercises
This book is a comprehensive, self-contained, and unified treatment of the main theory and application of mathematical methods behind modern-day financial mathematics.
Table of Contents: List of Figures and Tables; Preface; Part I, Introduction to Pricing and Management of Financial Securities: 1. Mathematics of Compounding; 2. Primer on Pricing Risky Securities; 3. Portfolio Management; 4. Primer on Derivative Securities; Part II, Discrete-Time Modelling: 5. Single-Period Arrow-Debreu Models; 6. Introduction to Discrete-Time Stochastic Calculus; 7. Replication and Pricing in the Binomial Tree Model; 8. General Multi-Asset Multi-Period Model; Appendices: A. Elementary Probability Theory; B. Glossary of Symbols and Abbreviations; C. Answers and Hints to Exercises; References; Index.
Biographies: Giuseppe Campolieti is Professor of Mathematics at Wilfrid Laurier University in Waterloo, Canada. He has been a Natural Sciences and Engineering Research Council postdoctoral research fellow and university research fellow at the University of Toronto. In 1998, he joined the Masters in Mathematical Finance program as an instructor and later served as an adjunct professor in financial mathematics until 2002. Dr. Campolieti also founded a financial software and consulting company in 1998. He joined Laurier in 2002 as Associate Professor of Mathematics and as SHARCNET Chair in Financial Mathematics. Roman N. Makarov is Associate Professor and Chair of Mathematics at Wilfrid Laurier University. Prior to joining Laurier in 2003, he was an Assistant Professor of Mathematics at Siberian State University of Telecommunications and Informatics and a senior research fellow at the Laboratory of Monte Carlo Methods at the Institute of Computational Mathematics and Mathematical Geophysics in Novosibirsk, Russia.
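As a reminder of the core pricing result in the binomial tree model that such texts develop (stated here in its standard one-period form, not quoted from this book), with up/down factors u and d, one-period interest rate r satisfying d < 1 + r < u, and payoffs V_u and V_d:
\[ V_0 = \frac{1}{1+r}\,\big[\,q\,V_u + (1-q)\,V_d\,\big], \qquad q = \frac{(1+r) - d}{u - d} , \]
where q is the risk-neutral probability of an up move; multi-period prices follow by applying this recursion backwards through the tree.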
For courses in Econometrics. A Clear, Practical Introduction to Econometrics: Using Econometrics: A Practical Guide offers students an innovative introduction to elementary econometrics. Through real-world examples and exercises, the book covers the topic of single-equation linear regression analysis in an easily understandable format. The Seventh Edition is appropriate for all levels: beginning econometrics students, regression users seeking a refresher, and experienced practitioners who want a convenient reference. Praised as one of the most important texts in the last 30 years, the book retains the clarity and practicality of previous editions while adding a number of substantial improvements throughout.
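The single-equation linear regression model at the heart of the book has the familiar form (standard notation, included here only as a reminder):
\[ y_i = \beta_0 + \beta_1 x_{1i} + \cdots + \beta_k x_{ki} + \varepsilon_i , \]
with the ordinary least squares estimator \hat{\beta} = (X^{\top}X)^{-1}X^{\top}y in matrix form.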
* A useful guide to financial product modeling and to minimizing business risk and uncertainty
* Looks at a wide range of financial assets and markets and correlates them with enterprises' profitability
* Introduces advanced and novel machine learning techniques in finance, such as support vector machines, neural networks, random forests, k-nearest neighbors, extreme learning machines, and deep learning approaches, and applies them to analyze finance data sets
* Real-world, applicable examples to further understanding
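A minimal sketch of the kind of workflow such techniques imply, using a random forest on synthetic return data (our illustration only, assuming numpy and scikit-learn; it is not code from the book):

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic daily returns stand in for a real finance data set.
rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, size=1000)

# Features: the five most recent returns; target: whether the next return is positive.
X = np.column_stack([returns[i:i + 995] for i in range(5)])
y = (returns[5:] > 0).astype(int)

# Keep chronological order when splitting (no shuffling of time series data).
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, shuffle=False)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("out-of-sample accuracy:", model.score(X_test, y_test))

On pure noise the accuracy should hover around 0.5; the point of the sketch is the workflow, not the result.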
Although the unobserved components model (UCM) has many advantages over more popular forecasting techniques based on regression analysis, exponential smoothing, and ARIMA, it is not well known among practitioners outside the academic community. Time Series Modelling with Unobserved Components rectifies this deficiency by giving a practical overview of the UCM approach, covering some theoretical details, several applications, and the software for implementing UCMs. The book's first part discusses introductory time series and prediction theory. Unlike most other books on time series, this text includes a chapter on prediction at the beginning because the problem of predicting is not limited to the field of time series analysis. The second part introduces the UCM, the state space form, and related algorithms. It also provides practical modeling strategies to build and select the UCM that best fits the needs of time series analysts. The third part presents real-world applications, with a chapter focusing on business cycle analysis and the construction of band-pass filters using UCMs. The book also reviews software packages that offer ready-to-use procedures for UCMs as well as systems popular among statisticians and econometricians that allow general estimation of models in state space form. This book demonstrates the numerous benefits of using UCMs to model time series data. UCMs are simple to specify, their results are easy to visualize and communicate to non-specialists, and their forecasting performance is competitive. Moreover, various types of outliers can easily be identified, missing values are effortlessly managed, and working contemporaneously with time series observed at different frequencies poses no problem.
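The simplest example of the models described above is the local level UCM, shown here in its standard state-space form (textbook notation, not quoted from this book):
\[ y_t = \mu_t + \varepsilon_t, \qquad \mu_t = \mu_{t-1} + \eta_t, \qquad \varepsilon_t \sim \mathrm{N}(0,\sigma^2_{\varepsilon}), \quad \eta_t \sim \mathrm{N}(0,\sigma^2_{\eta}) , \]
where the observed series y_t is decomposed into an unobserved, slowly evolving level \mu_t plus noise; trend, seasonal, and cycle components are added in the same spirit, and Kalman-filter-type algorithms handle estimation, missing values, and forecasting.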
Law and economics research has had an enormous impact on the laws of contracts, torts, property, crimes, corporations, and antitrust, as well as public regulation and fundamental rights. The Law and Economics of Patent Damages, Antitrust, and Legal Process examines several areas of important research by a variety of international scholars. It contains technical papers on the appropriate way to estimate damages in patent disputes, as well as methods for evaluating relevant markets and vertically integrated firms when determining the competitive effects of mergers and other actions. There are also papers on the implications of different legal processes, regulations, and liability rules for consumer welfare, which range from the impact of delays in legal decisions in labour cases in France to issues of criminal liability related to the use of artificial intelligence. This volume of Research in Law and Economics is a must-read for researchers and professionals concerned with patent damages, antitrust, labour, and legal process.
This book scientifically tests the assertion that accommodative monetary policy can eliminate the "crowd out" problem, allowing fiscal stimulus programs (such as tax cuts or increased government spending) to stimulate the economy as intended. It also tests whether natural growth in the economy can cure the crowd out problem as well as, or better than, accommodative monetary policy. The book is intended to be the largest-scale scientific test ever performed on this topic. It includes about 800 separate statistical tests on the U.S. economy testing different parts or all of the period 1960-2010. These tests focus on whether accommodative monetary policy, which increases the pool of loanable resources, can offset the crowd out problem as well as natural growth in the economy. The book, employing the best scientific methods available to economists for this type of problem, concludes that accommodative monetary policy could have, but until the quantitative easing program, Federal Reserve efforts to accommodate fiscal stimulus programs were not large enough to offset more than 23% to 44% of any one year's crowd out problem. That provides the science part of the answer as to why accommodative monetary policy didn't accommodate: too little of it was tried. The book also tests whether other increases in loanable funds, occurring because of natural growth in the economy or changes in the savings rate, can also offset crowd out. It concludes they can, and that these changes tend to be several times as effective as accommodative monetary policy. This book's companion volume, Why Fiscal Stimulus Programs Fail, explores the policy implications of these results.
Financial econometrics combines mathematical and statistical theory and techniques to understand and solve problems in financial economics. Modeling and forecasting financial time series, such as prices, returns, interest rates, financial ratios, and defaults, are important parts of this field. In Financial Econometrics, you'll be introduced to this growing discipline and the concepts associated with it, from background material on probability theory and statistics to information regarding the properties of specific models and their estimation procedures. With this book as your guide, you'll become familiar with:
* Autoregressive conditional heteroskedasticity (ARCH) and GARCH modeling
* Principal components analysis (PCA) and factor analysis
* Stable processes and ARMA and GARCH models with fat-tailed errors
* Robust estimation methods
* Vector autoregressive and cointegrated processes, including advanced estimation methods for cointegrated systems
* And much more
The experienced author team of Svetlozar Rachev, Stefan Mittnik, Frank Fabozzi, Sergio Focardi, and Teo Jasic not only presents you with an abundant amount of information on financial econometrics, but also walks you through a wide array of examples to solidify your understanding of the issues discussed. Filled with in-depth insights and expert advice, Financial Econometrics provides comprehensive coverage of this discipline and clear explanations of how the models associated with it fit into today's investment management process.
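As a pointer to what the first item in the list above involves, the workhorse GARCH(1,1) model for a return series r_t = \mu + \varepsilon_t, with \varepsilon_t = \sigma_t z_t and z_t i.i.d. with mean zero and unit variance, specifies the conditional variance as (a standard formulation, not quoted from this book):
\[ \sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2, \qquad \omega > 0,\ \ \alpha,\beta \ge 0,\ \ \alpha + \beta < 1 \ \text{for covariance stationarity}. \]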
From the Foreword: "Big Data Management and Processing is [a] state-of-the-art book that deals with a wide range of topical themes in the field of Big Data. The book, which probes many issues related to this exciting and rapidly growing field, covers processing, management, analytics, and applications... [It] is a very valuable addition to the literature. It will serve as a source of up-to-date research in this continuously developing area. The book also provides an opportunity for researchers to explore the use of advanced computing technologies and their impact on enhancing our capabilities to conduct more sophisticated studies." ---Sartaj Sahni, University of Florida, USA "Big Data Management and Processing covers the latest Big Data research results in processing, analytics, management and applications. Both fundamental insights and representative applications are provided. This book is a timely and valuable resource for students, researchers and seasoned practitioners in Big Data fields." ---Hai Jin, Huazhong University of Science and Technology, China Big Data Management and Processing explores a range of big data related issues and their impact on the design of new computing systems. The twenty-one chapters were carefully selected and feature contributions from several outstanding researchers. The book endeavors to strike a balance between theoretical and practical coverage of innovative problem solving techniques for a range of platforms. It serves as a repository of paradigms, technologies, and applications that target different facets of big data computing systems. The first part of the book explores energy and resource management issues, as well as legal compliance and quality management for Big Data. It covers In-Memory computing and In-Memory data grids, as well as co-scheduling for high performance computing applications. The second part of the book includes comprehensive coverage of Hadoop and Spark, along with security, privacy, and trust challenges and solutions. The latter part of the book covers mining and clustering in Big Data, and includes applications in genomics, hospital big data processing, and vehicular cloud computing. The book also analyzes funding for Big Data projects.
"Prof. Nitis Mukhopadhyay and Prof. Partha Pratim Sengupta, who edited this volume with great attention and rigor, have certainly carried out noteworthy activities." - Giovanni Maria Giorgi, University of Rome (Sapienza) "This book is an important contribution to the development of indices of disparity and dissatisfaction in the age of globalization and social strife." - Shelemyahu Zacks, SUNY-Binghamton "It will not be an overstatement when I say that the famous income inequality index or wealth inequality index, which is most widely accepted across the globe is named after Corrado Gini (1984-1965). ... I take this opportunity to heartily applaud the two co-editors for spending their valuable time and energy in putting together a wonderful collection of papers written by the acclaimed researchers on selected topics of interest today. I am very impressed, and I believe so will be its readers." - K.V. Mardia, University of Leeds Gini coefficient or Gini index was originally defined as a standardized measure of statistical dispersion intended to understand an income distribution. It has evolved into quantifying inequity in all kinds of distributions of wealth, gender parity, access to education and health services, environmental policies, and numerous other attributes of importance. Gini Inequality Index: Methods and Applications features original high-quality peer-reviewed chapters prepared by internationally acclaimed researchers. They provide innovative methodologies whether quantitative or qualitative, covering welfare economics, development economics, optimization/non-optimization, econometrics, air quality, statistical learning, inference, sample size determination, big data science, and some heuristics. Never before has such a wide dimension of leading research inspired by Gini's works and their applicability been collected in one edited volume. The volume also showcases modern approaches to the research of a number of very talented and upcoming younger contributors and collaborators. This feature will give readers a window with a distinct view of what emerging research in this field may entail in the near future.
Complex dynamics constitute a growing and increasingly important area, as they offer a strong potential to explain and formalize natural, physical, financial and economic phenomena. This book pursues the ambitious goal of bringing together an extensive body of knowledge regarding complex dynamics from various academic disciplines. Beyond its focus on economics and finance, including for instance the evolution of macroeconomic growth models towards nonlinear structures as well as signal processing applications to stock markets, fundamental parts of the book are devoted to the use of nonlinear dynamics in mathematics, statistics, signal theory and processing. Numerous examples and applications, almost 700 illustrations, and numerical simulations based on the use of Matlab make the book an essential reference for researchers and students from many different disciplines who are interested in the nonlinear field. An appendix recapitulates the basic mathematical concepts required to use the book.
This two-volume work aims to present as completely as possible the methods of statistical inference with special reference to their economic applications. It is a well-integrated textbook presenting a wide diversity of models in a coherent and unified framework. The reader will find a description not only of the classical concepts and results of mathematical statistics, but also of concepts and methods recently developed for the specific needs of econometrics. Although the two volumes do not demand a high level of mathematical knowledge, they do draw on linear algebra and probability theory. The breadth of approaches and the extensive coverage of this two-volume work provide for a thorough and entirely self-contained course in modern economics. Volume 1 provides an introduction to general concepts and methods in statistics and econometrics, and goes on to cover estimation and prediction. Volume 2 focuses on testing, confidence regions, model selection, and asymptotic theory.
Originally published in 1971, this is a rigorous analysis of the economic aspects of the efficiency of public enterprises at the time. The author first restates and extends the relevant parts of welfare economics, and then illustrates its application to particular cases, drawing on the work of the National Board for Prices and Incomes, of which he was Deputy Chairman. The analysis is developed stage by stage, with the emphasis on applicability and ease of comprehension, rather than on generality or mathematical elegance. Financial performance, the second-best, the optimal degree of complexity of price structures and problems of optimal quality are first discussed in a static framework. Time is next introduced, leading to a marginal cost concept derived from a multi-period optimizing model. The analysis is then related to urban transport, shipping, gas and coal. This is likely to become a standard work of more general scope than the author's earlier book on electricity supply. It rests, however, on a similar combination of economic theory and high-level experience of the real problems of public enterprises.
In this compelling 1995 book, David Hendry and Mary Morgan bring together the classic papers of the pioneer econometricians. Together, these papers form the foundations of econometric thought. They are essential reading for anyone seeking to understand the aims, method and methodology of econometrics and the development of this statistical approach in economics. However, because they are technically straightforward, the book is also accessible to students and non-specialists. An editorial commentary places the readings in their historical context and indicates the continuing relevance of these early, yet highly sophisticated, works for current econometric analysis. While this book provides a companion volume to Mary Morgan's acclaimed The History of Econometric Ideas, the editors' commentary both adds to that earlier volume and also provides a stand-alone and synthetic account of the development of econometrics.
This book provides an up-to-date series of advanced chapters on applied financial econometric techniques pertaining to the various fields of commodities finance, mathematics & stochastics, international macroeconomics and financial econometrics. International Financial Markets: Volume I provides a key repository on the current state of knowledge, the latest debates and recent literature on international financial markets. Against the background of the "financialization of commodities" since the 2008 sub-prime crisis, section one contains recent contributions on commodity and financial markets, pushing the frontiers of applied econometrics techniques. The second section is devoted to exchange rate and current account dynamics in an environment characterized by large global imbalances. Part three examines the latest research in the field of meta-analysis in economics and finance. This book will be useful to students and researchers in applied econometrics, and to academics and students seeking convenient access to an unfamiliar area. It will also be of great interest to established researchers seeking a single repository on the current state of knowledge, current debates and relevant literature.
This book provides an up-to-date series of advanced chapters on applied financial econometric techniques pertaining to the various fields of commodities finance, mathematics & stochastics, international macroeconomics and financial econometrics. Financial Mathematics, Volatility and Covariance Modelling: Volume 2 provides a key repository on the current state of knowledge, the latest debates and recent literature on financial mathematics, volatility and covariance modelling. The first section is devoted to mathematical finance, stochastic modelling and control optimization. Chapters explore the recent financial crisis, the increase of uncertainty and volatility, and propose an alternative approach to deal with these issues. The second section covers financial volatility and covariance modelling and explores proposals for dealing with recent developments in financial econometrics. This book will be useful to students and researchers in applied econometrics, and to academics and students seeking convenient access to an unfamiliar area. It will also be of great interest to established researchers seeking a single repository on the current state of knowledge, current debates and relevant literature.
This book provides a detailed introduction to the theoretical and methodological foundations of production efficiency analysis using benchmarking. Two of the more popular methods of efficiency evaluation are Stochastic Frontier Analysis (SFA) and Data Envelopment Analysis (DEA), both of which are based on the concept of a production possibility set and its frontier. Depending on the assumed objectives of the decision-making unit, a Production, Cost, or Profit Frontier is constructed from observed data on input and output quantities and prices. While SFA uses different maximum likelihood estimation techniques to estimate a parametric frontier, DEA relies on mathematical programming to create a nonparametric frontier. Yet another alternative is the Convex Nonparametric Frontier, which is based on the assumed convexity of the production possibility set and creates a piecewise linear frontier consisting of a number of tangent hyperplanes. Three of the papers in this volume provide a detailed and relatively easy-to-follow exposition of the underlying theory from neoclassical production economics and offer step-by-step instructions on the appropriate model to apply in different contexts and how to implement them. Of particular appeal are the instructions on (i) how to write the codes for different SFA models in STATA, (ii) how to write a VBA Macro for repetitive solution of the DEA problem for each production unit in Excel Solver, and (iii) how to write the codes for the Nonparametric Convex Frontier estimation. The three other papers in the volume are primarily theoretical and will be of interest to PhD students and researchers hoping to make methodological and conceptual contributions to the field of nonparametric efficiency analysis.
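For orientation, the stochastic frontier model that SFA estimates can be written in its standard composed-error form (generic notation, not quoted from this volume):
\[ \ln y_i = x_i^{\top}\beta + v_i - u_i, \qquad v_i \sim \mathrm{N}(0,\sigma_v^{2}), \quad u_i \ge 0 , \]
where v_i is ordinary statistical noise and u_i is a one-sided inefficiency term; DEA, by contrast, recovers the frontier by solving a linear program for each decision-making unit, and the papers described above walk through the practical implementation of both approaches.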