Welcome to Loot.co.za!
Despite the unobserved components model (UCM) having many advantages over more popular forecasting techniques based on regression analysis, exponential smoothing, and ARIMA, the UCM is not well known among practitioners outside the academic community. Time Series Modelling with Unobserved Components rectifies this deficiency by giving a practical overview of the UCM approach, covering some theoretical details, several applications, and the software for implementing UCMs. The book's first part discusses introductory time series and prediction theory. Unlike most other books on time series, this text includes a chapter on prediction at the beginning because the problem of predicting is not limited to the field of time series analysis. The second part introduces the UCM, the state space form, and related algorithms. It also provides practical modeling strategies to build and select the UCM that best fits the needs of time series analysts. The third part presents real-world applications, with a chapter focusing on business cycle analysis and the construction of band-pass filters using UCMs. The book also reviews software packages that offer ready-to-use procedures for UCMs as well as systems popular among statisticians and econometricians that allow general estimation of models in state space form. This book demonstrates the numerous benefits of using UCMs to model time series data. UCMs are simple to specify, their results are easy to visualize and communicate to non-specialists, and their forecasting performance is competitive. Moreover, various types of outliers can easily be identified, missing values are effortlessly managed, and working contemporaneously with time series observed at different frequencies poses no problem.
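The simplest UCM, the local level model, treats the series as a random-walk level observed with noise. A minimal simulation sketch in Python (the variances and series length are illustrative choices, not taken from the book):

```python
import random

random.seed(42)

# Local level model: y_t = mu_t + eps_t,  mu_t = mu_{t-1} + eta_t.
# The unobserved component mu_t is a random walk; eps_t is observation noise.
sigma_eps, sigma_eta = 1.0, 0.3
mu = 10.0
level, obs = [], []
for _ in range(200):
    mu += random.gauss(0.0, sigma_eta)             # latent level evolves
    level.append(mu)
    obs.append(mu + random.gauss(0.0, sigma_eps))  # noisy observation of the level
```

Fitting a UCM reverses this simulation: given only `obs`, the model estimates the hidden `level` path and the two variances.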
Winner of the 2017 De Groot Prize awarded by the International Society for Bayesian Analysis (ISBA). A relatively new area of research, adversarial risk analysis (ARA) informs decision making when there are intelligent opponents and uncertain outcomes. Adversarial Risk Analysis develops methods for allocating defensive or offensive resources against intelligent adversaries. Many examples throughout illustrate the application of the ARA approach to a variety of games and strategic situations. The book:
- Focuses on the recent subfield of decision analysis, ARA
- Compares ideas from decision theory and game theory
- Uses multi-agent influence diagrams (MAIDs) throughout to help readers visualize complex information structures
- Applies the ARA approach to simultaneous games, auctions, sequential games, and defend-attack games
- Contains an extended case study based on a real application in railway security, which provides a blueprint for how to perform ARA in similar security situations
- Includes exercises at the end of most chapters, with selected solutions at the back of the book
The book shows decision makers how to build Bayesian models for the strategic calculation of their opponents, enabling decision makers to maximize their expected utility or minimize their expected loss. This new approach to risk analysis asserts that analysts should use Bayesian thinking to describe their beliefs about an opponent's goals, resources, optimism, and type of strategic calculation, such as minimax and level-k thinking. Within that framework, analysts then solve the problem from the perspective of the opponent while placing subjective probability distributions on all unknown quantities. This produces a distribution over the actions of the opponent and enables analysts to maximize their expected utilities.
Develop the analytical skills that are in high demand in businesses today with Camm/Cochran/Fry/Ohlmann's best-selling BUSINESS ANALYTICS, 5E. You master the full range of analytics as you strengthen descriptive, predictive and prescriptive analytic skills. Real examples and memorable visuals clearly illustrate data and results. Step-by-step instructions guide you through using Excel, Tableau, R or the Python-based Orange data mining software to perform advanced analytics. Practical, relevant problems at all levels of difficulty let you apply what you've learned. Updates throughout this edition address topics beyond traditional quantitative concepts, such as data wrangling, data visualization and data mining, which are increasingly important in today's business environment. MindTap and WebAssign online learning platforms are also available with an interactive eBook, algorithmic practice problems and Exploring Analytics visualizations to strengthen your understanding of key concepts.
From the Foreword: "Big Data Management and Processing is [a] state-of-the-art book that deals with a wide range of topical themes in the field of Big Data. The book, which probes many issues related to this exciting and rapidly growing field, covers processing, management, analytics, and applications... [It] is a very valuable addition to the literature. It will serve as a source of up-to-date research in this continuously developing area. The book also provides an opportunity for researchers to explore the use of advanced computing technologies and their impact on enhancing our capabilities to conduct more sophisticated studies." --Sartaj Sahni, University of Florida, USA
"Big Data Management and Processing covers the latest Big Data research results in processing, analytics, management and applications. Both fundamental insights and representative applications are provided. This book is a timely and valuable resource for students, researchers and seasoned practitioners in Big Data fields." --Hai Jin, Huazhong University of Science and Technology, China
Big Data Management and Processing explores a range of big data related issues and their impact on the design of new computing systems. The twenty-one chapters were carefully selected and feature contributions from several outstanding researchers. The book endeavors to strike a balance between theoretical and practical coverage of innovative problem solving techniques for a range of platforms. It serves as a repository of paradigms, technologies, and applications that target different facets of big data computing systems. The first part of the book explores energy and resource management issues, as well as legal compliance and quality management for Big Data. It covers In-Memory computing and In-Memory data grids, as well as co-scheduling for high performance computing applications.
The second part of the book includes comprehensive coverage of Hadoop and Spark, along with security, privacy, and trust challenges and solutions. The latter part of the book covers mining and clustering in Big Data, and includes applications in genomics, hospital big data processing, and vehicular cloud computing. The book also analyzes funding for Big Data projects.
A number of methodologies have been employed to provide decision making solutions to a whole assortment of financial problems in today's globalized markets. Hidden Markov Models in Finance by Mamon and Elliott will be the first systematic application of these methods to some special kinds of financial problems; namely, pricing options and variance swaps, valuation of life insurance policies, interest rate theory, credit risk modeling, risk management, analysis of future demand and inventory level, testing foreign exchange rate hypothesis, and early warning systems for currency crises. This book provides researchers and practitioners with analyses that allow them to sort through the random noise of financial markets (i.e., turbulence, volatility, emotion, chaotic events, etc.) and analyze the fundamental components of economic markets. Hence, Hidden Markov Models in Finance provides decision makers with a clear, accurate picture of core financial components by filtering out the random noise in financial markets.
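The filtering idea behind hidden Markov models, inferring the current hidden regime from noisy observations, can be sketched in a few lines. This two-state example uses invented transition and emission probabilities purely for illustration; it is not taken from the book:

```python
# Forward (filtering) recursion for a two-state hidden Markov model,
# e.g. a "calm" (state 0) vs "turbulent" (state 1) market regime.
# All probabilities below are illustrative.
trans = [[0.95, 0.05],   # P(next state | current state)
         [0.10, 0.90]]
emit = [[0.8, 0.2],      # P(obs | state): obs 0 = small move, obs 1 = large move
        [0.3, 0.7]]
prior = [0.5, 0.5]

def forward_filter(observations):
    """Return P(state_t | obs_1..obs_t) for each time step t."""
    belief = prior[:]
    history = []
    for y in observations:
        # Predict: propagate the belief through the transition matrix.
        pred = [sum(belief[i] * trans[i][j] for i in range(2)) for j in range(2)]
        # Update: weight by the emission likelihood and renormalise.
        upd = [pred[j] * emit[j][y] for j in range(2)]
        z = sum(upd)
        belief = [u / z for u in upd]
        history.append(belief)
    return history

beliefs = forward_filter([1, 1, 1, 0, 1])  # a run of mostly large moves
```

After a few large moves, the filtered probability of the "turbulent" state dominates, which is exactly the regime-detection effect exploited in financial applications of HMMs.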
Solve the DVA/FVA Overlap Issue and Effectively Manage Portfolio Credit Risk Counterparty Risk and Funding: A Tale of Two Puzzles explains how to study risk embedded in financial transactions between the bank and its counterparty. The authors provide an analytical basis for the quantitative methodology of dynamic valuation, mitigation, and hedging of bilateral counterparty risk on over-the-counter (OTC) derivative contracts under funding constraints. They explore credit, debt, funding, liquidity, and rating valuation adjustment (CVA, DVA, FVA, LVA, and RVA) as well as replacement cost (RC), wrong-way risk, multiple funding curves, and collateral. The first part of the book assesses today's financial landscape, including the current multi-curve reality of financial markets. In mathematical but model-free terms, the second part describes all the basic elements of the pricing and hedging framework. Taking a more practical slant, the third part introduces a reduced-form modeling approach in which the risk of default of the two parties only shows up through their default intensities. The fourth part addresses counterparty risk on credit derivatives through dynamic copula models. In the fifth part, the authors present a credit migrations model that allows you to account for rating-dependent credit support annex (CSA) clauses. They also touch on nonlinear FVA computations in credit portfolio models. The final part covers classical tools from stochastic analysis and gives a brief introduction to the theory of Markov copulas. The credit crisis and ongoing European sovereign debt crisis have shown the importance of the proper assessment and management of counterparty risk. This book focuses on the interaction and possible overlap between DVA and FVA terms. It also explores the particularly challenging issue of counterparty risk in portfolio credit modeling. 
Primarily for researchers and graduate students in financial mathematics, the book is also suitable for financial quants, managers in banks, CVA desks, and members of supervisory bodies.
Statistics for Finance develops students' professional skills in statistics with applications in finance. Developed from the authors' courses at the Technical University of Denmark and Lund University, the text bridges the gap between classical, rigorous treatments of financial mathematics that rarely connect concepts to data and books on econometrics and time series analysis that do not cover specific problems related to option valuation. The book discusses applications of financial derivatives pertaining to risk assessment and elimination. The authors cover various statistical and mathematical techniques, including linear and nonlinear time series analysis, stochastic calculus models, stochastic differential equations, Ito's formula, the Black-Scholes model, the generalized method-of-moments, and the Kalman filter. They explain how these tools are used to price financial derivatives, identify interest rate models, value bonds, estimate parameters, and much more. This textbook will help students understand and manage empirical research in financial engineering. It includes examples of how the statistical tools can be used to improve value-at-risk calculations and other issues. In addition, end-of-chapter exercises develop students' financial reasoning skills.
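The Black-Scholes call price mentioned above can be evaluated with nothing beyond the standard normal CDF. A minimal Python sketch, with illustrative parameter values:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call on a non-dividend-paying asset."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# At-the-money call: spot 100, strike 100, 5% rate, 20% vol, 1 year.
price = black_scholes_call(S=100, K=100, r=0.05, sigma=0.2, T=1.0)  # about 10.45
```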
The proliferation of financial derivatives over the past decades, options in particular, has underscored the increasing importance of derivative pricing literacy among students, researchers, and practitioners. Derivative Pricing: A Problem-Based Primer demystifies the essential derivative pricing theory by adopting a mathematically rigorous yet widely accessible pedagogical approach that will appeal to a wide variety of audiences. Abandoning the traditional "black-box" approach or theorists' "pedantic" approach, this textbook provides readers with a solid understanding of the fundamental mechanism of derivative pricing methodologies and their underlying theory through a diversity of illustrative examples. The abundance of exercises and problems makes the book well suited as a text for advanced undergraduates and beginning graduate students, as well as a reference for professionals and researchers who need a thorough understanding of not only "how," but also "why," derivative pricing works. It is especially ideal for students who need to prepare for the derivatives portion of the Society of Actuaries Investment and Financial Markets Exam. Features:
- Lucid explanations of the theory and assumptions behind various derivative pricing models.
- Emphasis on intuitions and mnemonics as well as common fallacies.
- Interspersed with illustrative examples and end-of-chapter problems that aid a deep understanding of concepts in derivative pricing.
- Mathematical derivations, while not eschewed, are made maximally accessible.
- A solutions manual is available for qualified instructors.
About the Author: Ambrose Lo is currently Assistant Professor of Actuarial Science at the Department of Statistics and Actuarial Science at the University of Iowa. He received his Ph.D. in Actuarial Science from the University of Hong Kong in 2014, with dependence structures, risk measures, and optimal reinsurance being his research interests.
He is a Fellow of the Society of Actuaries (FSA) and a Chartered Enterprise Risk Analyst (CERA). His research papers have been published in top-tier actuarial journals, such as ASTIN Bulletin: The Journal of the International Actuarial Association, Insurance: Mathematics and Economics, and Scandinavian Actuarial Journal.
This book is about learning from data using the Generalized Additive Models for Location, Scale and Shape (GAMLSS). GAMLSS extends the Generalized Linear Models (GLMs) and Generalized Additive Models (GAMs) to accommodate large complex datasets, which are increasingly prevalent. In particular, the GAMLSS statistical framework enables flexible regression and smoothing models to be fitted to the data. The GAMLSS model assumes that the response variable has any parametric (continuous, discrete or mixed) distribution which might be heavy- or light-tailed, and positively or negatively skewed. In addition, all the parameters of the distribution (location, scale, shape) can be modelled as linear or smooth functions of explanatory variables. Key Features:
- Provides a broad overview of flexible regression and smoothing techniques to learn from data whilst also focusing on the practical application of methodology using GAMLSS software in R.
- Includes a comprehensive collection of real data examples, which reflect the range of problems addressed by GAMLSS models and provide a practical illustration of the process of using flexible GAMLSS models for statistical learning.
- R code integrated into the text for ease of understanding and replication.
- Supplemented by a website with code, data and extra materials.
This book aims to help readers understand how to learn from data encountered in many fields. It will be useful for practitioners and researchers who wish to understand and use the GAMLSS models to learn from data and also for students who wish to learn GAMLSS through practical examples.
Oil and gas industries apply several techniques for assessing and mitigating the risks that are inherent in their operations. In this context, the application of Bayesian Networks (BNs) to risk assessment offers a different probabilistic version of causal reasoning. Introducing the probabilistic nature of hazards, conditional probability and Bayesian thinking, this book discusses how the cause and effect of process hazards can be modelled using BNs and how large BNs can be developed from basic building blocks. The focus is on the development of BNs for typical equipment in industry, including accident case studies, and on their usage alongside other conventional risk assessment methods. Aimed at professionals in the oil and gas industry, safety engineering and risk assessment, this book:
- Brings together the basics of Bayesian theory and Bayesian Networks and applies them to process safety hazards and risk assessment in the oil and gas industry
- Presents a sequence of steps for setting up the model, populating the model with data and simulating the model for practical cases in a systematic manner
- Includes a comprehensive list of sources of failure data and tips on modelling and simulation of large and complex networks
- Presents modelling and simulation of loss of containment of actual equipment in the oil and gas industry, such as separators, storage tanks, pipelines and compressors, and the associated risk assessments
- Discusses case studies to demonstrate the practicability of using Bayesian Networks in routine risk assessments
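The conditional-probability reasoning at the heart of a BN can be illustrated with a single Bayes' rule update; every number below is invented for illustration and is not from the book:

```python
# Bayes' rule update for a process hazard: how much does a sensor alarm
# raise the probability of a leak? All probabilities are illustrative.
p_leak = 0.01               # prior probability of a leak
p_alarm_leak = 0.95         # P(alarm | leak): sensor sensitivity
p_alarm_no_leak = 0.05      # P(alarm | no leak): false-alarm rate

# Total probability of an alarm, then the posterior via Bayes' rule.
p_alarm = p_alarm_leak * p_leak + p_alarm_no_leak * (1 - p_leak)
p_leak_given_alarm = p_alarm_leak * p_leak / p_alarm  # about 0.16
```

Even a sensitive sensor leaves the posterior at only about 16% because the prior is so low; a full BN chains many such updates across equipment, causes, and consequences.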
Praise for the first edition: [This book] reflects the extensive experience and significant contributions of the author to non-linear and non-Gaussian modeling. ... [It] is a valuable book, especially with its broad and accessible introduction of models in the state-space framework. -Statistics in Medicine What distinguishes this book from comparable introductory texts is the use of state-space modeling. Along with this come a number of valuable tools for recursive filtering and smoothing, including the Kalman filter, as well as non-Gaussian and sequential Monte Carlo filters. -MAA Reviews Introduction to Time Series Modeling with Applications in R, Second Edition covers numerous stationary and nonstationary time series models and tools for estimating and utilizing them. The goal of this book is to enable readers to build their own models to understand, predict and master time series. The second edition makes it possible for readers to reproduce examples in this book by using the freely available R package TSSS to perform computations for their own real-world time series problems. This book employs the state-space model as a generic tool for time series modeling and presents the Kalman filter, the non-Gaussian filter and the particle filter as convenient tools for recursive estimation for state-space models. Further, it also takes a unified approach based on the entropy maximization principle and employs various methods of parameter estimation and model selection, including the least squares method, the maximum likelihood method, recursive estimation for state-space models and model selection by AIC. Along with the standard stationary time series models, such as the AR and ARMA models, the book also introduces nonstationary time series models such as the locally stationary AR model, the trend model, the seasonal adjustment model, the time-varying coefficient AR model and nonlinear non-Gaussian state-space models. 
About the Author: Genshiro Kitagawa is a project professor at the University of Tokyo, the former Director-General of the Institute of Statistical Mathematics, and the former President of the Research Organization of Information and Systems.
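The Kalman filter highlighted above fits in a few lines for the scalar local level model. A sketch with illustrative noise variances; this is not code from the book or its TSSS package:

```python
import random

random.seed(1)

# Kalman filter for the local level model:
#   state:       mu_t = mu_{t-1} + eta_t,  eta_t ~ N(0, q)
#   observation: y_t  = mu_t + eps_t,      eps_t ~ N(0, r)
q, r = 0.1, 1.0

def kalman_filter(ys, m0=0.0, p0=10.0):
    """Return filtered means and variances for each observation."""
    m, p = m0, p0
    means, variances = [], []
    for y in ys:
        p = p + q             # predict: variance grows by the state noise
        k = p / (p + r)       # Kalman gain
        m = m + k * (y - m)   # update: pull the mean toward the observation
        p = (1 - k) * p       # update: shrink the variance
        means.append(m)
        variances.append(p)
    return means, variances

# Simulate data from the model itself, then filter it.
mu, ys = 0.0, []
for _ in range(100):
    mu += random.gauss(0.0, q ** 0.5)
    ys.append(mu + random.gauss(0.0, r ** 0.5))
means, variances = kalman_filter(ys)
```

The filtered variance does not depend on the data at all; it contracts from the diffuse prior toward a steady state, which is why the recursion is so cheap to run.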
This book provides an introduction to the use of statistical concepts and methods to model and analyze financial data. The ten chapters of the book fall naturally into three sections. Chapters 1 to 3 cover some basic concepts of finance, focusing on the properties of returns on an asset. Chapters 4 through 6 cover aspects of portfolio theory and the methods of estimation needed to implement that theory. The remainder of the book, Chapters 7 through 10, discusses several models for financial data, along with the implications of those models for portfolio theory and for understanding the properties of return data. The audience for the book is students majoring in Statistics and Economics as well as in quantitative fields such as Mathematics and Engineering. Readers are assumed to have some background in statistical methods along with courses in multivariate calculus and linear algebra.
Do economics and statistics succeed in explaining human social behaviour? To answer this question, Leland Gerson Neuberg studies some pioneering controlled social experiments. Starting in the late 1960s, economists and statisticians sought to improve social policy formation with random assignment experiments such as those that provided income guarantees in the form of a negative income tax. This book explores anomalies in the conceptual basis of such experiments and in the foundations of statistics and economics more generally. Scientific inquiry always faces certain philosophical problems. Controlled experiments of human social behaviour, however, cannot avoid some methodological difficulties not evident in physical science experiments. Drawing upon several examples, the author argues that methodological anomalies prevent microeconomics and statistics from explaining human social behaviour as coherently as the physical sciences explain nature. He concludes that controlled social experiments are a frequently overrated tool for social policy improvement.
Models for Dependent Time Series addresses the issues that arise and the methodology that can be applied when the dependence between time series is described and modeled. Whether you work in the economic, physical, or life sciences, the book shows you how to draw meaningful, applicable, and statistically valid conclusions from multivariate (or vector) time series data. The first four chapters discuss the two main pillars of the subject that have been developed over the last 60 years: vector autoregressive modeling and multivariate spectral analysis. These chapters provide the foundational material for the remaining chapters, which cover the construction of structural models and the extension of vector autoregressive modeling to high frequency, continuously recorded, and irregularly sampled series. The final chapter combines these approaches with spectral methods for identifying causal dependence between time series.
Web Resource: A supplementary website provides the data sets used in the examples as well as documented MATLAB (R) functions and other code for analyzing the examples and producing the illustrations. The site also offers technical details on the estimation theory and methods and the implementation of the models.
The state-space approach provides a formal framework where any result or procedure developed for a basic model can be seamlessly applied to a standard formulation written in state-space form. Moreover, it can accommodate, with reasonable effort, nonstandard situations such as observation errors, aggregation constraints, or missing in-sample values. Exploring the advantages of this approach, State-Space Methods for Time Series Analysis: Theory, Applications and Software presents many computational procedures that can be applied to a previously specified linear model in state-space form. After discussing the formulation of the state-space model, the book illustrates the flexibility of the state-space representation and covers the main state estimation algorithms: filtering and smoothing. It then shows how to compute the Gaussian likelihood for unknown coefficients in the state-space matrices of a given model before introducing subspace methods and their application. It also discusses signal extraction, describes two algorithms to obtain the VARMAX matrices corresponding to any linear state-space model, and addresses several issues relating to the aggregation and disaggregation of time series. The book concludes with a cross-sectional extension to the classical state-space formulation in order to accommodate longitudinal or panel data. Missing data is a common occurrence here, and the book explains imputation procedures necessary to treat missingness in both exogenous and endogenous variables.
Web Resource: The authors' E4 MATLAB (R) toolbox offers all the computational procedures, administrative and analytical functions, and related materials for time series analysis. This flexible, powerful, and free software tool enables readers to replicate the practical examples in the text and apply the procedures to their own work.
Model a Wide Range of Count Time Series
Handbook of Discrete-Valued Time Series presents state-of-the-art methods for modeling time series of counts and incorporates frequentist and Bayesian approaches for discrete-valued spatio-temporal data and multivariate data. While the book focuses on time series of counts, some of the techniques discussed can be applied to other types of discrete-valued time series, such as binary-valued or categorical time series.
Explore a Balanced Treatment of Frequentist and Bayesian Perspectives
Accessible to graduate-level students who have taken an elementary class in statistical time series analysis, the book begins with the history and current methods for modeling and analyzing univariate count series. It next discusses diagnostics and applications before proceeding to binary and categorical time series. The book then provides a guide to modern methods for discrete-valued spatio-temporal data, illustrating how far modern applications have evolved from their roots. The book ends with a focus on multivariate and long-memory count series.
Get Guidance from Masters in the Field
Written by a cohesive group of distinguished contributors, this handbook provides a unified account of the diverse techniques available for observation- and parameter-driven models. It covers likelihood and approximate likelihood methods, estimating equations, simulation methods, and a Bayesian approach for model fitting.
Mastering the basic concepts of mathematics is the key to understanding other subjects such as Economics, Finance, Statistics, and Accounting. Mathematics for Finance, Business and Economics is written informally for easy comprehension. Unlike traditional textbooks, it provides a combination of explanations, exploration and real-life applications of major concepts. Mathematics for Finance, Business and Economics discusses elementary mathematical operations, linear and non-linear functions and equations, differentiation and optimization, economic functions, summation, percentages and interest, arithmetic and geometric series, present and future values of annuities, matrices and Markov chains. Aided by the discussion of real-world problems and solutions, students across the business and economics disciplines will find this textbook perfect for gaining an understanding of a core plank of their studies.
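The present value of an ordinary annuity mentioned above follows directly from summing a geometric series. A quick Python check, with invented cash-flow figures:

```python
# Present value of an ordinary annuity: n payments of C at rate r per period,
#   PV = C * (1 - (1 + r)**-n) / r
# which is the closed form of the geometric series sum_{t=1}^{n} C / (1+r)**t.
def annuity_pv(C, r, n):
    return C * (1 - (1 + r) ** -n) / r

# 10 annual payments of 1000 discounted at 5% per year.
pv = annuity_pv(C=1000, r=0.05, n=10)              # about 7721.73
brute = sum(1000 / 1.05 ** t for t in range(1, 11))  # term-by-term check
```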
Financial, Macro and Micro Econometrics Using R, Volume 42, provides state-of-the-art information on important topics in econometrics, including multivariate GARCH, stochastic frontiers, fractional responses, specification testing and model selection, exogeneity testing, causal analysis and forecasting, GMM models, asset bubbles and crises, corporate investments, classification, forecasting, nonstandard problems, cointegration, financial market jumps and co-jumps, among other topics.
Computational finance is increasingly important in the financial industry, as a necessary instrument for applying theoretical models to real-world challenges. Indeed, many models used in practice involve complex mathematical problems, for which an exact or a closed-form solution is not available. Consequently, we need to rely on computational techniques and specific numerical algorithms. This book combines theoretical concepts with practical implementation. Furthermore, the numerical solution of models is exploited, both to enhance the understanding of some mathematical and statistical notions, and to acquire sound programming skills in MATLAB (R) that are also useful in several other programming languages. The material assumes the reader has a relatively limited knowledge of mathematics, probability, and statistics. Hence, the book contains a short description of the fundamental tools needed to address the two main fields of quantitative finance: portfolio selection and derivatives pricing. Both fields are developed here, with a particular emphasis on portfolio selection, where the author includes an overview of recent approaches. The book gradually takes readers from a basic to a medium level of expertise by using examples and exercises to simplify the understanding of complex models in finance, giving them the ability to place financial models in a computational setting. The book is ideal for courses focusing on quantitative finance, asset management, mathematical methods for economics and finance, investment banking, and corporate finance.
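One staple numerical technique in this field is Monte Carlo pricing. A small sketch in Python (the book itself works in MATLAB) prices a European call by simulating terminal prices under geometric Brownian motion; all parameter values are illustrative:

```python
import random
from math import exp, sqrt

random.seed(0)

def mc_call_price(S0, K, r, sigma, T, n_paths=100_000):
    """Monte Carlo price of a European call under geometric Brownian motion."""
    drift = (r - 0.5 * sigma ** 2) * T
    vol = sigma * sqrt(T)
    total = 0.0
    for _ in range(n_paths):
        ST = S0 * exp(drift + vol * random.gauss(0.0, 1.0))  # terminal price
        total += max(ST - K, 0.0)                            # call payoff
    return exp(-r * T) * total / n_paths                     # discounted mean

price = mc_call_price(S0=100, K=100, r=0.05, sigma=0.2, T=1.0)
# Should be close to the Black-Scholes value of roughly 10.45 for these inputs.
```

The estimator's standard error shrinks as 1/sqrt(n_paths), which is exactly the kind of accuracy-versus-cost trade-off such a course examines.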
High-Performance Computing for Big Data: Methodologies and Applications explores emerging high-performance architectures for data-intensive applications, novel efficient analytical strategies to boost data processing, and cutting-edge applications in diverse fields, such as machine learning, life science, neural networks, and neuromorphic engineering. The book is organized into two main sections. The first section covers Big Data architectures, including cloud computing systems and heterogeneous accelerators. It also covers emerging 3D IC design principles for memory architectures and devices. The second section of the book illustrates emerging and practical applications of Big Data across several domains, including bioinformatics, deep learning, and neuromorphic engineering. Features:
- Covers a wide range of Big Data architectures, including distributed systems like Hadoop/Spark
- Includes accelerator-based approaches for big data applications, such as GPU-based acceleration techniques, and hardware acceleration such as FPGA/CGRA/ASICs
- Presents emerging memory architectures and devices such as NVM and STT-RAM, and 3D IC design principles
- Describes advanced algorithms for different big data application domains
- Illustrates novel analytics techniques for Big Data applications, scheduling, mapping, and partitioning methodologies
Featuring contributions from leading experts, this book presents state-of-the-art research on the methodologies and applications of high-performance computing for big data applications.
About the Editor: Dr. Chao Wang is an Associate Professor in the School of Computer Science at the University of Science and Technology of China. He is the Associate Editor of ACM Transactions on Design Automation of Electronic Systems (TODAES), Applied Soft Computing, Microprocessors and Microsystems, IET Computers & Digital Techniques, and International Journal of Electronics. Dr. Chao Wang was the recipient of the Youth Innovation Promotion Association, CAS, an ACM China Rising Star Honorable Mention (2016), and the best IP nomination of DATE 2015. He is now on the CCF Technical Committee on Computer Architecture and the CCF Task Force on Formal Methods. He is a Senior Member of IEEE, a Senior Member of CCF, and a Senior Member of ACM.
This book presents recent developments on the theoretical, algorithmic, and application aspects of Big Data in Complex and Social Networks. The book consists of four parts, covering a wide range of topics. The first part of the book focuses on data storage and data processing. It explores how the efficient storage of data can fundamentally support intensive data access and queries, which enables sophisticated analysis. It also looks at how data processing and visualization help to communicate information clearly and efficiently. The second part of the book is devoted to the extraction of essential information and the prediction of web content. The book shows how Big Data analysis can be used to understand the interests, location, and search history of users and provide more accurate predictions of user behavior. The latter two parts of the book cover the protection of privacy and security, and emergent applications of big data and social networks. It analyzes how to model rumor diffusion, identify misinformation from massive data, and design intervention strategies. Applications of big data and social networks in multilayer networks and multiparty systems are also covered in-depth.
This work examines theoretical issues, as well as practical developments, in statistical inference related to econometric models and analysis. It offers discussions on such areas as the function of statistics in aggregation, income inequality, poverty, health, spatial econometrics, panel and survey data, bootstrapping and time series.
This textbook discusses central statistical concepts and their use in business and economics. To endure the hardship of abstract statistical thinking, business and economics students need to see interesting applications at an early stage. Accordingly, the book predominantly focuses on exercises, several of which draw on simple applications of non-linear theory. The main body presents central ideas in a simple, straightforward manner; the exposition is concise, without sacrificing rigor. The book bridges the gap between theory and applications, with most exercises formulated in an economic context. Its simplicity of style makes the book suitable for students at any level, and every chapter starts out with simple problems. Several exercises, however, are more challenging, as they are devoted to the discussion of non-trivial economic problems where statistics plays a central part.
This book aims to bring together studies using different data types (panel data, cross-sectional data and time series data) and different methods (for example, panel regression, nonlinear time series, the chaos approach, deep learning, and machine learning techniques, among others), and to create a source for those interested in these topics and methods by addressing selected applied econometrics topics that have been developed in recent years. It also offers a common meeting ground for econometrics educators in Turkey and helps bring the authors' knowledge to interested readers. The book can likewise serve as material for "Applied Economics and Econometrics" courses in postgraduate education.