The composition of portfolios is one of the most fundamental and important methods in financial engineering, used to control investment risk. This book provides a comprehensive overview of statistical inference for portfolios and its various applications. A variety of asset processes are introduced, including non-Gaussian stationary processes, nonlinear processes, and non-stationary processes, and the book provides a framework for statistical inference using local asymptotic normality (LAN). The approach is generalized for portfolio estimation so that many important problems can be covered. This book can primarily be used as a reference by researchers in statistics, mathematics, finance, econometrics, and genomics. It can also be used as a textbook by senior undergraduate and graduate students in these fields.
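As a concrete, if simplified, illustration of estimating a portfolio from data, here is a minimal Python sketch of the classical plug-in approach: estimate the covariance matrix from synthetic returns and form global minimum-variance weights. All numbers are hypothetical, and this is standard mean-variance machinery, not the LAN-based inference the book develops.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.0004, 0.0003, 0.0005, 0.0002])   # hypothetical mean daily returns
vol = np.array([0.010, 0.012, 0.015, 0.008])      # hypothetical daily volatilities
corr = np.full((4, 4), 0.3) + 0.7 * np.eye(4)     # common 0.3 cross-correlation
returns = rng.multivariate_normal(mu, np.outer(vol, vol) * corr, size=1000)

Sigma = np.cov(returns, rowvar=False)   # plug-in covariance estimate
ones = np.ones(4)
w = np.linalg.solve(Sigma, ones)
w /= w.sum()                            # global minimum-variance weights
print("weights:", np.round(w, 3))
print("estimated portfolio volatility:", np.sqrt(w @ Sigma @ w))
```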
Although the unobserved components model (UCM) has many advantages over more popular forecasting techniques based on regression analysis, exponential smoothing, and ARIMA, the UCM is not well known among practitioners outside the academic community. Time Series Modelling with Unobserved Components rectifies this deficiency by giving a practical overview of the UCM approach, covering some theoretical details, several applications, and the software for implementing UCMs. The book's first part discusses introductory time series and prediction theory. Unlike most other books on time series, this text includes a chapter on prediction at the beginning because the problem of predicting is not limited to the field of time series analysis. The second part introduces the UCM, the state space form, and related algorithms. It also provides practical modeling strategies to build and select the UCM that best fits the needs of time series analysts. The third part presents real-world applications, with a chapter focusing on business cycle analysis and the construction of band-pass filters using UCMs. The book also reviews software packages that offer ready-to-use procedures for UCMs as well as systems popular among statisticians and econometricians that allow general estimation of models in state space form. This book demonstrates the numerous benefits of using UCMs to model time series data. UCMs are simple to specify, their results are easy to visualize and communicate to non-specialists, and their forecasting performance is competitive. Moreover, various types of outliers can easily be identified, missing values are effortlessly managed, and working contemporaneously with time series observed at different frequencies poses no problem.
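A minimal sketch of how simple a UCM can be to specify in practice, here using the statsmodels Python package (one of several software options; the book reviews others): a local level model fit to simulated data with deliberately missing observations, which the Kalman filter handles without special treatment.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
# Simulate a random-walk-plus-noise series (the local level model).
n = 200
level = np.cumsum(rng.normal(0, 0.5, n))
y = level + rng.normal(0, 1.0, n)
y[50:55] = np.nan                 # missing values are handled by the Kalman filter

model = sm.tsa.UnobservedComponents(y, level='local level')
res = model.fit(disp=False)
print(res.summary().tables[1])    # estimated level and irregular variances
print(res.forecast(5))            # out-of-sample forecasts
```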
Principles of Copula Theory explores the state of the art on copulas and provides you with the foundation to use copulas in a variety of applications. Throughout the book, historical remarks and further readings highlight active research in the field, including new results, streamlined presentations, and new proofs of old results. After covering the essentials of copula theory, the book addresses the issue of modeling dependence among components of a random vector using copulas. It then presents copulas from the point of view of measure theory, compares methods for the approximation of copulas, and discusses the Markov product for 2-copulas. The authors also examine selected families of copulas that possess appealing features from both theoretical and applied viewpoints. The book concludes with in-depth discussions on two generalizations of copulas: quasi- and semi-copulas. Although copulas are not the solution to all stochastic problems, they are an indispensable tool for understanding several problems about stochastic dependence. This book gives you the solid and formal mathematical background to apply copulas to a range of mathematical areas, such as probability, real analysis, measure theory, and algebraic structures.
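To make the dependence-modeling idea concrete, here is a minimal Python sketch of the Gaussian copula construction on synthetic data (the book covers far more general families): correlated normals are pushed through the normal CDF to uniforms, then given arbitrary marginals, with the dependence structure preserved.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
rho = 0.7
# Step 1: correlated standard normals.
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=10_000)
# Step 2: map to uniforms with the normal CDF; the joint law of u is the Gaussian copula.
u = stats.norm.cdf(z)
# Step 3: impose arbitrary marginals via inverse CDFs (here exponential and lognormal).
x1 = stats.expon.ppf(u[:, 0], scale=2.0)
x2 = stats.lognorm.ppf(u[:, 1], s=0.5)
# Rank dependence survives the marginal transforms (Spearman's rho is copula-determined).
print(stats.spearmanr(x1, x2)[0])
```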
Winner of the 2017 DeGroot Prize awarded by the International Society for Bayesian Analysis (ISBA).
A relatively new area of research, adversarial risk analysis (ARA) informs decision making when there are intelligent opponents and uncertain outcomes. Adversarial Risk Analysis develops methods for allocating defensive or offensive resources against intelligent adversaries. Many examples throughout illustrate the application of the ARA approach to a variety of games and strategic situations.
* Focuses on ARA, a recent subfield of decision analysis
* Compares ideas from decision theory and game theory
* Uses multi-agent influence diagrams (MAIDs) throughout to help readers visualize complex information structures
* Applies the ARA approach to simultaneous games, auctions, sequential games, and defend-attack games
* Contains an extended case study based on a real application in railway security, which provides a blueprint for how to perform ARA in similar security situations
* Includes exercises at the end of most chapters, with selected solutions at the back of the book
The book shows decision makers how to build Bayesian models for the strategic calculation of their opponents, enabling decision makers to maximize their expected utility or minimize their expected loss. This new approach to risk analysis asserts that analysts should use Bayesian thinking to describe their beliefs about an opponent's goals, resources, optimism, and type of strategic calculation, such as minimax and level-k thinking. Within that framework, analysts then solve the problem from the perspective of the opponent while placing subjective probability distributions on all unknown quantities. This produces a distribution over the actions of the opponent and enables analysts to maximize their expected utilities.
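A toy numerical sketch of that final step, with entirely hypothetical sites, losses, and elicited probabilities: once the analyst's beliefs about the opponent are summarized as a distribution over attacks, choosing a defense reduces to an expected-utility (here, expected-loss) calculation.

```python
import numpy as np

# Defender's subjective beliefs about the attacker (the ARA step: model the
# opponent's strategic calculation and summarize it as a distribution over actions).
sites = ["port", "rail", "airport"]
p_attack = np.array([0.5, 0.3, 0.2])   # hypothetical elicited attack probabilities
loss = np.array([10.0, 6.0, 8.0])      # loss if the attacked site is undefended
foil = 0.9                             # chance a defended site repels the attack

# Expected loss for each pure defensive allocation.
for i, site in enumerate(sites):
    exp_loss = sum(
        p_attack[j] * loss[j] * ((1 - foil) if j == i else 1.0)
        for j in range(len(sites))
    )
    print(f"defend {site}: expected loss = {exp_loss:.2f}")
# The defender picks the allocation minimizing expected loss
# (equivalently, maximizing expected utility).
```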
This book primarily addresses the optimality aspects of covariate designs. A covariate model is a combination of ANOVA and regression models. Optimal estimation of the parameters of the model using a suitable choice of designs is of great importance, as such choices allow experimenters to extract maximum information about the unknown model parameters. The main emphasis of this monograph is to start with an assumed covariate model in combination with some standard ANOVA set-ups such as CRD, RBD, BIBD, GDD, BTIBD, BPEBD, cross-over, multi-factor, split-plot and strip-plot designs, treatment control designs, etc., and to discuss the nature and availability of optimal covariate designs. In some situations, optimal estimation of both the ANOVA and the regression parameters is provided. Global optimality and D-optimality criteria are mainly used in selecting the design. The standard optimality results of both discrete and continuous set-ups have been adapted, and several novel combinatorial techniques have been applied for the construction of optimum designs using Hadamard matrices, Kronecker products, Rao-Khatri products, and mixed orthogonal arrays, to name a few.
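A minimal sketch of the Hadamard-matrix device mentioned above, as a simplified illustration rather than a construction from the monograph: the columns of a Hadamard matrix are mutually orthogonal, so covariate columns drawn from them are orthogonal to each other and to the all-ones column, the structure that underlies D- and globally optimal covariate designs.

```python
import numpy as np
from scipy.linalg import hadamard

H = hadamard(8)          # 8x8 matrix with +/-1 entries and mutually orthogonal columns
ones = H[:, 0]           # first column is all ones (absorbs the general mean)
Z = H[:, 1:4]            # three candidate covariate columns, values already in [-1, 1]
# Orthogonality makes the covariate information matrix Z'Z diagonal (= 8 I).
print(Z.T @ Z)
print(ones @ Z)          # covariates are orthogonal to the mean column
```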
From the Foreword: "Big Data Management and Processing is [a] state-of-the-art book that deals with a wide range of topical themes in the field of Big Data. The book, which probes many issues related to this exciting and rapidly growing field, covers processing, management, analytics, and applications... [It] is a very valuable addition to the literature. It will serve as a source of up-to-date research in this continuously developing area. The book also provides an opportunity for researchers to explore the use of advanced computing technologies and their impact on enhancing our capabilities to conduct more sophisticated studies." - Sartaj Sahni, University of Florida, USA
"Big Data Management and Processing covers the latest Big Data research results in processing, analytics, management and applications. Both fundamental insights and representative applications are provided. This book is a timely and valuable resource for students, researchers and seasoned practitioners in Big Data fields." - Hai Jin, Huazhong University of Science and Technology, China
Big Data Management and Processing explores a range of big data related issues and their impact on the design of new computing systems. The twenty-one chapters were carefully selected and feature contributions from several outstanding researchers. The book endeavors to strike a balance between theoretical and practical coverage of innovative problem-solving techniques for a range of platforms. It serves as a repository of paradigms, technologies, and applications that target different facets of big data computing systems. The first part of the book explores energy and resource management issues, as well as legal compliance and quality management for Big Data. It covers In-Memory computing and In-Memory data grids, as well as co-scheduling for high performance computing applications. The second part of the book includes comprehensive coverage of Hadoop and Spark, along with security, privacy, and trust challenges and solutions. The latter part of the book covers mining and clustering in Big Data, and includes applications in genomics, hospital big data processing, and vehicular cloud computing. The book also analyzes funding for Big Data projects.
* A useful guide to financial product modeling and to minimizing business risk and uncertainty
* Looks at a wide range of financial assets and markets and correlates them with enterprises' profitability
* Introduces advanced and novel machine learning techniques in finance, such as Support Vector Machines, Neural Networks, Random Forests, K-Nearest Neighbors, Extreme Learning Machines, and Deep Learning approaches, and applies them to the analysis of financial data sets
* Includes real-world examples to further understanding
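As a hedged illustration of one listed technique, here is a minimal scikit-learn sketch: a random forest classifier fit to a synthetic feature matrix standing in for lagged returns and a volatility proxy. The features, labels, and signal strength are all invented for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)
# Hypothetical features: two lagged returns and a volatility proxy.
n = 2000
X = rng.normal(size=(n, 3))
# Synthetic label: next-day direction driven by a weak signal plus heavy noise.
y = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=2.0, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("directional accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```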
Risk Measures and Insurance Solvency Benchmarks: Fixed-Probability Levels in Renewal Risk Models is written for academics and practitioners who are concerned about potential weaknesses of the Solvency II regulatory system. It is also intended for readers who are interested in pure and applied probability, have a taste for classical and asymptotic analysis, and are motivated to delve into rather intensive calculations. The formal prerequisite for this book is a good background in analysis. The desired prerequisite is some degree of probability training, but someone with knowledge of the classical real-variable theory, including asymptotic methods, will also find this book interesting. For those who find the proofs too complicated, it may be reassuring that most results in this book are formulated in rather elementary terms. This book can also be used as reading material for basic courses in risk measures, insurance mathematics, and applied probability. The material of this book was partly used by the author for his courses at several universities in Moscow, at Copenhagen University, and at the University of Montreal.
Features
* Requires only minimal mathematical prerequisites in analysis and probability
* Suitable for researchers and postgraduate students in related fields
* Could be used as a supplement to courses in risk measures, insurance mathematics and applied probability
This textbook concisely covers math knowledge and tools useful for business and economics studies, including matrix analysis, basic math concepts, general optimization, dynamic optimization, and ordinary differential equations. Basic math tools, particularly optimization tools, are essential for students in a business school, especially for students in economics, accounting, finance, management, and marketing. It is standard practice nowadays for a graduate program in a business school to require a short and intense course in math just before or immediately after the students enter the program. Math in Economics aims to be the main textbook for such a crash course. The first edition was published by People's University Publisher, China. This new edition contains an added chapter on probability theory, along with changes and improvements throughout.
This book presents selected peer-reviewed contributions from the International Conference on Time Series and Forecasting, ITISE 2018, held in Granada, Spain, on September 19-21, 2018. The first three parts of the book focus on the theory of time series analysis and forecasting, and discuss statistical methods, modern computational intelligence methodologies, econometric models, financial forecasting, and risk analysis. In turn, the last three parts are dedicated to applied topics and include papers on time series analysis in the earth sciences, energy time series forecasting, and time series analysis and prediction in other real-world problems. The book offers readers valuable insights into the different aspects of time series analysis and forecasting, allowing them to benefit both from its sophisticated and powerful theory, and from its practical applications, which address real-world problems in a range of disciplines. The ITISE conference series provides a valuable forum for scientists, engineers, educators and students to discuss the latest advances and implementations in the field of time series analysis and forecasting. It focuses on interdisciplinary and multidisciplinary research encompassing computer science, mathematics, statistics and econometrics.
This volume comprises the classic articles on methods of identification and estimation of simultaneous equations econometric models. It includes path-breaking contributions by Trygve Haavelmo and Tjalling Koopmans, who founded the subject and received Nobel prizes for their work. It presents original articles that developed and analysed the leading methods for estimating the parameters of simultaneous equations systems: instrumental variables, indirect least squares, generalized least squares, two-stage and three-stage least squares, and maximum likelihood. Many of the articles are not readily accessible to readers in any other form.
The beginning of the age of artificial intelligence and machine learning has created new challenges and opportunities for data analysts, statisticians, mathematicians, econometricians, computer scientists and many others. At the root of these techniques are algorithms and methods for clustering and classifying different types of large datasets, including time series data. Time Series Clustering and Classification includes relevant developments on observation-based, feature-based and model-based traditional and fuzzy clustering methods, feature-based and model-based classification methods, and machine learning methods. It presents a broad and self-contained overview of techniques for both researchers and students.
Features
* Provides an overview of the methods and applications of pattern recognition of time series
* Covers a wide range of techniques, including unsupervised and supervised approaches
* Includes a range of real examples from medicine, finance, environmental science, and more
* R and MATLAB code, and relevant data sets, are available on a supplementary website
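A minimal sketch of the feature-based clustering idea on synthetic data (one of several approaches the book covers): each series is summarized by its first few autocorrelations, and k-means then clusters the resulting feature vectors.

```python
import numpy as np
from statsmodels.tsa.stattools import acf
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)

def ar1(phi, n=300):
    """Simulate a simple AR(1) series with the given persistence."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

# Two hypothetical regimes: near-white-noise vs strongly autocorrelated series.
series = [ar1(0.05) for _ in range(10)] + [ar1(0.9) for _ in range(10)]
# Feature-based step: represent each series by its first five autocorrelations.
features = np.array([acf(s, nlags=5)[1:] for s in series])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(labels)   # the two AR regimes should separate into two clusters
```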
"Prof. Nitis Mukhopadhyay and Prof. Partha Pratim Sengupta, who edited this volume with great attention and rigor, have certainly carried out noteworthy activities." - Giovanni Maria Giorgi, University of Rome (Sapienza) "This book is an important contribution to the development of indices of disparity and dissatisfaction in the age of globalization and social strife." - Shelemyahu Zacks, SUNY-Binghamton "It will not be an overstatement when I say that the famous income inequality index or wealth inequality index, which is most widely accepted across the globe is named after Corrado Gini (1984-1965). ... I take this opportunity to heartily applaud the two co-editors for spending their valuable time and energy in putting together a wonderful collection of papers written by the acclaimed researchers on selected topics of interest today. I am very impressed, and I believe so will be its readers." - K.V. Mardia, University of Leeds Gini coefficient or Gini index was originally defined as a standardized measure of statistical dispersion intended to understand an income distribution. It has evolved into quantifying inequity in all kinds of distributions of wealth, gender parity, access to education and health services, environmental policies, and numerous other attributes of importance. Gini Inequality Index: Methods and Applications features original high-quality peer-reviewed chapters prepared by internationally acclaimed researchers. They provide innovative methodologies whether quantitative or qualitative, covering welfare economics, development economics, optimization/non-optimization, econometrics, air quality, statistical learning, inference, sample size determination, big data science, and some heuristics. Never before has such a wide dimension of leading research inspired by Gini's works and their applicability been collected in one edited volume. The volume also showcases modern approaches to the research of a number of very talented and upcoming younger contributors and collaborators. This feature will give readers a window with a distinct view of what emerging research in this field may entail in the near future.
Originally published in 1971, this is a rigorous analysis of the economic aspects of the efficiency of public enterprises at the time. The author first restates and extends the relevant parts of welfare economics, and then illustrates its application to particular cases, drawing on the work of the National Board for Prices and Incomes, of which he was Deputy Chairman. The analysis is developed stage by stage, with the emphasis on applicability and ease of comprehension, rather than on generality or mathematical elegance. Financial performance, the second-best, the optimal degree of complexity of price structures and problems of optimal quality are first discussed in a static framework. Time is next introduced, leading to a marginal cost concept derived from a multi-period optimizing model. The analysis is then related to urban transport, shipping, gas and coal. This is likely to become a standard work of more general scope than the author's earlier book on electricity supply. It rests, however, on a similar combination of economic theory and high-level experience of the real problems of public enterprises.
The second edition of this widely acclaimed text presents a thoroughly up-to-date intuitive account of recent developments in econometrics. It continues to present the frontiers of research in an accessible form for non-specialist econometricians, advanced undergraduates and graduate students wishing to carry out applied econometric research. This new edition contains substantially revised chapters on cointegration and vector autoregressive (VAR) modelling, reflecting the developments that have been made in these important areas since the first edition. Special attention is given to the Dickey-Pantula approach and the testing for the order of integration of a variable in the presence of a structural break. For VAR models, impulse response analysis is explained and illustrated. There is also a detailed but intuitive explanation of the Johansen method, an increasingly popular technique. The text contains specially constructed and original tables of critical values for a wide range of tests for stationarity and cointegration. These tables are for Dickey-Fuller tests, Dickey-Hasza-Fuller and HEGY seasonal integration tests and the Perron 'additive outlier' integration test.
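A minimal sketch of the kind of unit-root testing those critical-value tables support, using the augmented Dickey-Fuller test from statsmodels on simulated I(1) and I(0) series (the book's own tables cover additional tests, such as the seasonal and structural-break variants).

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(5)
eps = rng.normal(size=500)
random_walk = np.cumsum(eps)   # I(1): has a unit root
white_noise = eps              # I(0): stationary

for name, series in [("random walk", random_walk), ("white noise", white_noise)]:
    stat, pvalue, *_ = adfuller(series)
    print(f"{name}: ADF statistic = {stat:.2f}, p-value = {pvalue:.3f}")
# Failing to reject for the random walk and rejecting for white noise
# is the textbook pattern the critical-value tables formalize.
```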
The recent financial crisis has heightened the need for appropriate methodologies for managing and monitoring complex risks in financial markets. The measurement, management, and regulation of risks in portfolios composed of credits, credit derivatives, or life insurance contracts is difficult because of the nonlinearities of risk models, dependencies between individual risks, and the several thousands of contracts in large portfolios. The granularity principle was introduced in the Basel regulations for credit risk to solve these difficulties in computing capital reserves. In this book, authors Patrick Gagliardini and Christian Gourieroux provide the first comprehensive overview of the granularity theory and illustrate its usefulness for a variety of problems related to risk analysis, statistical estimation, and derivative pricing in finance and insurance. They show how the granularity principle leads to analytical formulas for risk analysis that are simple to implement and accurate even when the portfolio size is large.
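A toy Monte Carlo sketch of the granularity idea in a one-factor credit model (all parameters hypothetical): as the number of loans grows, the simulated loss-rate quantile approaches the large-pool, infinitely granular limit, which is the asymptotic formula that granularity adjustments correct for finite portfolios.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
pd_rate, rho, n_sims = 0.02, 0.15, 100_000
c = norm.ppf(pd_rate)          # default threshold in the one-factor model

def loss_rate_quantile(n_loans, q=0.99):
    Z = rng.normal(size=n_sims)                                   # systematic factor
    p_cond = norm.cdf((c - np.sqrt(rho) * Z) / np.sqrt(1 - rho))  # conditional default prob.
    losses = rng.binomial(n_loans, p_cond) / n_loans              # adds idiosyncratic noise
    return np.quantile(losses, q)

# Infinitely granular (large-pool) benchmark at the same confidence level;
# the 99% loss quantile corresponds to the 1% quantile of the factor Z.
limit = norm.cdf((c - np.sqrt(rho) * norm.ppf(0.01)) / np.sqrt(1 - rho))
for n in (50, 500, 5000):
    print(f"n = {n:>4}: simulated 99% loss rate = {loss_rate_quantile(n):.4f}")
print(f"large-pool limit: {limit:.4f}")
```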
In many branches of science, relevant observations are taken sequentially over time. Bayesian Analysis of Time Series discusses how to use models that explain the probabilistic characteristics of these time series, and then utilizes the Bayesian approach to make inferences about their parameters. This is done by taking the prior information and, via Bayes' theorem, implementing Bayesian inferences of estimation, testing hypotheses, and prediction. The methods are demonstrated using both R and WinBUGS. The R package is primarily used to generate observations from a given time series model, while the WinBUGS package allows one to perform a posterior analysis that provides a way to determine the characteristics of the posterior distribution of the unknown parameters.
Features
* Presents a comprehensive introduction to the Bayesian analysis of time series
* Gives many examples over a wide variety of fields, including biology, agriculture, business, economics, sociology, and astronomy
* Contains numerous exercises at the end of each chapter, many of which use R and WinBUGS
* Can be used in graduate courses in statistics and biostatistics, but is also appropriate for researchers, practitioners and consulting statisticians
About the author
Lyle D. Broemeling, Ph.D., is Director of Broemeling and Associates Inc., and is a consulting biostatistician. He has been involved with academic health science centers for about 20 years and has taught and been a consultant at the University of Texas Medical Branch in Galveston, the University of Texas MD Anderson Cancer Center, and the University of Texas School of Public Health. His main interest is in developing Bayesian methods for use in medical and biological problems and in authoring textbooks in statistics. His previous books for Chapman & Hall/CRC include Bayesian Biostatistics and Diagnostic Medicine, and Bayesian Methods for Agreement.
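The book's workflow pairs R (simulation) with WinBUGS (posterior analysis); as a language-neutral illustration, here is a minimal Python sketch of the same two steps for an AR(1) model, using the conjugate normal posterior that arises under a flat prior when the noise variance is treated as known (an assumption made purely to keep the example closed-form).

```python
import numpy as np

rng = np.random.default_rng(7)
# Step 1 (simulation, the role R plays in the book): generate an AR(1) series.
phi_true, sigma, n = 0.6, 1.0, 500
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + rng.normal(scale=sigma)

# Step 2 (posterior analysis, the role WinBUGS plays): with a flat prior on phi
# and sigma known, the conditional posterior of phi is normal.
y_lag, y_cur = y[:-1], y[1:]
post_mean = (y_lag @ y_cur) / (y_lag @ y_lag)
post_sd = sigma / np.sqrt(y_lag @ y_lag)
print(f"posterior: phi ~ N({post_mean:.3f}, {post_sd:.3f}^2)")
print(f"95% credible interval: "
      f"[{post_mean - 1.96 * post_sd:.3f}, {post_mean + 1.96 * post_sd:.3f}]")
```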
This book provides an up-to-date series of advanced chapters on applied financial econometric techniques pertaining to the various fields of commodities finance, mathematics & stochastics, international macroeconomics, and financial econometrics. International Financial Markets: Volume I provides a key repository on the current state of knowledge, the latest debates and recent literature on international financial markets. Against the background of the "financialization of commodities" since the 2008 subprime crisis, section one contains recent contributions on commodity and financial markets, pushing the frontiers of applied econometrics techniques. The second section is devoted to exchange rate and current account dynamics in an environment characterized by large global imbalances. Part three examines the latest research in the field of meta-analysis in economics and finance. This book will be useful to students and researchers in applied econometrics, and to academics and students seeking convenient access to an unfamiliar area. It will also be of great interest to established researchers seeking a single repository on the current state of knowledge, current debates and relevant literature.
This is a two-volume collection of major papers which have shaped the development of econometrics. Part I includes articles which together provide an overview of the history of econometrics, Part II addresses the relationship between econometrics and statistics, the articles in Part III constitute early applied studies, and Part IV includes articles concerned with the role and method of econometrics. The work comprises 42 articles dating from 1921 to 1991, and contributors include E.W. Gilboy, W.C. Mitchell, J.J. Spengler, R. Stone, H.O. Wold and S. Wright.
Following the recent publication of the award-winning and much-acclaimed "The New Palgrave Dictionary of Economics," second edition, which brings together Nobel Prize winners and the brightest young scholars to survey the discipline, we are pleased to announce "The New Palgrave Economics Collection." Due to demand from the economics community, these books address key subject areas within the field. Each title is comprised of specially selected articles from the Dictionary and covers a fundamental theme within the discipline. All of the articles have been specifically chosen by the editors of the Dictionary, Steven N. Durlauf and Lawrence E. Blume, and are written by leading practitioners in the field. The Collections provide the reader with easy-to-access information on complex and important subject areas, and allow individual scholars and students to have their own personal reference copy.
This book provides an up-to-date series of advanced chapters on applied financial econometric techniques pertaining to the various fields of commodities finance, mathematics & stochastics, international macroeconomics, and financial econometrics. Financial Mathematics, Volatility and Covariance Modelling: Volume 2 provides a key repository on the current state of knowledge, the latest debates and recent literature on financial mathematics, volatility and covariance modelling. The first section is devoted to mathematical finance, stochastic modelling and control optimization. Chapters explore the recent financial crisis, the increase of uncertainty and volatility, and propose an alternative approach to deal with these issues. The second section covers financial volatility and covariance modelling and explores proposals for dealing with recent developments in financial econometrics. This book will be useful to students and researchers in applied econometrics, and to academics and students seeking convenient access to an unfamiliar area. It will also be of great interest to established researchers seeking a single repository on the current state of knowledge, current debates and relevant literature.
This book provides in-depth analyses of the accounting methods of GDP, statistical calibers, and comparative perspectives on Chinese GDP. Beginning with an exploration of international comparisons of GDP, the book introduces the theoretical backgrounds, data sources, and algorithms of the exchange rate method and the purchasing power parity method, and discusses the advantages, disadvantages, and latest developments in the two methods. This book further elaborates on the reasons for the imperfections of the Chinese GDP data, including limitations of current statistical techniques and the accounting system, as well as the relatively confusing statistics for the service industry. The authors then make suggestions for improvement. Finally, the authors emphasize that evaluation of a country's economy and social development should not be limited solely to GDP, but should focus more on indicators of comprehensive national power, national welfare, and the people's livelihood. This book will be of interest to economists, China-watchers, and scholars of geopolitics.
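A toy arithmetic sketch of the two conversion methods just described; the GDP figure and both rates are round hypothetical numbers, not official statistics.

```python
# Hypothetical round numbers, for the arithmetic only -- not official statistics.
gdp_cny = 100e12   # GDP in yuan
fx_rate = 7.0      # yuan per US dollar at the market exchange rate
ppp_rate = 4.0     # yuan per dollar at purchasing power parity

print(f"exchange-rate method: ${gdp_cny / fx_rate / 1e12:.1f} trillion")
print(f"PPP method:           ${gdp_cny / ppp_rate / 1e12:.1f} trillion")
# The gap between the two figures is exactly why the book weighs the
# advantages and disadvantages of each conversion method.
```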
New statistical methods and future directions of research in time series. A Course in Time Series Analysis demonstrates how to build time series models for univariate and multivariate time series data. It brings together material previously available only in the professional literature and presents a unified view of the most advanced procedures available for time series model building. The authors begin with basic concepts in univariate time series, providing an up-to-date presentation of ARIMA models, including the Kalman filter, outlier analysis, automatic methods for building ARIMA models, and signal extraction. They then move on to advanced topics, focusing on heteroscedastic models, nonlinear time series models, Bayesian time series analysis, nonparametric time series analysis, and neural networks. Multivariate time series coverage includes presentations on vector ARMA models, cointegration, and multivariate linear systems. Special features include:
Requiring no previous knowledge of the subject, A Course in Time Series Analysis is an important reference and a highly useful resource for researchers and practitioners in statistics, economics, business, engineering, and environmental analysis.
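As a minimal illustration of the ARIMA model building the course begins with, here is a short statsmodels sketch on a simulated ARMA(1,1) series; the simulation parameters are arbitrary and the snippet only stands in for the book's much fuller treatment.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(8)
# Simulate an ARMA(1,1) series by filtering white noise.
n, phi, theta = 400, 0.7, 0.3
eps = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + eps[t] + theta * eps[t - 1]

res = ARIMA(y, order=(1, 0, 1)).fit()
print(res.params)        # estimated AR and MA coefficients, near 0.7 and 0.3
print(res.forecast(5))   # out-of-sample forecasts
```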
Measurement in Economics: a Handbook aims to serve as a source, reference, and teaching supplement for quantitative empirical economics, inside and outside the laboratory. Covering an extensive range of fields in economics: econometrics, actuarial science, experimental economics, index theory, national accounts, and economic forecasting, it is the first book that takes measurement in economics as its central focus. It shows how different and sometimes distinct fields share the same kind of measurement problems, and so how the treatment of these problems in one field can serve as guidance in other fields. This volume provides comprehensive and up-to-date surveys of recent developments in economic measurement, written at a level intended for professional use by economists, econometricians, statisticians and social scientists.
You may like...
* Discrepancy of Signed Measures and… by Vladimir V. Andrievskii, Hans-Peter Blatt (Hardcover) - R4,615 / Discovery Miles 46 150
* Arithmetic and Algebraic Circuits by Antonio Lloris Ruiz, Encarnacion Castillo Morales, … (Hardcover) - R5,234 / Discovery Miles 52 340
* Convergence Estimates in Approximation… by Vijay Gupta, Ravi P. Agarwal (Hardcover) - R2,933 / Discovery Miles 29 330
* The Schroedinger-Virasoro Algebra… by Jeremie Unterberger, Claude Roger (Hardcover) - R1,566 / Discovery Miles 15 660
* Variational Theory of Splines by Anatoly Yu. Bezhaev, Vladimir A. Vasilenko (Hardcover) - R3,132 / Discovery Miles 31 320
* Associative and Non-Associative Algebras… by Mercedes Siles Molina, Laiachi El Kaoutit, … (Hardcover) - R2,921 / Discovery Miles 29 210
* Natural Locomotion in Fluids and on… by Stephen Childress, Anette Hosoi, … (Hardcover) - R2,915 / Discovery Miles 29 150
* Operational Research - IO 2018, Aveiro… by Maria Joao Alves, Joao Paulo Almeida, … (Hardcover) - R4,369 / Discovery Miles 43 690