The quantitative modeling of complex systems of interacting risks is a fairly recent development in the financial and insurance industries. Over the past decades, there has been tremendous innovation and development in the actuarial field. In addition to undertaking mortality and longevity risks in traditional life and annuity products, insurers have faced unprecedented financial risks since the introduction of equity-linked insurance in the 1960s. As the industry moves into the new territory of managing many intertwined financial and insurance risks, non-traditional problems and challenges arise, presenting great opportunities for technology development. Today's computational power and technology make it possible for the life insurance industry to develop highly sophisticated models, which were impossible just a decade ago. Nonetheless, as more industrial practices and regulations move towards dependence on stochastic models, the demand for computational power continues to grow. While the industry continues to rely heavily on hardware innovations, trying to make brute force methods faster and more palatable, we are approaching a crossroads about how to proceed. An Introduction to Computational Risk Management of Equity-Linked Insurance provides a resource for students and entry-level professionals not only to understand the fundamentals of industrial modeling practice but also to get a glimpse of software methodologies for modeling and computational efficiency.

Features:
- Provides a comprehensive and self-contained introduction to quantitative risk management of equity-linked insurance, with exercises and programming samples
- Includes a collection of mathematical formulations of risk management problems, presenting opportunities and challenges to applied mathematicians
- Summarizes state-of-the-art computational techniques for risk management professionals
- Bridges the gap between the latest developments in the finance and actuarial literature and the practice of risk management for investment-combined life insurance
- Gives a comprehensive review of both Monte Carlo simulation methods and non-simulation numerical methods

Runhuan Feng is an Associate Professor of Mathematics and the Director of Actuarial Science at the University of Illinois at Urbana-Champaign. He is a Fellow of the Society of Actuaries and a Chartered Enterprise Risk Analyst. He is a Helen Corley Petit Professorial Scholar and the State Farm Companies Foundation Scholar in Actuarial Science. Runhuan received a Ph.D. degree in Actuarial Science from the University of Waterloo, Canada. Prior to joining Illinois, he held a tenure-track position at the University of Wisconsin-Milwaukee, where he was named a Research Fellow. Runhuan has received numerous grants and research contracts from the Actuarial Foundation and the Society of Actuaries. He has published a series of papers in top-tier actuarial and applied probability journals on stochastic analytic approaches in risk theory and quantitative risk management of equity-linked insurance. In recent years, he has dedicated his efforts to developing computational methods for managing market innovations in areas of investment-combined insurance and retirement planning.
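As a flavor of the Monte Carlo methods the book reviews, here is a minimal, self-contained R sketch (not the book's code; the model and all parameter values are illustrative assumptions) that values a guaranteed minimum maturity benefit (GMMB) on an equity-linked policy, assuming the underlying fund follows geometric Brownian motion under the risk-neutral measure:

    # Monte Carlo value of a GMMB; illustrative parameters only.
    set.seed(1)
    n_paths <- 100000  # simulated scenarios
    S0  <- 100         # initial fund value
    G   <- 100         # maturity guarantee
    r   <- 0.03        # risk-free rate
    sig <- 0.20        # fund volatility
    mat <- 10          # policy horizon in years

    # Terminal fund values under geometric Brownian motion
    ST <- S0 * exp((r - sig^2 / 2) * mat + sig * sqrt(mat) * rnorm(n_paths))

    # Insurer's shortfall at maturity: top up the fund to the guarantee
    payoff <- pmax(G - ST, 0)
    value  <- exp(-r * mat) * mean(payoff)
    se     <- exp(-r * mat) * sd(payoff) / sqrt(n_paths)
    c(value = value, std_error = se)

Industrial models layer mortality, lapses, and fees on top of this skeleton, which is where the computational burden the book addresses comes from.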
Sufficient dimension reduction is a rapidly developing research field with wide applications in regression diagnostics, data visualization, machine learning, genomics, image processing, pattern recognition, and medicine, all fields that produce large datasets with many variables. Sufficient Dimension Reduction: Methods and Applications with R introduces the basic theories and the main methodologies, provides practical and easy-to-use algorithms and computer code to implement them, and surveys recent advances at the frontiers of the field.

Features:
- Provides comprehensive coverage of this emerging research field
- Synthesizes a wide variety of dimension reduction methods under a few unifying principles, such as projection in Hilbert spaces, kernel mapping, and von Mises expansion
- Reflects the most recent advances, such as nonlinear sufficient dimension reduction, dimension folding for tensorial data, and sufficient dimension reduction for functional data
- Includes a set of computer codes written in R that are easily implemented by the readers
- Uses real data sets available online to illustrate the usage and power of the described methods

Sufficient dimension reduction has undergone momentous development in recent years, partly due to the increased demand for techniques to process high-dimensional data, a hallmark of our age of Big Data. This book will serve as the perfect entry into the field for beginning researchers and a handy reference for advanced ones. The author, Bing Li, obtained his Ph.D. from the University of Chicago. He is currently a Professor of Statistics at the Pennsylvania State University. His research interests cover sufficient dimension reduction, statistical graphical models, functional data analysis, machine learning, estimating equations and quasi-likelihood, and robust statistics. He is a fellow of the Institute of Mathematical Statistics and the American Statistical Association. He is an Associate Editor for The Annals of Statistics and the Journal of the American Statistical Association.
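For a taste of the methodology (the book supplies its own R code; the snippet below instead uses the CRAN dr package and simulated data), sliced inverse regression finds the few linear combinations of predictors that carry all the regression information:

    # Sliced inverse regression (SIR) with the CRAN 'dr' package.
    # Illustrative sketch on simulated data, not the book's code.
    # install.packages("dr")
    library(dr)

    set.seed(1)
    n <- 500
    d <- data.frame(matrix(rnorm(n * 5), n, 5))     # predictors X1..X5
    d$y <- d$X1 + 0.5 * exp(d$X2) + 0.1 * rnorm(n)  # y depends on two directions

    fit <- dr(y ~ X1 + X2 + X3 + X4 + X5, data = d, method = "sir", nslices = 10)
    summary(fit)         # marginal dimension tests for the central subspace
    fit$evectors[, 1:2]  # estimated basis of the reduction subspace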
A fair question to ask of an advocate of subjective Bayesianism (which the author is) is "how would you model uncertainty?" In this book, the author writes about how he has done it using real problems from the past, and offers additional comments about the context in which he was working.
Proven Methods for Big Data Analysis
As big data has become standard in many application areas, challenges have arisen related to methodology and software development, including how to discover meaningful patterns in the vast amounts of data. Addressing these problems, Applied Biclustering Methods for Big and High-Dimensional Data Using R shows how to apply biclustering methods to find local patterns in a big data matrix. The book presents an overview of data analysis using biclustering methods from a practical point of view. Real case studies in drug discovery, genetics, marketing research, biology, toxicity, and sports illustrate the use of several biclustering methods. References to technical details of the methods are provided for readers who wish to investigate the full theoretical background. All the methods are accompanied by R examples that show how to conduct the analyses. The examples, software, and other materials are available on a supplementary website.
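A minimal sketch of the technique (using the CRAN biclust package and simulated data, not the book's case studies): plant a local pattern in a noise matrix and recover it with the Cheng-Church algorithm:

    # Biclustering with the CRAN 'biclust' package; illustrative only.
    # install.packages("biclust")
    library(biclust)

    set.seed(1)
    x <- matrix(rnorm(100 * 50), 100, 50)  # e.g. 100 genes x 50 conditions
    x[1:10, 1:5] <- x[1:10, 1:5] + 3       # plant a local pattern (a bicluster)

    res <- biclust(x, method = BCCC(), delta = 1.5, alpha = 1, number = 5)
    summary(res)          # how many biclusters were found, and their sizes
    bicluster(x, res, 1)  # rows/columns of the first recovered bicluster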
Estimate and Interpret Results from Ordered Regression Models
Ordered Regression Models: Parallel, Partial, and Non-Parallel Alternatives presents regression models for ordinal outcomes, which are variables that have ordered categories but unknown spacing between the categories. The book provides comprehensive coverage of the three major classes of ordered regression models (cumulative, stage, and adjacent) as well as variations based on the application of the parallel regression assumption. The authors first introduce the three "parallel" ordered regression models before covering unconstrained partial, constrained partial, and nonparallel models. They then review existing tests for the parallel regression assumption, propose new variations of several tests, and discuss important practical concerns related to tests of the parallel regression assumption. The book also describes extensions of ordered regression models, including heterogeneous choice models, multilevel ordered models, and the Bayesian approach to ordered regression models. Some chapters include brief examples using Stata and R. This book offers a conceptual framework for understanding ordered regression models based on the probability of interest and the application of the parallel regression assumption. It demonstrates the usefulness of numerous modeling alternatives, showing you how to select the most appropriate model given the type of ordinal outcome and the restrictiveness of the parallel assumption for each variable.

Web Resource: More detailed examples are available on a supplementary website. The site also contains JAGS, R, and Stata code to estimate the models, along with syntax to reproduce the results.
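A minimal illustration of the simplest member of the family (not the book's own example): the parallel cumulative model, better known as the proportional odds model, fitted with MASS::polr on a dataset that ships with R:

    # Parallel cumulative ("proportional odds") model via MASS::polr.
    # 'housing' ships with MASS: Sat is ordered Low < Medium < High.
    library(MASS)

    fit <- polr(Sat ~ Infl + Type + Cont, data = housing, weights = Freq,
                Hess = TRUE)
    summary(fit)  # one set of slopes plus ordered cutpoints

    # The parallel regression assumption is exactly that these slopes do
    # not vary across the cumulative splits; the book's partial and
    # nonparallel models relax it, and its tests probe whether that is needed.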
Economic evaluation has become an essential component of clinical trial design to show that new treatments and technologies offer value to payers in various healthcare systems. Although many books exist that address the theoretical or practical aspects of cost-effectiveness analysis, this book differentiates itself from the competition by detailing how to apply health economic evaluation techniques in a clinical trial context, from both academic and pharmaceutical/commercial perspectives. It also includes a special chapter for clinical trials in cancer. Design & Analysis of Clinical Trials for Economic Evaluation & Reimbursement is not just about performing cost-effectiveness analyses. It also emphasizes the strategic importance of economic evaluation and offers guidance and advice on the complex factors at play before, during, and after an economic evaluation. Filled with detailed examples, the book bridges the gap between applications of economic evaluation in industry (mainly pharmaceutical) and what students may learn in university courses. It provides readers with access to SAS and Stata code. In addition, Windows-based software for sample size and value of information analysis is available free of charge, making it a valuable resource for students considering a career in this field or for those who simply wish to know more about applying economic evaluation techniques. The book includes coverage of trial design, case report form design, quality of life measures, sample sizes, submissions to regulatory authorities for reimbursement, Markov models, cohort models, and decision trees. Examples and case studies are provided at the end of each chapter. Presenting first-hand insights into how economic evaluations are performed from a drug development perspective, the book supplies readers with the foundation required to succeed in an environment where clinical trials and cost-effectiveness of new treatments are central. It also includes thought-provoking exercises for use in classroom and seminar discussions.
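At its core, a cost-effectiveness comparison reduces to an incremental cost-effectiveness ratio; a toy R calculation with purely invented numbers (not from the book):

    # Incremental cost-effectiveness ratio (ICER); numbers are hypothetical.
    cost_new <- 12000; cost_std <- 9000  # mean cost per patient
    qaly_new <- 6.2;   qaly_std <- 5.9   # mean QALYs per patient

    icer <- (cost_new - cost_std) / (qaly_new - qaly_std)
    icer  # cost per additional QALY, compared against a payer's threshold

Trial-based analyses add uncertainty around both differences, which is where the book's sample size and value-of-information methods come in.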
Since the publication of the first edition over 30 years ago, the literature related to Pareto distributions has flourished to encompass computer-based inference methods. Pareto Distributions, Second Edition provides broad, up-to-date coverage of the Pareto model and its extensions. This edition expands several chapters to accommodate recent results and reflect the increased use of more computer-intensive inference procedures.

New to the Second Edition:
- New material on multivariate inequality
- Recent ways of handling the problems of inference for Pareto models and their generalizations and extensions
- New discussions of bivariate and multivariate income and survival models

This book continues to provide researchers with a useful resource for understanding the statistical aspects of Pareto and Pareto-like distributions. It covers income models and properties of Pareto distributions, measures of inequality for studying income distributions, inference procedures for Pareto distributions, and various multivariate Pareto distributions existing in the literature.
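For a flavor of the inference the book covers, shape estimation in the classical Pareto model has a closed-form maximum likelihood solution; a minimal R sketch (standard textbook material, not the book's code):

    # ML estimation of the Pareto shape with known scale x_m.
    set.seed(1)
    xm    <- 1    # known scale (e.g. minimum income)
    alpha <- 2.5  # true shape
    n     <- 5000

    # Inverse-transform sampling: xm * U^(-1/alpha) is Pareto(alpha, xm)
    x <- xm * runif(n)^(-1 / alpha)

    alpha_hat <- n / sum(log(x / xm))  # closed-form MLE of the shape
    alpha_hat                          # should be close to 2.5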
Over the last thirty years there has been extensive use of continuous time econometric methods in macroeconomic modelling. This monograph presents the first continuous time macroeconometric model of the United Kingdom incorporating stochastic trends. Its development represents a major step forward in continuous time macroeconomic modelling. The book describes the new model in detail; like earlier models, it is designed in such a way as to permit a rigorous mathematical analysis of its steady-state and stability properties, providing a valuable check on the capacity of the model to generate plausible long-run behaviour. The model is estimated using newly developed exact Gaussian estimation methods for continuous time econometric models incorporating unobservable stochastic trends. The book also includes discussion of the application of the model to dynamic analysis and forecasting.
This book provides an introduction to the use of statistical concepts and methods to model and analyze financial data. The ten chapters of the book fall naturally into three sections. Chapters 1 through 3 cover some basic concepts of finance, focusing on the properties of returns on an asset. Chapters 4 through 6 cover aspects of portfolio theory and the methods of estimation needed to implement that theory. The remainder of the book, Chapters 7 through 10, discusses several models for financial data, along with the implications of those models for portfolio theory and for understanding the properties of return data. The audience for the book is students majoring in Statistics and Economics, as well as in quantitative fields such as Mathematics and Engineering. Readers are assumed to have some background in statistical methods along with courses in multivariate calculus and linear algebra.
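A small base-R illustration of the book's most basic objects (not taken from the text): the two standard definitions of an asset's return:

    # Simple vs. log returns from a price series; numbers are illustrative.
    prices <- c(100, 102, 101, 105, 107)

    simple_ret <- diff(prices) / head(prices, -1)  # R_t = (P_t - P_{t-1}) / P_{t-1}
    log_ret    <- diff(log(prices))                # r_t = log(P_t / P_{t-1})

    cbind(simple_ret, log_ret)  # nearly equal for small moves; log returns
                                # add across time, which simplifies modeling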
Thijs ten Raa, author of the acclaimed text The Economics of Input-Output Analysis, now takes the reader to the forefront of the field. This volume collects and unifies his and his co-authors' research papers on national accounting, input-output coefficients, economic theory, dynamic models, stochastic analysis, and performance analysis. The research is driven by the task of analyzing national economies. The final part of the book scrutinizes the emerging Asian economies in the light of international competition.
This essential reference for students and scholars in the input-output research and applications community has been fully revised and updated to reflect important developments in the field. Expanded coverage includes construction and application of multiregional and interregional models, including international models and their application to global economic issues such as climate change and international trade; structural decomposition and path analysis; linkages and key sector identification and hypothetical extraction analysis; the connection of national income and product accounts to input-output accounts; supply and use tables for commodity-by-industry accounting and models; social accounting matrices; non-survey estimation techniques; and energy and environmental applications. Input-Output Analysis is an ideal introduction to the subject for advanced undergraduate and graduate students in many scholarly fields, including economics, regional science, regional economics, city, regional and urban planning, environmental planning, public policy analysis and public management.
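The core computation of the subject fits in a few lines; a textbook sketch with invented numbers (not drawn from the book's tables): total output x solves x = Ax + f, so x = (I - A)^(-1) f:

    # Leontief quantity model for a two-sector economy; numbers illustrative.
    A <- matrix(c(0.2, 0.3,
                  0.4, 0.1), 2, 2, byrow = TRUE)  # technical coefficients
    f <- c(100, 50)                               # final demand by sector

    L <- solve(diag(2) - A)  # Leontief inverse (total requirements matrix)
    x <- L %*% f
    x                        # gross outputs required to meet final demand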
Machine learning (ML) is progressively reshaping the fields of quantitative finance and algorithmic trading. ML tools are increasingly adopted by hedge funds and asset managers, notably for alpha signal generation and stock selection. The technicality of the subject can make it hard for non-specialists to join the bandwagon, as the jargon and coding requirements may seem out of reach. Machine Learning for Factor Investing: R Version bridges this gap. It provides a comprehensive tour of modern ML-based investment strategies that rely on firm characteristics. The book covers a wide array of subjects, ranging from economic rationales to rigorous portfolio back-testing, and encompasses both data processing and model interpretability. Common supervised learning algorithms such as tree models and neural networks are explained in the context of style investing, and the reader can also dig into more complex techniques such as autoencoders for asset returns, Bayesian additive trees, and causal models. All topics are illustrated with self-contained R code samples and snippets that are applied to a large public dataset that contains over 90 predictors. The material, along with the content of the book, is available online so that readers can reproduce and enhance the examples at their convenience. If you have even a basic knowledge of quantitative finance, this combination of theoretical concepts and practical illustrations will help you learn quickly and deepen your financial and technical expertise.
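In the spirit of the book's tree-based chapters, a hedged sketch on simulated data (the book's own examples use its public 90-plus-predictor dataset, which is not reproduced here; the variable names below are hypothetical):

    # Regression tree linking firm characteristics to forward returns.
    library(rpart)

    set.seed(1)
    n <- 2000
    d <- data.frame(size = rnorm(n), value = rnorm(n), mom = rnorm(n))
    d$fwd_ret <- 0.02 * d$value - 0.01 * d$size + 0.03 * d$mom^2 +
                 rnorm(n, sd = 0.10)

    tree <- rpart(fwd_ret ~ size + value + mom, data = d,
                  control = rpart.control(cp = 0.005))
    printcp(tree)  # complexity table: where cross-validation suggests pruning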
Focusing on Bayesian approaches and computations using analytic and simulation-based methods for inference, Time Series: Modeling, Computation, and Inference, Second Edition integrates mainstream approaches for time series modeling with significant recent developments in methodology and applications of time series analysis. It encompasses a graduate-level account of Bayesian time series modeling, analysis and forecasting, a broad range of references to state-of-the-art approaches to univariate and multivariate time series analysis, and makes contact with research frontiers in multivariate time series modeling and forecasting. It presents overviews of several classes of models and related methodology for inference, statistical computation for model fitting and assessment, and forecasting. It explores the connections between time- and frequency-domain approaches and develops various models and analyses using Bayesian formulations and computation, including computations based on Markov chain Monte Carlo (MCMC) and sequential Monte Carlo (SMC) methods. It illustrates the models and methods with examples and case studies from a variety of fields, including signal processing, biomedicine, environmental science, and finance. Along with core models and methods, the book presents state-of-the-art approaches to analysis and forecasting in challenging time series problems. It also demonstrates the growth of time series analysis into new application areas in recent years, and connects with recent and relevant modeling developments and research challenges.

New in the second edition:
- Expanded coverage of core model theory and methodology
- Multiple new examples and exercises
- Detailed development of dynamic factor models
- Updated discussion of, and connections with, recent and current research frontiers
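A minimal Bayesian flavor (not the book's code): with a flat prior and a known noise variance, the posterior of an AR(1) coefficient under the conditional likelihood is Gaussian in closed form:

    # Posterior for an AR(1) coefficient phi: conditional likelihood,
    # flat prior, noise variance assumed known. Illustrative only.
    set.seed(1)
    y  <- as.numeric(arima.sim(list(ar = 0.8), n = 500))
    y0 <- y[-length(y)]; y1 <- y[-1]
    sigma2 <- 1                           # assumed known here

    phi_mean <- sum(y0 * y1) / sum(y0^2)  # posterior mean (equals OLS)
    phi_sd   <- sqrt(sigma2 / sum(y0^2))  # posterior standard deviation
    c(mean = phi_mean, sd = phi_sd)

The book's MCMC and SMC machinery takes over precisely when such closed forms are unavailable.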
Quantitative Modeling of Derivative Securities demonstrates how to take the basic ideas of arbitrage theory and apply them, in a very concrete way, to the design and analysis of financial products. Based primarily (but not exclusively) on the analysis of derivatives, the book emphasizes relative-value and hedging ideas applied to different financial instruments. Using a "financial engineering approach," the theory is developed progressively, focusing on specific aspects of pricing and hedging and on the problems that the technical analyst or trader has to consider in practice. More than just an introductory text: the reader who has mastered the contents of this one book will have bridged the gap separating the novice from the technical and research literature.
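The standard Black-Scholes call formula is the kind of building block the book's relative-value and hedging arguments rest on; a textbook implementation in R (not code from the book):

    # Black-Scholes price of a European call.
    bs_call <- function(S, K, r, sigma, tau) {
      d1 <- (log(S / K) + (r + sigma^2 / 2) * tau) / (sigma * sqrt(tau))
      d2 <- d1 - sigma * sqrt(tau)
      S * pnorm(d1) - K * exp(-r * tau) * pnorm(d2)
    }

    bs_call(S = 100, K = 100, r = 0.03, sigma = 0.2, tau = 1)  # 1y at-the-money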
The state-space approach provides a formal framework in which any result or procedure developed for a basic model can be seamlessly applied to a standard formulation written in state-space form. Moreover, it can accommodate, with reasonable effort, nonstandard situations such as observation errors, aggregation constraints, or missing in-sample values. Exploring the advantages of this approach, State-Space Methods for Time Series Analysis: Theory, Applications and Software presents many computational procedures that can be applied to a previously specified linear model in state-space form. After discussing the formulation of the state-space model, the book illustrates the flexibility of the state-space representation and covers the main state estimation algorithms: filtering and smoothing. It then shows how to compute the Gaussian likelihood for unknown coefficients in the state-space matrices of a given model before introducing subspace methods and their application. It also discusses signal extraction, describes two algorithms to obtain the VARMAX matrices corresponding to any linear state-space model, and addresses several issues relating to the aggregation and disaggregation of time series. The book concludes with a cross-sectional extension to the classical state-space formulation in order to accommodate longitudinal or panel data. Missing data is a common occurrence here, and the book explains the imputation procedures necessary to treat missingness in both exogenous and endogenous variables.

Web Resource: The authors' E4 MATLAB(R) toolbox offers all the computational procedures, administrative and analytical functions, and related materials for time series analysis. This flexible, powerful, and free software tool enables readers to replicate the practical examples in the text and apply the procedures to their own work.
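The filtering step at the heart of the approach is compact enough to sketch by hand. The book's own software is the E4 MATLAB toolbox; for illustration only, here is a local-level Kalman filter in R with invented parameter values:

    # Kalman filter for the local-level model:
    #   y_t = mu_t + e_t,   mu_t = mu_{t-1} + w_t
    kalman_local_level <- function(y, var_e = 1, var_w = 0.1) {
      n <- length(y)
      level <- numeric(n); variance <- numeric(n)
      a <- y[1]; P <- 1e6                      # diffuse-style initialization
      for (t in seq_len(n)) {
        P_pred   <- P + var_w                  # predict
        K        <- P_pred / (P_pred + var_e)  # Kalman gain
        a        <- a + K * (y[t] - a)         # update with the innovation
        P        <- (1 - K) * P_pred
        level[t] <- a; variance[t] <- P
      }
      list(level = level, variance = variance)
    }

    set.seed(1)
    y   <- cumsum(rnorm(200, sd = 0.3)) + rnorm(200)  # noisy random walk
    out <- kalman_local_level(y, var_e = 1, var_w = 0.09)
    plot(y, type = "l"); lines(out$level, lwd = 2)    # filtered level overlay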
There is no book currently available that gives a comprehensive treatment of the design, construction, and use of index numbers. However, there is a pressing need for one in view of the increasing and more sophisticated employment of index numbers in the whole range of applied economics and specifically in discussions of macroeconomic policy. In this book, R. G. D. Allen meets this need in simple and consistent terms and with comprehensive coverage. The text begins with an elementary survey of the index-number problem before turning to more detailed treatments of the theory and practice of index numbers. The binary case, in which one time period is compared with another, is developed first and illustrated with numerous examples. This prepares the ground for the central part of the text on runs of index numbers. Particular attention is paid both to fixed-weighted and to chain forms as used in a wide range of published index numbers taken mainly from British official sources. The work then deals with some further problems in the construction of index numbers, problems which are both troublesome and largely unresolved. These include the use of sampling techniques in index-number design and the theoretical and practical treatment of quality changes. It also devotes attention to a number of detailed and specific applications of index-number techniques to problems ranging from national-income accounting, through the measurement of inequality of incomes and international comparisons of real incomes, to the use of index numbers of stock-market prices. Aimed primarily at students of economics, whatever their age and range of interests, this work will also be of use to those who handle index numbers professionally. R. G. D. Allen (1906-1983) was Professor Emeritus at the University of London. He was also once President of the Royal Statistical Society and Treasurer of the British Academy, where he was a Fellow. He is the author of "Basic Mathematics," "Mathematical Analysis for Economists," "Mathematical Economics," and "Macroeconomic Theory."
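The binary comparison at the start of the book reduces to a few weighted sums; a worked R example with invented prices and quantities:

    # Laspeyres (base-weighted) and Paasche (current-weighted) price indices.
    p0 <- c(2.0, 5.0, 1.0); q0 <- c(10, 4, 20)  # base-period prices, quantities
    p1 <- c(2.4, 5.5, 0.9); q1 <- c( 9, 5, 25)  # current-period prices, quantities

    laspeyres <- 100 * sum(p1 * q0) / sum(p0 * q0)
    paasche   <- 100 * sum(p1 * q1) / sum(p0 * q1)
    fisher    <- sqrt(laspeyres * paasche)      # geometric mean of the two
    c(laspeyres = laspeyres, paasche = paasche, fisher = fisher)

Chain forms, which the central part of the text develops, link such binary comparisons period by period.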
The first part of this book discusses institutions and mechanisms of algorithmic trading, market microstructure, high-frequency data and stylized facts, time and event aggregation, order book dynamics, trading strategies and algorithms, transaction costs, market impact and execution strategies, and risk analysis and management. The second part covers market impact models, network models, multi-asset trading, machine learning techniques, and nonlinear filtering. The third part discusses electronic market making, liquidity, systemic risk, and recent developments and debates on the subject.
First published in 1995. In the current, increasingly global economy, investors require quick access to a wide range of financial and investment-related statistics to assist them in better understanding the macroeconomic environment in which their investments will operate. The International Financial Statistics Locator eliminates the need to search through a number of sources to identify those that contain much of this statistical information. It is intended for use by librarians, students, individual investors, and the business community, and provides access to twenty-two resources, print and electronic, that contain the current and historical financial and economic statistics investors need to appreciate and profit from evolving and established international markets.
For courses in Business Statistics. A classic text for accuracy and statistical precision, Statistics for Business and Economics enables students to conduct serious analysis of applied problems rather than running simple "canned" applications. This text is also at a mathematically higher level than most business statistics texts and provides students with the knowledge they need to become stronger analysts for future managerial positions. In this regard, it emphasises an understanding of the assumptions that are necessary for professional analysis. In particular, it has greatly expanded the number of applications that utilise data from applied policy and research settings. The 9th Edition of this book has been revised and updated to provide students with improved problem contexts for learning how statistical methods can improve their analysis and understanding of business and economics. This revision recognises the globalisation of statistical study and, in particular, the global market for this book.
Collecting and analyzing data on unemployment, inflation, and inequality help describe the complex world around us. When published by the government, such data are called official statistics. They are reported by the media, used by politicians to lend weight to their arguments, and cited by economic commentators opining about the state of society. Despite such widescale use, explanations of how these measures are constructed are seldom provided for a non-technical reader. Measuring Society is a short, accessible guide to six topics: jobs, house prices, inequality, prices for goods and services, poverty, and deprivation. Each relates to concepts we use on a personal level to form an understanding of the society in which we live: we need a job, a place to live, and food to eat. Using data from the United States, the book answers three basic questions: why, how, and for whom these statistics have been constructed. Discussion of the historical background adds context and flavor. This book provides the reader with a good grasp of these measures. Chaitra H. Nagaraja is an Associate Professor of Statistics at the Gabelli School of Business at Fordham University in New York. Her research interests include house price indices and inequality measurement. Prior to Fordham, Dr. Nagaraja was a researcher at the U.S. Census Bureau. While there, she worked on projects relating to the American Community Survey.
Originally published in 1970, with a second edition in 1989. Empirical Bayes methods use some of the apparatus of the pure Bayes approach, but an actual prior distribution is assumed to generate the data sequence; this prior can be estimated from the data, producing empirical Bayes estimates or decision rules. In this second edition, details are provided of the derivation and the performance of empirical Bayes rules for a variety of special models. Attention is given to the problem of assessing the goodness of an empirical Bayes estimator for a given set of prior data. Chapters also focus on alternatives to the empirical Bayes approach and on actual applications of empirical Bayes methods.
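The empirical Bayes move is easy to show in miniature (a sketch of the idea, not the book's derivations): estimate the prior variance from the data, then plug it into the Bayes rule for normal means:

    # Normal means: theta_i ~ N(0, tau^2), x_i | theta_i ~ N(theta_i, 1).
    set.seed(1)
    tau2  <- 4
    theta <- rnorm(50, 0, sqrt(tau2))
    x     <- theta + rnorm(50)

    tau2_hat <- max(mean(x^2) - 1, 0)      # marginally, E[x^2] = tau^2 + 1
    shrink   <- tau2_hat / (tau2_hat + 1)  # estimated Bayes shrinkage factor
    theta_eb <- shrink * x                 # empirical Bayes estimates

    mean((theta_eb - theta)^2)  # typically smaller than...
    mean((x - theta)^2)         # ...the error of the raw observations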
Originally published in 1984. This book brings together a reasonably complete set of results regarding the use of Constraint Item estimation procedures under the assumption of accurate specification. The analysis covers the case in which all explanatory variables are non-stochastic, as well as the case of identified simultaneous equations, with error terms both known and unknown. Particular emphasis is given to the derivation of criteria for choosing the Constraint Item. Part 1 looks at the best CI estimators, and Part 2 examines equation-by-equation estimation, considering forecasting accuracy.
Originally published in 1960 and 1966. This is an elementary introduction to the sources of economic statistics and their uses in answering economic questions. No mathematical knowledge is assumed, and no mathematical symbols are used. The book shows, by asking and answering a number of typical questions of applied economics, what the most useful statistics are, where they are found, and how they are to be interpreted and presented. The reader is introduced to the major British, European and American official sources, to the social accounts, to index numbers and averaging, and to elementary aids to inspection such as moving averages and scatter diagrams.
Originally published in 1929. This balanced combination of fieldwork, statistical measurement, and realistic applications presents a synthesis of economics and political science, conceiving of an organic relationship between the two sciences that involves functional analysis, institutional interpretation, and a more workmanlike approach to questions of organization such as the division of labour and the control of industry. The treatise applies the test of fact, through statistical analysis, to economic and political theories, favouring a quantitative and institutional approach to solving social and industrial problems. It constructs a framework of concepts, combining both economic and political theory, to systematically produce an original statement in general terms of the principles and methods for statistical fieldwork. The separation into Parts allows selective reading: the methods of statistical measurement; the principles and fallacies of applying these measures to economic and political fields; and the resultant construction of a statistical economics and politics. Basic statistical concepts are described for application, each method of statistical measurement is illustrated with instances relevant to the economic and political theory discussed, and a statistical glossary is included.
This book contains the most complete set of estimates of Chinese national income and its components based on the System of National Accounts. It points out some fundamental issues concerning the estimation of China's national income and is intended for students in the field of China studies around the world.
You may like...
Operations and Supply Chain Management - James Evans, David Collier (Hardcover)
Operations And Supply Chain Management - David Collier, James Evans (Hardcover)
Statistics for Business & Economics… - James McClave, P Benson, … (Paperback, R2,304)
Statistics for Business and Economics… - Paul Newbold, William Carlson, … (R2,178)