Extreme Value Modeling and Risk Analysis: Methods and Applications presents a broad overview of statistical modeling of extreme events along with the most recent methodologies and various applications. The book brings together background material and advanced topics, eliminating the need to sort through the massive literature on the subject.

After reviewing univariate extreme value analysis and multivariate extremes, the book explains univariate extreme value mixture modeling, threshold selection in extreme value analysis, and threshold modeling of non-stationary extremes. It presents new results for block-maxima of vine copulas, develops time series of extremes with applications from climatology, describes max-autoregressive and moving maxima models for extremes, and discusses spatial extremes and max-stable processes. The book then covers simulation and conditional simulation of max-stable processes; inference methodologies, such as composite likelihood, Bayesian inference, and approximate Bayesian computation; and inference about extreme quantiles and extreme dependence. It also explores novel applications of extreme value modeling, including financial investments, insurance and financial risk management, weather and climate disasters, clinical trials, and sports statistics.

Risk analyses related to extreme events require the combined expertise of statisticians and domain experts in climatology, hydrology, finance, insurance, sports, and other fields. This book connects statistical and mathematical research with critical decision and risk assessment/management applications to stimulate more collaboration between statisticians and domain specialists.
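As a flavor of the univariate block-maxima analysis the book opens with, here is a minimal sketch in Python (an illustration, not code from the book): fitting a generalized extreme value (GEV) distribution to simulated annual maxima with SciPy and reading off a 100-year return level. All data and parameter values are invented.

```python
# Minimal block-maxima sketch: fit a GEV to simulated annual maxima
# and compute a 100-year return level. All numbers are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical data: 50 years of annual maxima of some daily series.
annual_max = rng.gumbel(loc=30.0, scale=5.0, size=50)

# Note: scipy's genextreme shape parameter c corresponds to -xi in the
# usual GEV parameterization.
c, loc, scale = stats.genextreme.fit(annual_max)

# 100-year return level = the 0.99 quantile of the fitted distribution.
return_level_100 = stats.genextreme.ppf(0.99, c, loc=loc, scale=scale)
print(f"100-year return level: {return_level_100:.2f}")
```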
Economic evaluation has become an essential component of clinical trial design to show that new treatments and technologies offer value to payers in various healthcare systems. Although many books exist that address the theoretical or practical aspects of cost-effectiveness analysis, this book differentiates itself from the competition by detailing how to apply health economic evaluation techniques in a clinical trial context, from both academic and pharmaceutical/commercial perspectives. It also includes a special chapter on clinical trials in cancer.

Design & Analysis of Clinical Trials for Economic Evaluation & Reimbursement is not just about performing cost-effectiveness analyses. It also emphasizes the strategic importance of economic evaluation and offers guidance and advice on the complex factors at play before, during, and after an economic evaluation. Filled with detailed examples, the book bridges the gap between applications of economic evaluation in industry (mainly pharmaceutical) and what students may learn in university courses. It provides readers with access to SAS and STATA code. In addition, Windows-based software for sample size and value of information analysis is available free of charge, making it a valuable resource for students considering a career in this field or for those who simply wish to know more about applying economic evaluation techniques.

The book includes coverage of trial design, case report form design, quality of life measures, sample sizes, submissions to regulatory authorities for reimbursement, Markov models, cohort models, and decision trees. Examples and case studies are provided at the end of each chapter. Presenting first-hand insights into how economic evaluations are performed from a drug development perspective, the book supplies readers with the foundation required to succeed in an environment where clinical trials and the cost-effectiveness of new treatments are central. It also includes thought-provoking exercises for use in classroom and seminar discussions.
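At the heart of such evaluations is the incremental cost-effectiveness ratio (ICER): the extra cost per extra unit of health gained. A minimal sketch in Python (the book itself supplies SAS and STATA code; all figures below are invented):

```python
# ICER sketch: incremental cost per QALY gained, compared against a
# willingness-to-pay threshold. All figures are invented.
cost_new, cost_control = 12_500.0, 9_800.0   # mean cost per patient
qaly_new, qaly_control = 1.45, 1.30          # mean QALYs per patient

delta_cost = cost_new - cost_control
delta_qaly = qaly_new - qaly_control
icer = delta_cost / delta_qaly               # incremental cost per QALY
print(f"ICER: {icer:,.0f} per QALY gained")

# Decision rule against an assumed willingness-to-pay of 20,000/QALY.
threshold = 20_000.0
print("cost-effective" if icer < threshold else "not cost-effective")
```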
A fair question to ask of an advocate of subjective Bayesianism (which the author is) is "how would you model uncertainty?" In this book, the author writes about how he has done it using real problems from the past, and offers additional comments about the context in which he was working.
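For readers unfamiliar with the mechanics, here is a minimal sketch of subjective Bayesian updating (an illustration, not an example from the book): a Beta prior, chosen to encode the analyst's beliefs about an unknown proportion, updated by hypothetical data.

```python
# Beta-Binomial updating: the prior parameters encode subjective belief
# that the proportion is probably small. All numbers are invented.
from scipy import stats

a_prior, b_prior = 2.0, 8.0          # subjective prior
successes, failures = 7, 13          # hypothetical observed data

# Conjugate update: add successes/failures to the prior parameters.
posterior = stats.beta(a_prior + successes, b_prior + failures)
lo, hi = posterior.interval(0.95)
print(f"posterior mean: {posterior.mean():.3f}")
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")
```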
Proven Methods for Big Data Analysis
As big data has become standard in many application areas, challenges have arisen related to methodology and software development, including how to discover meaningful patterns in the vast amounts of data. Addressing these problems, Applied Biclustering Methods for Big and High-Dimensional Data Using R shows how to apply biclustering methods to find local patterns in a big data matrix. The book presents an overview of data analysis using biclustering methods from a practical point of view. Real case studies in drug discovery, genetics, marketing research, biology, toxicity, and sports illustrate the use of several biclustering methods. References to technical details of the methods are provided for readers who wish to investigate the full theoretical background. All the methods are accompanied by R examples that show how to conduct the analyses. The examples, software, and other materials are available on a supplementary website.
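The book's examples are in R; as a rough Python analogue (an assumption, not the book's own code), scikit-learn's SpectralBiclustering recovers the same kind of local row/column patterns, here from a simulated checkerboard matrix:

```python
# Biclustering sketch: recover planted row/column clusters from a
# simulated checkerboard data matrix with scikit-learn.
from sklearn.cluster import SpectralBiclustering
from sklearn.datasets import make_checkerboard

data, rows, cols = make_checkerboard(
    shape=(300, 40), n_clusters=(4, 3), noise=5.0, random_state=0
)
model = SpectralBiclustering(n_clusters=(4, 3), random_state=0)
model.fit(data)

# Rows and columns assigned to the first bicluster:
r, c = model.get_indices(0)
print(f"bicluster 0: {len(r)} rows x {len(c)} columns")
```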
This book explores how econometric modelling can be used to provide valuable insight into international housing markets. Initially describing the role of econometric modelling in real estate market research and how it has developed in recent years, the book goes on to compare and contrast the impact of various macroeconomic factors on developed and developing housing markets. Explaining the similarities and differences in the impact of financial crises on housing markets around the world, the author's econometric analysis of housing markets across the world provides a broad and nuanced perspective on the impact of both international financial markets and the local macroeconomy on housing markets. With discussion of countries such as China, Germany, the UK, the US and South Africa, the lessons learned will be of interest to scholars of real estate economics around the world.
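A minimal sketch of the kind of regression such analyses build on (not the author's model; variables, coefficients, and data are all invented): house price growth regressed on macroeconomic factors with statsmodels.

```python
# House-price regression sketch on simulated quarterly data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 120                                           # e.g. 30 years of quarters
rates = rng.normal(0.04, 0.01, n)                 # interest rate
income_growth = rng.normal(0.02, 0.01, n)
credit_growth = rng.normal(0.05, 0.02, n)
hp_growth = (0.01 - 0.8 * rates + 0.6 * income_growth
             + 0.3 * credit_growth + rng.normal(0, 0.01, n))

X = sm.add_constant(np.column_stack([rates, income_growth, credit_growth]))
fit = sm.OLS(hp_growth, X).fit()
print(fit.summary(xname=["const", "rates", "income_gr", "credit_gr"]))
```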
This book contains the most complete set of estimates of Chinese national income and its components based on the System of National Accounts. It points out some fundamental issues concerning the estimation of China's national income and is intended for students of China studies around the world.
Pathwise Estimation and Inference for Diffusion Market Models discusses contemporary techniques for inferring, from options and bond prices, market participants' aggregate view on important financial parameters such as implied volatility, discount rate, and future interest rate, and the uncertainty thereof. The focus is on pathwise inference methods that are applicable to a single path of the observed prices and do not require the observation of an ensemble of such paths.

This book is pitched at the level of senior undergraduate students undertaking research in their honours year, and postgraduate candidates undertaking Master's or PhD degrees by research. From a research perspective, it reaches out to academic researchers from backgrounds as diverse as mathematics and probability, econometrics and statistics, and computational mathematics and optimization, whose interests lie in the analysis and modelling of financial market data from a multidisciplinary approach. Additionally, the book is aimed at financial market practitioners in capital-market-facing businesses who seek to keep abreast of, and draw inspiration from, novel approaches in market data analysis.

The first two chapters of the book contain introductory material on stochastic analysis and the classical diffusion stock market models. The remaining chapters discuss more specialized stock and bond market models and special methods of pathwise inference for market parameters in different models. The final chapter describes applications of numerical methods of inference of bond market parameters to forecasting of the short rate.

Nikolai Dokuchaev is an associate professor in Mathematics and Statistics at Curtin University. His research interests include mathematical and statistical finance, stochastic analysis, PDEs, control, and signal processing. Lin Yee Hin is a practitioner in the capital-market-facing industry. His research interests include econometrics, non-parametric regression, and scientific computing.
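The simplest instance of pathwise inference (a generic textbook estimator, not a method specific to this book) is recovering the volatility of a geometric Brownian motion from a single simulated price path via realized quadratic variation:

```python
# Pathwise volatility sketch: estimate sigma in dS = mu*S*dt + sigma*S*dW
# from one simulated path of daily log returns.
import numpy as np

rng = np.random.default_rng(1)
sigma_true, mu, dt, n = 0.2, 0.05, 1 / 252, 252      # one year, daily
log_returns = ((mu - 0.5 * sigma_true**2) * dt
               + sigma_true * np.sqrt(dt) * rng.standard_normal(n))

# Realized-volatility estimator: root of summed squared returns per unit time.
sigma_hat = np.sqrt(np.sum(log_returns**2) / (n * dt))
print(f"true sigma = {sigma_true}, estimated = {sigma_hat:.3f}")
```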
Sufficient dimension reduction is a rapidly developing research field that has wide applications in regression diagnostics, data visualization, machine learning, genomics, image processing, pattern recognition, and medicine, because these are fields that produce large datasets with many variables. Sufficient Dimension Reduction: Methods and Applications with R introduces the basic theories and the main methodologies, provides practical and easy-to-use algorithms and computer code to implement these methodologies, and surveys the recent advances at the frontiers of this field.

Features:
* Provides comprehensive coverage of this emerging research field.
* Synthesizes a wide variety of dimension reduction methods under a few unifying principles such as projection in Hilbert spaces, kernel mapping, and von Mises expansion.
* Reflects the most recent advances, such as nonlinear sufficient dimension reduction, dimension folding for tensorial data, and sufficient dimension reduction for functional data.
* Includes a set of computer codes written in R that are easily implemented by readers.
* Uses real data sets available online to illustrate the usage and power of the described methods.

Sufficient dimension reduction has undergone momentous development in recent years, partly due to the increased demand for techniques to process high-dimensional data, a hallmark of our age of Big Data. This book will serve as the perfect entry into the field for beginning researchers or a handy reference for advanced ones.

The author, Bing Li, obtained his Ph.D. from the University of Chicago. He is currently a Professor of Statistics at the Pennsylvania State University. His research interests cover sufficient dimension reduction, statistical graphical models, functional data analysis, machine learning, estimating equations and quasilikelihood, and robust statistics. He is a fellow of the Institute of Mathematical Statistics and the American Statistical Association. He is an Associate Editor for The Annals of Statistics and the Journal of the American Statistical Association.
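One of the field's classical methods, sliced inverse regression (SIR), fits in a few lines of NumPy. A minimal sketch (the book's own code is in R; the data here are simulated so the response depends on a single linear direction of the predictors):

```python
# SIR sketch: recover the single direction beta through which y depends on x.
import numpy as np

rng = np.random.default_rng(2)
n, p, n_slices = 1000, 6, 10
x = rng.standard_normal((n, p))
beta = np.array([1.0, -1.0, 0.0, 0.0, 0.0, 0.0])
y = (x @ beta) ** 3 + 0.5 * rng.standard_normal(n)    # one-direction model

# Standardize x so that its sample covariance is the identity.
L = np.linalg.cholesky(np.cov(x.T))
z = (x - x.mean(axis=0)) @ np.linalg.inv(L).T

# Average the standardized predictors within slices of sorted y.
slices = np.array_split(np.argsort(y), n_slices)
slice_means = np.array([z[s].mean(axis=0) for s in slices])

# The top eigenvector of the between-slice covariance spans the direction.
_, eigvecs = np.linalg.eigh(slice_means.T @ slice_means / n_slices)
direction = np.linalg.inv(L).T @ eigvecs[:, -1]       # back to original scale
print("estimated direction (up to sign/scale):",
      np.round(direction / np.abs(direction).max(), 2))
```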
Originally published in 1951, this volume reprints the classic work written by one of the leading global econometricians.
Practically all donor countries that give aid claim to do so on the basis of the recipient's good governance, but do these claims have a real impact on the allocation of aid? Are democratic, human-rights-respecting countries with low levels of corruption and military expenditure actually likely to receive more aid than other countries?
In recent years econometricians have examined the problems of diagnostic testing, specification testing, semiparametric estimation, and model selection. In addition, researchers have considered whether to use model testing and model selection procedures to decide which models best fit a particular dataset. This book explores both issues with applications to various regression models, including arbitrage pricing theory models. It is ideal as a reference for postgraduate students in the statistical sciences, academic researchers, and policy makers seeking to understand the current status of model building and testing techniques.
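A minimal sketch of the model selection question in practice (a generic illustration, not taken from the book): comparing nested regressions by information criteria with statsmodels, on simulated data where one candidate regressor is irrelevant.

```python
# Model selection sketch: AIC/BIC across nested OLS models; the criteria
# should favor the model without the irrelevant regressor x3.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 200
x1, x2, x3 = rng.standard_normal((3, n))
y = 1.0 + 2.0 * x1 - 1.0 * x2 + rng.standard_normal(n)  # x3 is irrelevant

candidates = {
    "y ~ x1":           np.column_stack([x1]),
    "y ~ x1 + x2":      np.column_stack([x1, x2]),
    "y ~ x1 + x2 + x3": np.column_stack([x1, x2, x3]),
}
for name, X in candidates.items():
    fit = sm.OLS(y, sm.add_constant(X)).fit()
    print(f"{name:18s} AIC={fit.aic:8.1f}  BIC={fit.bic:8.1f}")
```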
Now in its fourth edition, this comprehensive introduction to fundamental panel data methodologies provides insights into what is most essential in the panel literature. A capstone to the forty-year career of a pioneer of panel data analysis, this new edition's primary contribution is its coverage of advancements in panel data analysis, a statistical method widely used to analyze two- and higher-dimensional panel data. The topics discussed in early editions have been reorganized and streamlined to comprehensively introduce panel econometric methodologies useful for identifying causal relationships among variables, supported by interdisciplinary examples and case studies. This book, to be featured in Cambridge's Econometric Society Monographs series, has been the leader in the field since the first edition. It is essential reading for researchers, practitioners and graduate students interested in the analysis of microeconomic behavior.
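A minimal sketch of one workhorse panel method (a generic illustration, not the author's code): the fixed-effects (within) estimator, computed directly on simulated entity-period data by demeaning within entities.

```python
# Within estimator sketch: demean x and y by entity, then regress.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n_entities, n_periods, beta = 100, 8, 1.5
df = pd.DataFrame({
    "entity": np.repeat(np.arange(n_entities), n_periods),
    "x": rng.standard_normal(n_entities * n_periods),
})
alpha = rng.standard_normal(n_entities)            # entity fixed effects
df["y"] = alpha[df["entity"]] + beta * df["x"] + rng.standard_normal(len(df))

# Within transformation removes the fixed effects.
xd = df["x"] - df.groupby("entity")["x"].transform("mean")
yd = df["y"] - df.groupby("entity")["y"].transform("mean")
beta_hat = (xd * yd).sum() / (xd**2).sum()
print(f"true beta = {beta}, within estimate = {beta_hat:.3f}")
```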
Today, most money is credit money, created by commercial banks. While credit can finance innovation, excessive credit can lead to boom/bust cycles, such as the recent financial crisis. This highlights how the organization of our monetary system is crucial to stability. One way to achieve such stability is by separating the unit of account from the medium of exchange, and in pre-modern Europe such a separation existed. This new volume examines this idea of monetary separation and the history of monetary arrangements in the North and Baltic Seas region, from the Hanseatic League onwards.

This book provides a theoretical analysis of four historical cases in the Baltic and North Seas region, with a view to examining the evolution of monetary arrangements from a new monetary economics perspective. Since the objective exchange value of money (its purchasing power) reflects subjective individual valuations of commodities, the author assesses these historical cases by means of exchange rates. Using theories from new monetary economics, the book explores how the units of account and their media of exchange evolved as social conventions, and offers new insight into the separation between the two. Through this exploration, it puts forward that money is a social institution, a clearing device for the settlement of accounts, and so the value of money, or a separate unit of account, ultimately results from the size of its network of users.

The History of Money and Monetary Arrangements offers a highly original new insight into monetary arrangements as an evolutionary process. It will be of great interest to an international audience of scholars and students, including those with an interest in economic history, evolutionary economics and new monetary economics.
Machine learning (ML) is progressively reshaping the fields of quantitative finance and algorithmic trading. ML tools are increasingly adopted by hedge funds and asset managers, notably for alpha signal generation and stock selection. The technicality of the subject can make it hard for non-specialists to join the bandwagon, as the jargon and coding requirements may seem out of reach. Machine Learning for Factor Investing: R Version bridges this gap. It provides a comprehensive tour of modern ML-based investment strategies that rely on firm characteristics. The book covers a wide array of subjects, ranging from economic rationales to rigorous portfolio back-testing, and encompasses both data processing and model interpretability. Common supervised learning algorithms such as tree models and neural networks are explained in the context of style investing, and the reader can also dig into more complex techniques like autoencoders for asset returns, Bayesian additive trees, and causal models. All topics are illustrated with self-contained R code samples and snippets that are applied to a large public dataset that contains over 90 predictors. The material, along with the content of the book, is available online so that readers can reproduce and enhance the examples at their convenience. If you have even a basic knowledge of quantitative finance, this combination of theoretical concepts and practical illustrations will help you learn quickly and deepen your financial and technical expertise.
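To give a feel for the book's theme, translated from R to Python (a rough analogue, not the book's code, on invented data): predict stock returns from firm characteristics with a gradient-boosted tree model.

```python
# Factor-investing sketch: returns load weakly on two characteristics;
# a boosted tree model is scored out of sample.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n_obs, n_features = 5000, 10                   # stock-month rows, firm traits
X = rng.standard_normal((n_obs, n_features))   # e.g. size, value, momentum...
y = 0.02 * X[:, 0] - 0.015 * X[:, 1] + 0.05 * rng.standard_normal(n_obs)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(max_depth=3, n_estimators=200).fit(X_tr, y_tr)
print(f"out-of-sample R^2: {model.score(X_te, y_te):.3f}")
```

As in the book's setting, the signal-to-noise ratio is deliberately low: even a well-tuned model explains only a small share of return variance.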
This book focuses on quantitative survey methodology and data collection and cleaning methods. Providing starting tools for using and analyzing a data file once a survey has been conducted, it addresses fields as diverse as advanced weighting, editing, and imputation, which are not well covered in comparable survey books. Moreover, it presents numerous empirical examples from the author's extensive research experience, particularly real data sets from multinational surveys.
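A minimal sketch of one weighting idea in this area (a generic illustration with invented figures, not an example from the book): post-stratification, reweighting respondents so sample shares match known population shares.

```python
# Post-stratification sketch: urban respondents are oversampled, so they
# receive weights below 1 and rural respondents weights above 1.
import numpy as np
import pandas as pd

sample = pd.DataFrame({
    "stratum": ["urban"] * 70 + ["rural"] * 30,
    "income": np.r_[np.full(70, 45_000.0), np.full(30, 30_000.0)],
})
population_share = {"urban": 0.55, "rural": 0.45}   # known from a census

sample_share = sample["stratum"].value_counts(normalize=True)
sample["weight"] = sample["stratum"].map(
    lambda s: population_share[s] / sample_share[s]
)
unweighted = sample["income"].mean()
weighted = np.average(sample["income"], weights=sample["weight"])
print(f"unweighted mean: {unweighted:,.0f}, weighted mean: {weighted:,.0f}")
```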
This concise textbook presents students with all they need for advancing in mathematical economics. Detailed yet student-friendly, Vohra's book contains chapters on, amongst others:
* Feasibility
Higher-level undergraduates as well as postgraduate students in mathematical economics will find this book extremely useful in their development as economists.
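Feasibility, the chapter flagged above, can be illustrated with a tiny linear program (an illustration, not taken from the book): checking whether a system of linear inequalities admits a solution by optimizing a zero objective with SciPy.

```python
# Feasibility sketch: does A x <= b, x >= 0 have a solution? A zero
# objective turns the LP solver into a feasibility checker. Constraints
# are invented.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([10.0, 15.0])
res = linprog(c=np.zeros(2), A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
print("feasible:", res.success, "| feasible point:", res.x)
```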
Quantitative Modeling of Derivative Securities demonstrates how to take the basic ideas of arbitrage theory and apply them, in a very concrete way, to the design and analysis of financial products. Based primarily (but not exclusively) on the analysis of derivatives, the book emphasizes relative-value and hedging ideas applied to different financial instruments. Using a "financial engineering approach," the theory is developed progressively, focusing on specific aspects of pricing and hedging and on problems that the technical analyst or trader has to consider in practice. This is more than just an introductory text: the reader who has mastered its contents will have bridged the gap separating the novice from the technical and research literature.
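To make the "financial engineering approach" concrete, here is a minimal sketch of the canonical building block such analyses start from: the Black-Scholes price and hedge ratio of a European call (a standard textbook formula, not code from the book; parameter values are invented).

```python
# Black-Scholes sketch: price and delta of a European call.
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """European call price and hedge ratio (delta) under Black-Scholes."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    price = S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)
    return price, norm.cdf(d1)

price, delta = bs_call(S=100, K=105, T=0.5, r=0.02, sigma=0.25)
print(f"call price = {price:.2f}, delta = {delta:.3f}")
```

The delta is the quantity of the underlying to hold against a short call, which is exactly the kind of hedging calculation the book builds on.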
Applied econometrics, known to aficionados as 'metrics, is the original data science. 'Metrics encompasses the statistical methods economists use to untangle cause and effect in human affairs. Through accessible discussion and with a dose of kung fu-themed humor, Mastering 'Metrics presents the essential tools of econometric research and demonstrates why econometrics is exciting and useful. The five most valuable econometric methods, or what the authors call the Furious Five (random assignment, regression, instrumental variables, regression discontinuity designs, and differences-in-differences), are illustrated through well-crafted real-world examples (vetted for awesomeness by Kung Fu Panda's Jade Palace). Does health insurance make you healthier? Randomized experiments provide answers. Are expensive private colleges and selective public high schools better than more pedestrian institutions? Regression analysis and a regression discontinuity design reveal the surprising truth. When private banks teeter, and depositors take their money and run, should central banks step in to save them? Differences-in-differences analysis of a Depression-era banking crisis offers a response. Could arresting O. J. Simpson have saved his ex-wife's life? Instrumental variables methods instruct law enforcement authorities in how best to respond to domestic abuse. Wielding econometric tools with skill and confidence, Mastering 'Metrics uses data and statistics to illuminate the path from cause to effect.
* Shows why econometrics is important
* Explains econometric research through humorous and accessible discussion
* Outlines empirical methods central to modern econometric practice
* Works through interesting and relevant real-world examples
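As a taste of one of the Furious Five, here is a minimal sketch (not from the book; all four group means are invented) of a differences-in-differences estimate computed from treated/control means before and after a policy change:

```python
# Differences-in-differences sketch: the control group's pre/post change
# stands in for the treated group's counterfactual trend.
means = {
    ("treated", "pre"): 10.0, ("treated", "post"): 14.0,
    ("control", "pre"): 9.0,  ("control", "post"): 11.0,
}
did = (
    (means[("treated", "post")] - means[("treated", "pre")])
    - (means[("control", "post")] - means[("control", "pre")])
)
print(f"difference-in-differences estimate: {did:+.1f}")
# Treated change (+4.0) minus control change (+2.0) leaves +2.0 as the
# estimated treatment effect, under the parallel-trends assumption.
```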
The book examines the development and the dynamics of the personal distribution of income in Germany, Great Britain, Sweden, the United States, and some other OECD countries. Starting with the distribution of labour income, the scope is then expanded to include all monetary incomes of private households and to adjust for household size by an equivalence scale. Some authors analyse one country in detail by decomposing aggregate inequality measures, while other authors focus on direct comparisons of some features of the income distribution in Germany with those in Great Britain or the United States. The results suggest dominant influences of unemployment as well as of tax and transfer policies under the different welfare regimes, but also show that our knowledge about distributional processes is still limited.
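Two steps these chapters rely on can be sketched briefly (a generic illustration with invented incomes, not data from the book): adjusting household income by an equivalence scale, here the common square-root scale, and summarizing inequality with a Gini coefficient.

```python
# Equivalence-scale and Gini sketch on invented household data.
import numpy as np

household_income = np.array([20_000, 35_000, 35_000, 60_000, 90_000.0])
household_size = np.array([1, 2, 4, 3, 2])

# Square-root equivalence scale: divide income by sqrt(household size).
equivalized = household_income / np.sqrt(household_size)

def gini(x):
    """Gini coefficient via the sorted-rank formula."""
    x = np.sort(x)
    n = len(x)
    return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

print(f"Gini, unadjusted:  {gini(household_income):.3f}")
print(f"Gini, equivalized: {gini(equivalized):.3f}")
```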
Collecting and analyzing data on unemployment, inflation, and inequality helps describe the complex world around us. When published by the government, such data are called official statistics. They are reported by the media, used by politicians to lend weight to their arguments, and cited by economic commentators to opine about the state of society. Despite such wide-scale use, explanations of how these measures are constructed are seldom provided for a non-technical reader. Measuring Society is a short, accessible guide to six topics: jobs, house prices, inequality, prices for goods and services, poverty, and deprivation. Each relates to concepts we use on a personal level to form an understanding of the society in which we live: we need a job, a place to live, and food to eat. Using data from the United States, we answer three basic questions: why, how, and for whom these statistics have been constructed. We add some context and flavor by discussing the historical background. This book provides the reader with a good grasp of these measures.

Chaitra H. Nagaraja is an Associate Professor of Statistics at the Gabelli School of Business at Fordham University in New York. Her research interests include house price indices and inequality measurement. Prior to Fordham, Dr. Nagaraja was a researcher at the U.S. Census Bureau. While there, she worked on projects relating to the American Community Survey.
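As one illustration of the "why, how, and for whom" behind a headline measure (a generic construction with invented survey counts, not figures from the book): the unemployment rate is the unemployed share of the labor force, which excludes people not seeking work.

```python
# Unemployment-rate sketch: headline rates from invented survey counts.
employed = 155_000
unemployed = 6_500
not_in_labor_force = 99_000          # retirees, students, not seeking work

labor_force = employed + unemployed
unemployment_rate = unemployed / labor_force
participation_rate = labor_force / (labor_force + not_in_labor_force)
print(f"unemployment rate: {unemployment_rate:.1%}")
print(f"labor force participation rate: {participation_rate:.1%}")
```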
You may like...
* Flash Memory Integration - Performance… by Jalil Boukhobza, Pierre Olivier (Hardcover, R1,831)
* Intro to Python for Computer Science and… by Paul Deitel (Paperback)
* Classical and Stochastic Laplacian… by Bjoern Gustafsson, Razvan Teodorescu, … (Hardcover, R1,446)
* Local Fractional Integral Transforms and… by Xiaojun Yang, Dumitru Baleanu, … (Hardcover, R1,806)
* Pragmatic Programmer, The - Your journey… by David Thomas, Andrew Hunt (Hardcover)
* Applications of Functional Analysis and… by V. Hutson, J Pym, … (Hardcover, R6,141)