This volume in Advances in Econometrics showcases fresh methodological and empirical research on the econometrics of networks. Comprising theoretical, empirical, and policy papers, it brings together a wide range of perspectives to facilitate a dialogue between academics and practitioners, fostering a better understanding of this groundbreaking field and its role in policy discussions. The collection includes thirteen chapters covering topics such as the identification of network models, network formation, networks and spatial econometrics, and applications of financial networks. Readers can also learn about network models with different types of interactions, sample selection in social networks, trade networks, stochastic dynamic programming in space, spatial panels, survival and networks, financial contagion, spillover effects, interconnectedness in consumer credit markets, and a financial risk meter. Centered on the econometrics of network data and models, the volume is a valuable resource for graduate students and researchers in the field, and its mix of theoretical and applied work also makes it useful for industry professionals and data scientists.
Self-contained chapters on the most important applications and methodologies in finance, which can easily be used for the reader's own research or as a reference for courses on empirical finance. Each chapter is reproducible in the sense that the reader can replicate every figure, table, or number simply by copy-pasting the code the authors provide. A full-fledged introduction to machine learning with tidymodels, based on tidy principles, shows how factor selection and option pricing can benefit from machine learning methods. Chapter 2, on accessing and managing financial data, shows how to retrieve and prepare the most important datasets in the field of financial economics, CRSP and Compustat, and contains detailed explanations of the most important data characteristics. Each chapter provides exercises based on established lectures and exercise classes, designed to help students dig deeper; they can be used for self-study or as a source of inspiration for teaching exercises.
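The book's machine-learning material is built on R's tidymodels; as a purely illustrative, language-agnostic sketch of the factor-selection idea it describes, the following Python snippet uses scikit-learn's Lasso to pick a sparse set of candidate factors for a synthetic return series (all data and names here are invented for the example):

```python
# Hypothetical illustration of ML-based factor selection (the book itself
# uses R/tidymodels; this sketch substitutes scikit-learn).
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n_obs, n_factors = 600, 20
factors = rng.normal(size=(n_obs, n_factors))   # candidate factor returns
true_beta = np.zeros(n_factors)
true_beta[[0, 3, 7]] = [0.8, -0.5, 0.3]         # only three factors truly matter
returns = factors @ true_beta + rng.normal(scale=0.5, size=n_obs)

X = StandardScaler().fit_transform(factors)
model = Lasso(alpha=0.05).fit(X, returns)       # L1 penalty zeroes out weak factors
print("selected factors:", np.flatnonzero(model.coef_))  # ideally [0 3 7]
```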
Essentials of Time Series for Financial Applications serves as an agile reference for upper-level students and practitioners who want a formal, easy-to-follow introduction to the most important time series methods used in finance (pricing, asset management, quant strategies, and risk management). Real-life data and examples developed with EViews illustrate the links between the formal apparatus and the applications. The examples either directly exploit the tools that EViews makes available or use programs that employ EViews to implement specific topics or techniques. The book balances a formal framework, with as few proofs as possible, against many examples that support its central ideas. Boxes are used throughout to remind readers of technical aspects and definitions and to present examples in a compact fashion, with full details (workout files) available in an online appendix. The more advanced chapters provide discussion sections that refer to more advanced textbooks or detailed proofs.
Medicine Price Surveys, Analyses and Comparisons establishes guidelines for the study and implementation of pharmaceutical price surveys, analyses, and comparisons. Its contributors evaluate price survey literature, discuss the accessibility and reliability of data sources, and provide a checklist and training kit on conducting price surveys, analyses, and comparisons. Their investigations survey price studies while accounting for the effects of methodologies and explaining regional differences in medicine prices. They also consider policy objectives such as affordable access to medicines and cost-containment as well as options for improving the effectiveness of policies.
The second book in a set of ten on quantitative finance for practitioners, it presents the theory needed to better understand applications, supplements previous training in mathematics, and is built from the author's four decades of experience in industry, research, and teaching.
Features: accessible to readers with a basic background in probability and statistics; covers fundamental concepts of experimental design and cause-effect relationships; introduces classical ANOVA models, including contrasts and multiple testing; provides an example-based introduction to mixed models; presents basic concepts of split-plot and incomplete block designs; R code available for all steps; supplementary website with additional resources and updates.
Military organizations around the world are typically huge producers and consumers of data, so they stand to gain from the many benefits of data analytics. However, for leaders in defense organizations, whether in government or industry, accessible use cases are not always available. This book presents a diverse collection of cases that explore the realm of possibilities in military data analytics, covering topics such as: context for maritime situation awareness; data analytics for electric power and energy applications; environmental data analytics in military operations; data analytics and training effectiveness evaluation; harnessing single-board computers for military data analytics; and analytics for military training in virtual reality environments. A chapter on single-board computers explores their application in a variety of domains, including wireless sensor networks, unmanned vehicles, and cluster computing. The investigation into a process for extracting and codifying expert knowledge provides a practical and useful model for soldiers that can support diagnostics, decision making, analysis of alternatives, and myriad other analytical processes. Data analytics also has a role in military learning: one chapter describes ongoing work with the United States Army Research Laboratory to apply data analytics techniques to the design of courses, the evaluation of individual and group performance, and the tailoring of the learning experience to achieve optimal outcomes in a minimum amount of time. Another chapter discusses how virtual reality and analytics are transforming the training of military personnel, as well as monitoring, decision making, readiness, and operations. Military Applications of Data Analytics brings together technical and application-oriented use cases, enabling decision makers and technologists to make connections between data analytics and fields such as virtual reality and cognitive science that are driving military organizations around the world forward.
The book describes the theoretical principles of nonstatistical methods of data analysis without going deep into complex mathematics. The emphasis is on the presentation of solved examples using real data, either from the authors' laboratories or from the open literature. The examples cover a wide range of applications, such as quality assurance and quality control, critical analysis of experimental data, comparison of data samples from various sources, robust linear and nonlinear regression, and various tasks from financial analysis. The examples are useful primarily for chemical engineers, including analytical and quality laboratories in industry and designers of chemical and biological processes. Features: an exclusive title on mathematical gnostics with multidisciplinary applications and a specific focus on chemical engineering; clarifies the role of data-space metrics, including the right way to aggregate uncertain data; offers a new perspective on data probability, information, entropy, and the thermodynamics of data uncertainty; enables the design of probability distributions for all real data samples, including small ones; includes data and solutions for the examples, with exercises in R or Python. The book is aimed at senior undergraduate students, researchers, and professionals in chemical and process engineering, engineering physics, statistics, mathematics, materials, geotechnical and civil engineering, mining, sales, marketing and service, and finance.
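As a purely illustrative nod to the robust-regression theme (using scikit-learn's Huber loss, not the mathematical-gnostics methods developed in the book), the following Python snippet contrasts ordinary least squares with a robust fit on outlier-contaminated data:

```python
# Illustrative only: robust vs. ordinary regression on data with outliers.
# Uses scikit-learn's Huber loss, not the book's gnostic methods.
import numpy as np
from sklearn.linear_model import HuberRegressor, LinearRegression

rng = np.random.default_rng(7)
x = np.linspace(0, 10, 100).reshape(-1, 1)
y = 2.0 * x.ravel() + 1.0 + rng.normal(scale=0.5, size=100)
y[-10:] += 15.0                      # gross outliers at the right edge

ols = LinearRegression().fit(x, y)
huber = HuberRegressor().fit(x, y)   # down-weights large residuals

print("OLS slope  :", ols.coef_[0])    # dragged upward by the outliers
print("Huber slope:", huber.coef_[0])  # stays close to the true slope of 2.0
```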
Operations Research methods are used in virtually every field of modern life, including industry, the economy, and medicine. In this volume, the authors have compiled the latest advancements in these methods, in what may be considered one of the best collections of these new approaches, offering a direct shortcut to what readers may be searching for. The book provides useful applications of new developments in OR, written by leading scientists from international universities. Another volume on exciting applications of Operations Research is planned for the near future. We hope you enjoy and benefit from this series!
Getting Data Science Done outlines the essential stages in running successful data science projects, providing comprehensive guidelines to help you identify potential issues and a range of strategies for mitigating them. Data science is a field that synthesizes statistics, computer science, and business analytics to deliver results that can impact almost any type of process or organization. It is also an evolving technical discipline whose practice is full of pitfalls and potential problems for managers, stakeholders, and practitioners: many organizations struggle to deliver results consistently due to a wide range of issues, including knowledge barriers, problem framing, organizational change, and integration with IT and engineering. The book is organized as a sequential process, allowing the reader to work through a project from an initial idea all the way to a deployed and integrated product.
Volume 40 in the Advances in Econometrics series features twenty-three chapters that are split thematically into two parts. Part A presents novel contributions to the analysis of time series and panel data, with applications in macroeconomics, finance, cognitive science and psychology, neuroscience, and labor economics. Part B examines innovations in stochastic frontier analysis, nonparametric and semiparametric modeling and estimation, A/B experiments, big-data analysis, and quantile regression. Individual chapters, written by both distinguished researchers and promising young scholars, cover many important topics in statistical and econometric theory and practice. The papers primarily, though not exclusively, adopt Bayesian methods for estimation and inference, although researchers of all persuasions will find much of interest in this work. The volume was prepared to honor the career and research contributions of Professor Dale J. Poirier, and offers researchers in econometrics up-to-date research across a wide range of topics.
An engaging and accessible examination of what ails insurance markets, and what to do about it, by three leading economists. Why is dental insurance so crummy? Why is pet insurance so expensive? Why does your auto insurer ask for your credit score? The answers lie in understanding how insurance works. Unlike sellers of most other goods and services (a grocer doesn't care who buys the store's broccoli or carrots), insurance providers are careful in choosing their customers, because some are more expensive to cover than others. Unraveling the mysteries of insurance markets, Liran Einav, Amy Finkelstein, and Ray Fisman explore such issues as why insurers want to know so much about us and whether we should let them obtain this information; why insurance entrepreneurs often fail (and some tricks that may help them succeed); and whether we'd be better off with government-mandated health insurance instead of letting businesses, customers, and markets decide who gets coverage and at what price. With insurance at the center of divisive debates about privacy, equity, and the appropriate role of government, this book offers clear explanations for some of the critical business and policy issues you've often wondered about, as well as for others you haven't yet considered.
Tackling the cybersecurity challenge is a matter of survival for society at large. Cyber attacks are rapidly increasing in sophistication, magnitude, and destructive potential. New threats emerge regularly, with the last few years having seen a ransomware boom and distributed denial-of-service attacks leveraging the Internet of Things. For organisations, cybersecurity risk management is essential in order to manage these threats, yet current frameworks have drawbacks that can lead to the suboptimal allocation of cybersecurity resources. Cyber insurance has been touted as part of the solution, based on the idea that insurers can incentivize companies to improve their cybersecurity by offering premium discounts, but cyber insurance levels remain limited. This is because companies have difficulty determining which cyber insurance products to purchase, and insurance companies struggle to accurately assess cyber risk and thus develop cyber insurance products. To deal with these challenges, this volume presents new models for cybersecurity risk management, partly based on the use of cyber insurance. It contains a set of mathematical models for cybersecurity risk management, including (i) a model to assist companies in determining their optimal budget allocation between security products and cyber insurance and (ii) a model to assist insurers in designing cyber insurance products. The models use adversarial risk analysis to account for the behavior of threat actors, as well as the behavior of companies and insurers. To inform these models, the authors draw on psychological and behavioural economics studies of individuals' decision-making regarding cybersecurity and cyber insurance, as well as on studies of organizational decision-making involving the same. The theoretical and methodological findings will appeal to researchers across a wide range of cybersecurity-related disciplines, including risk and decision analysis, analytics, technology management, actuarial sciences, behavioural sciences, and economics. The practical findings will help cybersecurity professionals and insurers enhance cybersecurity and cyber insurance, benefiting society as a whole. This book grew out of a two-year European Union-funded project under Horizon 2020, called CYBECO (Supporting Cyber Insurance from a Behavioral Choice Perspective).
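As a purely hypothetical illustration of the kind of trade-off the first model addresses (not the adversarial-risk-analysis models developed in the book), the following Python sketch grid-searches a toy split of a fixed budget between security spending, which lowers breach probability, and insurance, which covers part of the loss; every functional form and number below is invented:

```python
# Toy illustration only: splitting a fixed budget between security controls
# and cyber insurance. All functional forms and numbers are invented for this
# sketch; they are NOT the adversarial-risk-analysis models from the book.
import numpy as np

BUDGET = 100.0   # total annual budget (arbitrary units)
LOSS = 1000.0    # loss suffered if a breach occurs

def breach_prob(security_spend):
    # Assumed diminishing returns: more security spending, lower breach probability.
    return 0.3 * np.exp(-security_spend / 40.0)

def coverage(insurance_spend):
    # Assumed policy: each unit of premium buys 2% coverage, capped at 90%.
    return min(insurance_spend / 50.0, 0.9)

def expected_cost(security_spend):
    insurance_spend = BUDGET - security_spend
    residual = breach_prob(security_spend) * LOSS * (1.0 - coverage(insurance_spend))
    return BUDGET + residual   # the whole budget is spent either way

splits = np.linspace(0.0, BUDGET, 101)
best = splits[int(np.argmin([expected_cost(s) for s in splits]))]
print(f"toy optimum: {best:.0f} on security, {BUDGET - best:.0f} on insurance")
```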
In this monograph the authors give a systematic approach to the probabilistic properties of the fixed-point equation X=AX+B, providing a probabilistic study of the stochastic recurrence equation X_t=A_tX_{t-1}+B_t for real- and matrix-valued random variables A_t, where (A_t,B_t) constitute an iid sequence. The classical theory for these equations, including the existence and uniqueness of a stationary solution and the tail behavior with special emphasis on power-law behavior, moments, and support, is presented. The authors collect recent asymptotic results on extremes, point processes, partial sums (central limit theory with special emphasis on infinite-variance stable limit theory), and large deviations, in both the univariate and multivariate cases, and further touch on the related topics of smoothing transforms, regularly varying sequences, and random iterative systems. The text gives an introduction to the Kesten-Goldie theory for such stochastic recurrence equations, provides the classical results of Kesten, Goldie, Guivarc'h, and others, and gives an overview of recent results on the topic. It presents the state-of-the-art results in the field of affine stochastic recurrence equations and shows relations with non-affine recursions and multivariate regular variation.
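For orientation, the classical results the blurb alludes to can be stated informally as follows (standard facts of the Kesten-Goldie theory in the scalar case, with regularity conditions abbreviated):

```latex
% If E[log|A|] < 0 and E[log^+ |B|] < \infty, the equation X_t = A_t X_{t-1} + B_t
% admits a unique stationary solution, given by the a.s. convergent series
\[
  X \stackrel{d}{=} \sum_{k=1}^{\infty} A_1 \cdots A_{k-1} B_k .
\]
% If, in addition, E[|A|^\kappa] = 1 for some \kappa > 0 (plus Kesten's
% regularity conditions), the stationary solution has a power-law tail:
\[
  \mathbb{P}(X > x) \sim c\, x^{-\kappa}, \qquad x \to \infty .
\]
```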
Today, public conversations are increasingly driven by numbers. Although charts, infographics, and diagrams can make us wiser, they can also deceive, intentionally or unintentionally. To be informed citizens, we must all be able to decode and use the visual information that politicians, journalists, and even our employers present to us each day. How Charts Lie examines contemporary examples ranging from election-result infographics to global GDP maps and box-office record charts, demystifying an essential new literacy for our data-driven world. This edition includes a new afterword on the reporting of Covid-19 statistics.
The book evaluates the importance of constitutional rules and property rights for the German economy between 1990 and 2015. It is a study in economic history embedded in institutional economics, with main references to positive constitutional economics and property rights theory. This interdisciplinary work combines theoretical and empirical dimensions with a qualitative-quantitative approach. Formal institutions played a fundamental role in Germany's post-reunification economic changes: they set the legal and institutional framework for the transition process in Eastern Germany and for the unification, integration, and convergence of the two parts of the country. Although the latter process was not completed, the effects of these formal rules were positive, especially for the former GDR.
A unique and comprehensive source of information, the International Yearbook of Industrial Statistics is the only international publication providing economists, planners, policymakers, and business people with worldwide statistics on current performance and trends in the manufacturing sector. Covering more than 120 countries and areas, the 1996 edition of the Yearbook contains data that are internationally comparable and much more detailed in industrial classification than those supplied in previous publications. This is the second issue of the annual publication, which succeeds UNIDO's Handbook of Industrial Statistics and, at the same time, replaces the United Nations' Industrial Statistics Yearbook, volume I (General Industrial Statistics). Information has been collected directly from national statistical sources and supplemented with estimates by UNIDO. The Yearbook is designed to facilitate international comparisons relating to manufacturing activity and industrial performance. It provides data that can be used to analyse patterns of growth, structural change, and industrial performance in individual industries; data on employment trends, wages, and other key indicators are also presented. Finally, the detailed information presented here enables users to study aspects of industry that could not be examined using the aggregate data previously available.
This book summarizes the results of the workshop "Uniform Distribution and Quasi-Monte Carlo Methods" of the RICAM Special Semester on "Applications of Algebra and Number Theory" in October 2013. The survey articles focus on number-theoretic point constructions, uniform distribution theory, and quasi-Monte Carlo methods. As deterministic versions of the Monte Carlo method, quasi-Monte Carlo rules enjoy increasing popularity, with many fruitful applications in mathematical practice, for example in finance, computer graphics, and biology. The goal of the book is to give an overview of recent developments in uniform distribution theory, quasi-Monte Carlo methods, and their applications, presented by leading experts in these active fields of research.
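To illustrate the basic idea of a quasi-Monte Carlo rule (a deterministic low-discrepancy point set standing in for pseudorandom draws), here is a minimal Python sketch using SciPy's Sobol generator; the integrand is chosen arbitrarily for the example:

```python
# Minimal sketch: plain Monte Carlo vs. a quasi-Monte Carlo rule (scrambled
# Sobol points from scipy.stats.qmc) for integrating f over the unit square.
import numpy as np
from scipy.stats import qmc

def f(u):
    # True integral over [0,1]^2 is (pi/4) * erf(1)^2, roughly 0.5577.
    return np.exp(-(u[:, 0] ** 2 + u[:, 1] ** 2))

n = 2 ** 12  # Sobol sequences are balanced at powers of two
rng = np.random.default_rng(0)

mc_points = rng.random((n, 2))                                 # pseudorandom
qmc_points = qmc.Sobol(d=2, scramble=True, seed=0).random(n)   # low-discrepancy

print("plain MC estimate :", f(mc_points).mean())
print("Sobol QMC estimate:", f(qmc_points).mean())
```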
Doing Statistical Analysis looks at three kinds of statistical research questions (descriptive, associational, and inferential) and shows students how to conduct statistical analyses and interpret the results. Keeping equations to a minimum, it uses a conversational style and relatable examples, such as football, COVID-19, and tourism, to aid understanding. Each chapter contains practice exercises and a section showing students how to reproduce the statistical results in the book using Stata and SPSS. Digital supplements consist of data sets in Stata, SPSS, and Excel, and a test bank for instructors. Its accessible approach makes this the ideal textbook for undergraduate students across the social and behavioral sciences who need to build their confidence with statistical analysis.
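As a hypothetical mini-example of the three question types the book distinguishes (sketched in Python rather than the book's Stata and SPSS, with made-up data):

```python
# Made-up data illustrating descriptive, associational, and inferential questions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
hours = rng.uniform(0, 10, 80)                    # hypothetical study hours
score = 50 + 3 * hours + rng.normal(0, 8, 80)     # hypothetical exam scores

# Descriptive: summarize a single variable.
print("mean score:", round(score.mean(), 1), " sd:", round(score.std(ddof=1), 1))
# Associational: how strongly do two variables move together?
r, _ = stats.pearsonr(hours, score)
print("correlation:", round(r, 2))
# Inferential: test a hypothesis about the population from the sample.
t, p = stats.ttest_1samp(score, popmean=50)
print(f"H0: mean score = 50 -> t = {t:.2f}, p = {p:.4f}")
```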
Ranking of Multivariate Populations: A Permutation Approach with Applications presents a novel permutation-based nonparametric approach for ranking several multivariate populations. Using data collected from both experimental and observational studies, it covers some of the most useful designs widely applied in research and industry investigations, such as multivariate analysis of variance (MANOVA) and multivariate randomized complete block (MRCB) designs. The first section of the book introduces the topic of ranking multivariate populations by presenting the main theoretical ideas and an in-depth literature review. The second section discusses a large number of real case studies from four specific research areas: new product development in industry, perceived quality of the indoor environment, customer satisfaction, and cytological and histological analysis by image processing. A web-based global-ranking software tool based on nonparametric combination is also described. Designed for practitioners and postgraduate students in statistics and the applied sciences, this application-oriented book offers a practical guide to the reliable global ranking of multivariate items, such as products, processes, and services, in terms of the performance of all investigated products and prototypes.
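As a minimal sketch of the permutation idea that underlies the approach (a plain two-sample permutation test, not the book's full nonparametric-combination ranking procedure):

```python
# Minimal two-sample permutation test: the elementary building block behind
# permutation-based methods; NOT the book's multivariate ranking procedure.
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(loc=0.0, size=30)   # sample from population A
b = rng.normal(loc=0.6, size=30)   # sample from population B (shifted mean)

observed = b.mean() - a.mean()
pooled = np.concatenate([a, b])

n_perm, count = 10_000, 0
for _ in range(n_perm):
    perm = rng.permutation(pooled)             # reshuffle the group labels
    if perm[30:].mean() - perm[:30].mean() >= observed:
        count += 1

p_value = (count + 1) / (n_perm + 1)           # add-one correction
print(f"observed diff = {observed:.3f}, permutation p-value = {p_value:.4f}")
```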
In the modern world, data is a vital asset for any organization, regardless of industry or size; the world is built upon data. However, data without knowledge is useless. The aim of this book, briefly, is to introduce new approaches that can be used to shape and forecast the future by combining the two disciplines of statistics and economics. Readers of Modeling and Advanced Techniques in Modern Economics can find valuable information from a diverse group of experts on topics such as finance, econometric models, stochastic financial models and machine learning, and the application of models to financial and macroeconomic data.
This book addresses one of the most important research activities in empirical macroeconomics. It provides a course of advanced but intuitive methods and tools enabling the spatial and temporal disaggregation of basic macroeconomic variables and the assessment of the statistical uncertainty of the outcomes of disaggregation. The empirical analysis focuses mainly on GDP and its growth in the context of Poland, but all of the methods discussed can easily be applied to other countries. The approach used in the book views spatial and temporal disaggregation as a special case of the estimation of missing observations (a topic in missing-data analysis). The book presents an econometric treatment of models of Seemingly Unrelated Regression Equations (SURE). The main advantage of the SURE specification in tackling this research problem is that it allows for heterogeneity of the parameters describing relations between macroeconomic indicators. The book contains model specifications, as well as descriptions of the stochastic assumptions and the resulting estimation and testing procedures, and it addresses the uncertainty in the estimates produced. All of the necessary tests and assumptions are presented in detail. The results are designed to serve as a source of invaluable information, making regional analyses more convenient and, more importantly, comparable. They will create a solid basis for conclusions and recommendations concerning regional economic policy in Poland, particularly regarding the assessment of the economic situation. This is essential reading for academics, researchers, and economists with regional analysis as their field of expertise, as well as central bankers and policymakers.
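For reference, the generic SURE setup the blurb refers to, in standard textbook form (not the book's specific specification): a system of equations whose errors are correlated across equations, estimated jointly by generalized least squares.

```latex
% Generic SURE system: equation i has its own regressors X_i and coefficients
% \beta_i, with errors correlated across equations:
\[
  y_i = X_i \beta_i + u_i, \qquad i = 1, \dots, m,
  \qquad \mathbb{E}[u_i u_j'] = \sigma_{ij} I_T .
\]
% Stacking the m equations and writing \Sigma = (\sigma_{ij}), the joint GLS
% estimator that exploits the cross-equation correlation is
\[
  \hat{\beta}_{\mathrm{GLS}}
  = \bigl( X'(\Sigma^{-1} \otimes I_T) X \bigr)^{-1} X'(\Sigma^{-1} \otimes I_T) y .
\]
```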
Praise for the first edition: "[This book] reflects the extensive experience and significant contributions of the author to non-linear and non-Gaussian modeling. ... [It] is a valuable book, especially with its broad and accessible introduction of models in the state-space framework." (Statistics in Medicine) "What distinguishes this book from comparable introductory texts is the use of state-space modeling. Along with this come a number of valuable tools for recursive filtering and smoothing, including the Kalman filter, as well as non-Gaussian and sequential Monte Carlo filters." (MAA Reviews) Introduction to Time Series Modeling with Applications in R, Second Edition covers numerous stationary and nonstationary time series models and tools for estimating and utilizing them. The goal of this book is to enable readers to build their own models to understand, predict and master time series. The second edition makes it possible for readers to reproduce the examples in this book by using the freely available R package TSSS to perform computations for their own real-world time series problems. This book employs the state-space model as a generic tool for time series modeling and presents the Kalman filter, the non-Gaussian filter and the particle filter as convenient tools for recursive estimation for state-space models. Further, it takes a unified approach based on the entropy maximization principle and employs various methods of parameter estimation and model selection, including the least squares method, the maximum likelihood method, recursive estimation for state-space models and model selection by AIC. Along with standard stationary time series models, such as the AR and ARMA models, the book also introduces nonstationary models such as the locally stationary AR model, the trend model, the seasonal adjustment model, the time-varying coefficient AR model and nonlinear non-Gaussian state-space models. About the Author: Genshiro Kitagawa is a project professor at the University of Tokyo, the former Director-General of the Institute of Statistical Mathematics, and the former President of the Research Organization of Information and Systems.
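As a minimal, language-agnostic sketch of the recursive estimation the book builds on (the book itself works in R with the TSSS package), here is the Kalman filter for the simplest state-space model, a local-level trend model, with illustrative parameter values:

```python
# Kalman filter for a local-level model (illustrative sketch, not TSSS):
#   x_t = x_{t-1} + w_t,  w_t ~ N(0, q)   (state/trend equation)
#   y_t = x_t + v_t,      v_t ~ N(0, r)   (observation equation)
import numpy as np

def kalman_filter(y, q=0.1, r=1.0, x0=0.0, p0=1e6):
    """Return the filtered state means for a local-level model."""
    x, p = x0, p0
    filtered = []
    for obs in y:
        x_pred, p_pred = x, p + q          # predict one step ahead
        k = p_pred / (p_pred + r)          # Kalman gain
        x = x_pred + k * (obs - x_pred)    # update mean with the new observation
        p = (1.0 - k) * p_pred             # update variance
        filtered.append(x)
    return np.array(filtered)

rng = np.random.default_rng(1)
truth = np.cumsum(rng.normal(scale=0.3, size=200))   # simulated smooth trend
y = truth + rng.normal(scale=1.0, size=200)          # noisy observations
print(kalman_filter(y)[-5:])                         # filtered trend estimates
```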