This volume of Research on Economic Inequality contains research on how we measure poverty, inequality and welfare, and how these measurements contribute towards policies for social mobility. The volume contains eleven papers, some of which focus on the uneven impact of the Covid-19 pandemic on poverty and welfare. Opening with debates on theoretical issues at the forefront of the literature on measuring inequality and poverty, the first two chapters propose new methods for measuring wellbeing and inequality in multidimensional categorical environments, and for measuring pro-poor growth in a Bayesian setting. The following three papers present theoretical innovations for measuring poverty and inequality, including estimating the dynamic probability of being poor using a Bayesian approach and handling ordinal variables. The next three chapters are contributions on empirical methods in the measurement of poverty, inclusive economic growth and mobility, with a focus on India, Israel and a unique longitudinal dataset for Chile. The volume concludes with three chapters exploring the impact of the Covid-19 pandemic, as an economic shock, on income and wealth poverty in EU countries and in an Argentinian city slum.
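As a pointer to the kind of measurement the volume discusses, here is a minimal sketch (not taken from the book) of the standard Gini coefficient of inequality, computed with the usual rank-based formula; the sample incomes are invented for illustration.

```python
import numpy as np

def gini(incomes):
    """Gini coefficient via the rank-based formula
    G = 2 * sum_i i * x_(i) / (n * sum_i x_i) - (n + 1) / n,
    where x_(1) <= ... <= x_(n) are the sorted incomes."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    ranks = np.arange(1, n + 1)
    return 2.0 * np.sum(ranks * x) / (n * x.sum()) - (n + 1) / n

print(gini([1, 1, 1, 1]))   # 0.0: perfect equality
print(gini([0, 0, 0, 10]))  # 0.75: one person holds everything
```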
A hands-on approach to understanding and using actuarial models: Computational Actuarial Science with R provides an introduction to the computational aspects of actuarial science. Using simple R code, the book helps you understand the algorithms involved in actuarial computations. It also covers more advanced topics, such as parallel computing and embedded C/C++ code. After an introduction to the R language, the book is divided into four parts. The first addresses methodology and statistical modeling issues. The second discusses the computational facets of life insurance, including life contingencies calculations and prospective life tables. Focusing on finance from an actuarial perspective, the third presents techniques for modeling stock prices, nonlinear time series, yield curves, interest rates, and portfolio optimization. The last part explains how to use R to deal with the computational issues of non-life insurance. Taking a do-it-yourself approach to understanding algorithms, this book demystifies the computational aspects of actuarial science and shows that even complex computations can usually be done without too much trouble. Datasets used in the text are available in an R package (CASdatasets).
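The book's own examples are in R; as a language-agnostic illustration of a basic life contingencies calculation, the following Python sketch prices a temporary life annuity-due from a hypothetical mortality table. The q_x values and interest rate are invented for the example; real work would use a published life table (for instance from the CASdatasets package the book references).

```python
import numpy as np

# Hypothetical one-year death probabilities q_x from some starting age.
qx = np.array([0.010, 0.011, 0.012, 0.014, 0.016,
               0.018, 0.021, 0.024, 0.027, 0.031])

def annuity_due_epv(qx, i=0.03):
    """Expected present value of a temporary life annuity-due of 1 per year:
    EPV = sum_k v^k * kp_x, where kp_x is the probability of surviving
    k years and v = 1/(1+i) is the discount factor."""
    v = 1.0 / (1.0 + i)
    kpx = np.concatenate(([1.0], np.cumprod(1.0 - qx)))  # survival probs, k = 0..n
    k = np.arange(kpx.size)
    return np.sum(v**k * kpx)

print(round(annuity_due_epv(qx), 4))
```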
Recent years have witnessed an explosion in the volume and variety of data collected in all scientific disciplines and industrial settings. Such massive data sets present a number of challenges to researchers in statistics and machine learning. This book provides a self-contained introduction to the area of high-dimensional statistics, aimed at the first-year graduate level. It includes chapters that are focused on core methodology and theory - including tail bounds, concentration inequalities, uniform laws and empirical processes, and random matrices - as well as chapters devoted to in-depth exploration of particular model classes - including sparse linear models, matrix models with rank constraints, graphical models, and various types of non-parametric models. With hundreds of worked examples and exercises, this text is intended both for courses and for self-study by graduate students and researchers in statistics, machine learning, and related fields who must understand, apply, and adapt modern statistical methods suited to large-scale data.
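For a flavour of the sparse-linear-model material, here is a small sketch (not from the book) of the lasso in the special case of an orthonormal design, where the estimator reduces to soft-thresholding of the least-squares coefficients; the regularization level follows the usual sqrt(log p / n) scaling from the theory, and the data are simulated.

```python
import numpy as np

def soft_threshold(z, lam):
    """Soft-thresholding operator S(z, lam) = sign(z) * max(|z| - lam, 0);
    this is the exact lasso solution when the design is orthonormal."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

rng = np.random.default_rng(0)
n, p, s = 200, 50, 5
beta = np.zeros(p); beta[:s] = 3.0               # sparse truth: 5 active coefficients
Q, _ = np.linalg.qr(rng.standard_normal((n, p)))
X = Q * np.sqrt(n)                               # columns scaled so X'X = n * I
y = X @ beta + rng.standard_normal(n)
ols = X.T @ y / n                                # OLS coefficients under X'X = n*I
lam = 2.0 * np.sqrt(np.log(p) / n)               # typical theory-driven level
print(np.nonzero(soft_threshold(ols, lam))[0])   # typically recovers [0 1 2 3 4]
```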
The case-control approach is a powerful method for investigating factors that may explain a particular event. It is extensively used in epidemiology to study disease incidence, one of the best-known examples being Bradford Hill and Doll's investigation of the possible connection between cigarette smoking and lung cancer. More recently, case-control studies have been increasingly used in other fields, including sociology and econometrics. With a particular focus on statistical analysis, this book is ideal for applied and theoretical statisticians wanting an up-to-date introduction to the field. It covers the fundamentals of case-control study design and analysis as well as more recent developments, including two-stage studies, case-only studies and methods for case-control sampling in time. The latter have important applications in large prospective cohorts which require case-control sampling designs to make efficient use of resources. More theoretical background is provided in an appendix for those new to the field.
Discover the benefits of risk parity investing. Despite recent progress in the theoretical analysis and practical applications of risk parity, many fundamental questions remain open. Risk Parity Fundamentals uses fundamental, quantitative, and historical analysis to address questions such as: What are the macroeconomic dimensions of risk in risk parity portfolios? What are the appropriate risk premiums in a risk parity portfolio? In what market environments might risk parity thrive or struggle? What is the role of leverage in a risk parity portfolio? An experienced researcher and portfolio manager who coined the term "risk parity", the author provides investors with a practical understanding of the risk parity investment approach. Investors will gain insight into the merits of risk parity as well as the practical and underlying aspects of risk parity investing.
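As a rough illustration of the mechanics (not the author's methodology), the following sketch computes naive risk parity weights by inverse volatility. True risk parity equalizes risk contributions; inverse-volatility weighting is the common first approximation and is exact when all pairwise correlations are equal. The covariance matrix is hypothetical.

```python
import numpy as np

def inverse_vol_weights(cov):
    """Naive risk parity: weight each asset inversely to its volatility,
    then normalize the weights to sum to one."""
    vols = np.sqrt(np.diag(cov))
    w = 1.0 / vols
    return w / w.sum()

# Hypothetical annualized covariance for stocks, bonds, commodities.
cov = np.array([[0.0400, 0.0012, 0.0060],
                [0.0012, 0.0025, 0.0005],
                [0.0060, 0.0005, 0.0225]])
print(inverse_vol_weights(cov).round(3))  # low-vol bonds get the largest weight
```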
A unique and comprehensive source of information, this book is the only international publication providing economists, planners, policy makers and business people with worldwide statistics on current performance and trends in the manufacturing sector. The Yearbook is designed to facilitate international comparisons relating to manufacturing activity and industrial development and performance. It provides data which can be used to analyze patterns of growth and related long-term trends, structural change and industrial performance in individual industries. Statistics on employment patterns, wages, consumption and gross output and other key indicators are also presented. Contents: Introduction; Part I: Summary Tables (1.1 The Manufacturing Sector; 1.2 The Manufacturing Branches); Part II: Country Tables.
Over the last thirty years there has been extensive use of continuous time econometric methods in macroeconomic modelling. This monograph presents a continuous time macroeconometric model of the United Kingdom incorporating stochastic trends. Its development represents a major step forward in continuous time macroeconomic modelling. The book describes the model in detail and, like earlier models, it is designed to permit a rigorous mathematical analysis of its steady-state and stability properties, thus providing a valuable check on the capacity of the model to generate plausible long-run behaviour. The model is estimated using newly developed exact Gaussian estimation methods for continuous time econometric models incorporating unobservable stochastic trends. The book also includes a discussion of the application of the model to dynamic analysis and forecasting.
This 2004 volume offers a broad overview of developments in the theory and applications of state space modeling. With fourteen chapters from twenty-three contributors, it offers a unique synthesis of state space methods and unobserved component models that are important in a wide range of subjects, including economics, finance, environmental science, medicine and engineering. The book is divided into four sections: introductory papers, testing, Bayesian inference and the bootstrap, and applications. It will give those unfamiliar with state space models a flavour of the work being carried out, as well as providing experts with valuable state-of-the-art summaries of different topics. Offering a useful reference for all, this accessible volume makes a significant contribution to the literature of this discipline.
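To give a concrete flavour of the state space methods the volume surveys, here is a minimal sketch (not drawn from any particular chapter) of the Kalman filter for the local level model, the simplest unobserved-component model; the variances and data are invented.

```python
import numpy as np

def local_level_filter(y, sigma_eps2, sigma_eta2, a0=0.0, p0=1e7):
    """Kalman filter for the local level model
        y_t = mu_t + eps_t,   mu_t = mu_{t-1} + eta_t,
    returning the filtered estimates of the unobserved level mu_t."""
    a, p = a0, p0                        # diffuse-ish initial state
    filtered = []
    for yt in y:
        f = p + sigma_eps2               # prediction variance of y_t
        k = p / f                        # Kalman gain
        a = a + k * (yt - a)             # filtered state mean
        p = p * (1.0 - k) + sigma_eta2   # variance, propagated to next period
        filtered.append(a)
    return np.array(filtered)

rng = np.random.default_rng(1)
y = np.cumsum(rng.standard_normal(100)) + rng.standard_normal(100)
print(local_level_filter(y, sigma_eps2=1.0, sigma_eta2=1.0)[-5:].round(2))
```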
Price and quantity indices are important, much-used measuring instruments, and it is therefore necessary to have a good understanding of their properties. When it was published, this book was the first comprehensive text on index number theory since Irving Fisher's 1922 The Making of Index Numbers. The book covers intertemporal and interspatial comparisons; ratio- and difference-type measures; discrete and continuous time environments; and upper- and lower-level indices. Guided by economic insights, this book develops the instrumental or axiomatic approach. There is no role for behavioural assumptions. In addition to subject matter chapters, two entire chapters are devoted to the rich history of the subject.
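For readers new to the subject, a short sketch (not from the book) of three classical price indices, with hypothetical prices and quantities for two periods:

```python
import numpy as np

# Hypothetical prices and quantities for three goods in periods 0 and 1.
p0 = np.array([10.0, 4.0, 2.5]); q0 = np.array([100, 50, 200])
p1 = np.array([11.0, 4.4, 2.4]); q1 = np.array([ 95, 55, 210])

laspeyres = (p1 @ q0) / (p0 @ q0)         # base-period quantity weights
paasche   = (p1 @ q1) / (p0 @ q1)         # current-period quantity weights
fisher    = np.sqrt(laspeyres * paasche)  # Fisher's "ideal" geometric mean

print(round(laspeyres, 4), round(paasche, 4), round(fisher, 4))
```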
This book is intended to provide the reader with a firm conceptual and empirical understanding of basic information-theoretic econometric models and methods. Because most data are observational, practitioners work with indirect noisy observations and ill-posed econometric models in the form of stochastic inverse problems. Consequently, traditional econometric methods are in many cases not applicable for answering many of the quantitative questions that analysts wish to ask. After initial chapters deal with parametric and semiparametric linear probability models, the focus turns to solving nonparametric stochastic inverse problems. In succeeding chapters, a family of power divergence measure likelihood functions is introduced for a range of traditional and nontraditional econometric-model problems. Finally, within either an empirical maximum likelihood or loss context, Ron C. Mittelhammer and George G. Judge suggest a basis for choosing a member of the divergence family.
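For orientation, the power divergence family referred to here is usually given in the Cressie-Read parameterization; the following is standard background notation, not a quotation from the book:

$$ I(\mathbf{p},\mathbf{q};\gamma) \;=\; \frac{1}{\gamma(\gamma+1)} \sum_{i=1}^{n} p_i \left[\left(\frac{p_i}{q_i}\right)^{\gamma} - 1\right], $$

with the Kullback-Leibler divergence $\sum_i p_i \ln(p_i/q_i)$ recovered in the limit $\gamma \to 0$ and the empirical-likelihood criterion in the limit $\gamma \to -1$.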
This book is an introduction to regression analysis, focusing on the practicalities of doing regression analysis on real-life data. Unlike other textbooks on regression, this book is based on the idea that you do not necessarily need to know much about statistics and mathematics to get a firm grip on regression and perform it to perfection. This non-technical point of departure is complemented by practical examples of real-life data analysis using statistics software such as Stata, R and SPSS. Parts 1 and 2 of the book cover the basics, such as simple linear regression, multiple linear regression, how to interpret the output from statistics programs, significance testing and the key regression assumptions. Part 3 deals with how to practically handle violations of the classical linear regression assumptions, regression modeling for categorical y-variables and instrumental variable (IV) regression. Part 4 puts the various purposes of, or motivations for, regression into the wider context of writing a scholarly report and points to some extensions to related statistical techniques. This book is written primarily for those who need to do regression analysis in practice, and not only to understand how this method works in theory. The book's accessible approach is recommended for students from across the social sciences.
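In the same practical spirit (though the book itself works through Stata, R and SPSS), here is a minimal simple-regression sketch with simulated data; the coefficients and sample size are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(0, 10, size=200)
y = 2.0 + 0.5 * x + rng.standard_normal(200)  # true intercept 2, slope 0.5

X = np.column_stack([np.ones_like(x), x])     # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # ordinary least squares
resid = y - X @ beta
sigma2 = resid @ resid / (len(y) - 2)         # residual variance, n - k dof
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
print("coef:", beta.round(3), "se:", se.round(3))
```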
Published in 1932, this is the third edition of an original 1922 volume. The 1922 volume was, in turn, created as the replacement for the Institute of Actuaries Textbook, Part Three, which was the foremost source of knowledge on the subject of life contingencies for over 35 years. Assuming a high level of mathematical knowledge on the part of the reader, it was aimed chiefly at actuarial students and those with a professional interest in the relationship between statistics and mortality. Highly organised and containing numerous mathematical formulae, this book will remain of value to anyone with an interest in risk calculation and the development of the insurance industry.
Logistic models are widely used in economics and other disciplines and are easily available as part of many statistical software packages. This text for graduates, practitioners and researchers in economics, medicine and statistics, which was originally published in 2003, explains the theory underlying logit analysis and gives a thorough treatment of the technique of estimation. The author has provided many empirical applications as illustrations and worked examples. A large data set - drawn from Dutch car ownership statistics - is provided online for readers to practise the techniques they have learned. Several varieties of logit model have been developed independently in various branches of biology, medicine and other disciplines. This book takes its inspiration from logit analysis as it is practised in economics, but it also pays due attention to developments in these other fields.
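The estimation technique in question is maximum likelihood. As a sketch of the mechanics (not the book's own code, and using simulated data rather than the Dutch car ownership set), here is a logit fit by Newton-Raphson, also known as iteratively reweighted least squares:

```python
import numpy as np

def fit_logit(X, y, iters=25):
    """Maximum-likelihood logit estimation by Newton-Raphson:
    beta <- beta + (X'WX)^{-1} X'(y - p), with W = diag(p(1-p))."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))  # predicted probabilities
        W = p * (1.0 - p)                    # Bernoulli variances
        grad = X.T @ (y - p)
        hess = X.T @ (X * W[:, None])
        beta += np.linalg.solve(hess, grad)
    return beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.standard_normal(500)])
true_beta = np.array([-0.5, 1.2])
y = (rng.uniform(size=500) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)
print(fit_logit(X, y).round(3))  # roughly recovers (-0.5, 1.2)
```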
This book is a collection of essays written in honor of Professor Peter C. B. Phillips of Yale University by some of his former students. The essays analyze a number of important issues in econometrics, all of which Professor Phillips has directly influenced through his seminal scholarly contribution as well as through his remarkable achievements as a teacher. The essays are organized to cover topics in higher-order asymptotics, deficient instruments, nonstationarity, LAD and quantile regression, and nonstationary panels. These topics span both theoretical and applied approaches and are intended for use by professionals and advanced graduate students.
Econophysics applies the methodology of physics to the study of economics. However, whilst physicists have a good understanding of statistical physics, they may be unfamiliar with recent advances in statistics, including Bayesian and predictive methods. Equally, economists with a knowledge of probability often lack a background in statistical physics and agent-based models. Proposing a unified view for a dynamic probabilistic approach, this book is useful for advanced undergraduate and graduate students as well as researchers in physics, economics and finance. The book takes a finitary approach to the subject, discussing the essentials of applied probability, and covering finite Markov chain theory and its applications to real systems. Each chapter ends with a summary, suggestions for further reading, and exercises with solutions at the end of the book.
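As a small taste of the finite Markov chain material, a sketch (with an invented two-state transition matrix) that finds the stationary distribution as the left eigenvector associated with eigenvalue one:

```python
import numpy as np

def stationary_distribution(P):
    """Stationary distribution pi of a finite Markov chain: solve pi P = pi
    with sum(pi) = 1, via the left eigenvector for eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return pi / pi.sum()

# Hypothetical two-state "boom/bust" chain.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
print(stationary_distribution(P).round(3))  # [0.8 0.2]
```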
This 2005 volume contains the papers presented in honor of the lifelong achievements of Thomas J. Rothenberg on the occasion of his retirement. The authors of the chapters include many of the leading econometricians of our day, and the chapters address topics of current research significance in econometric theory. The chapters cover four themes: identification and efficient estimation in econometrics, asymptotic approximations to the distributions of econometric estimators and tests, inference involving potentially nonstationary time series, such as processes that might have a unit autoregressive root, and nonparametric and semiparametric inference. Several of the chapters provide overviews and treatments of basic conceptual issues, while others advance our understanding of the properties of existing econometric procedures and/or propose new ones. Specific topics include identification in nonlinear models, inference with weak instruments, tests for nonstationarity in time series and panel data, generalized empirical likelihood estimation, and the bootstrap.
Originally published in 2005, Weather Derivative Valuation covers all the meteorological, statistical, financial and mathematical issues that arise in the pricing and risk management of weather derivatives. There are chapters on meteorological data and data cleaning, the modelling and pricing of single weather derivatives, the modelling and valuation of portfolios, the use of weather and seasonal forecasts in the pricing of weather derivatives, arbitrage pricing for weather derivatives, risk management, and the modelling of temperature, wind and precipitation. Specific issues covered in detail include the analysis of uncertainty in weather derivative pricing, time-series modelling of daily temperatures, the creation and use of probabilistic meteorological forecasts and the derivation of the weather derivative version of the Black-Scholes equation of mathematical finance. Written by consultants who work within the weather derivative industry, this book is packed with practical information and theoretical insight into the world of weather derivative pricing.
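To make the objects concrete: weather derivatives are typically written on indices such as heating degree days (HDD), and the simplest valuation baseline is "burn analysis", the discounted average of historical payoffs. Here is a minimal sketch with invented numbers; the book treats far more sophisticated approaches, including daily temperature models and forecast-conditional pricing.

```python
import numpy as np

def heating_degree_days(daily_avg_temps, base=18.0):
    """HDD index: sum of max(base - T, 0) over the contract period
    (18 deg C is the common base outside the US; 65 F within it)."""
    t = np.asarray(daily_avg_temps, dtype=float)
    return np.maximum(base - t, 0.0).sum()

def burn_analysis_price(historical_indices, strike, tick, discount=1.0):
    """Simplest actuarial ('burn') price of an HDD call option: the
    discounted average historical payoff, ignoring trends and forecasts."""
    payoffs = tick * np.maximum(np.asarray(historical_indices) - strike, 0.0)
    return discount * payoffs.mean()

print(heating_degree_days([12.0, 10.5, 15.0, 20.0, 8.0]))  # 26.5
# Hypothetical winter HDD indices for ten past years.
hist = [1650, 1720, 1580, 1700, 1810, 1630, 1690, 1760, 1600, 1740]
print(burn_analysis_price(hist, strike=1675, tick=20.0))    # 740.0
```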
Meaningful use of advanced Bayesian methods requires a good understanding of the fundamentals. This engaging book explains the ideas that underpin the construction and analysis of Bayesian models, with particular focus on computational methods and schemes. The unique features of the text are the extensive discussion of available software packages combined with a brief but complete and mathematically rigorous introduction to Bayesian inference. The text introduces Monte Carlo methods, Markov chain Monte Carlo methods, and Bayesian software, with additional material on model validation and comparison, transdimensional MCMC, and conditionally Gaussian models. The inclusion of problems makes the book suitable as a textbook for a first graduate-level course in Bayesian computation with a focus on Monte Carlo methods. The extensive discussion of Bayesian software - R/R-INLA, OpenBUGS, JAGS, STAN, and BayesX - makes it useful also for researchers and graduate students from beyond statistics.
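As a reminder of what the basic MCMC building block looks like, here is a minimal random-walk Metropolis sampler targeting a standard normal; this is a toy sketch, independent of the software packages the book discusses.

```python
import numpy as np

def metropolis(log_post, init, n_draws=5000, step=0.5, seed=0):
    """Random-walk Metropolis: propose x' = x + N(0, step^2) and accept
    with probability min(1, post(x') / post(x))."""
    rng = np.random.default_rng(seed)
    x, lp = init, log_post(init)
    draws = np.empty(n_draws)
    for i in range(n_draws):
        prop = x + step * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # MH acceptance rule
            x, lp = prop, lp_prop
        draws[i] = x
    return draws

# Toy target: standard normal log-density, up to an additive constant.
draws = metropolis(lambda x: -0.5 * x * x, init=0.0)
print(draws[1000:].mean().round(2), draws[1000:].std().round(2))  # ~0.0, ~1.0
```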
This volume in Advances in Econometrics showcases fresh methodological and empirical research on the econometrics of networks. Comprising theoretical, empirical and policy papers, the collection brings together a wide range of perspectives to facilitate a dialogue between academics and practitioners for a better understanding of this groundbreaking field and its role in policy discussions. This edited collection includes thirteen chapters covering topics such as identification of network models, network formation, networks and spatial econometrics, and applications of financial networks. Readers can also learn about network models with different types of interactions, sample selection in social networks, trade networks, stochastic dynamic programming in space, spatial panels, survival and networks, financial contagion, spillover effects, interconnectedness in consumer credit markets and a financial risk meter. Centered on the econometrics of data and models, the book is a valuable resource for graduate students and researchers in the field. The collection is also useful for industry professionals and data scientists due to its focus on theoretical and applied work.
This book describes the classical axiomatic theories of decision under uncertainty, as well as critiques thereof and alternative theories. It focuses on the meaning of probability, discussing some definitions and surveying their scope of applicability. The behavioral definition of subjective probability serves as a way to present the classical theories, culminating in Savage's theorem. The limitations of this result as a definition of probability lead to two directions - first, similar behavioral definitions of more general theories, such as non-additive probabilities and multiple priors, and second, cognitive derivations based on case-based techniques.
Do economics and statistics succeed in explaining human social behaviour? To answer this question, Leland Gerson Neuberg studies some pioneering controlled social experiments. Starting in the late 1960s, economists and statisticians sought to improve social policy formation with random assignment experiments such as those that provided income guarantees in the form of a negative income tax. This book explores anomalies in the conceptual basis of such experiments and in the foundations of statistics and economics more generally. Scientific inquiry always faces certain philosophical problems. Controlled experiments of human social behaviour, however, cannot avoid some methodological difficulties not evident in physical science experiments. Drawing upon several examples, the author argues that methodological anomalies prevent microeconomics and statistics from explaining human social behaviour as coherently as the physical sciences explain nature. He concludes that controlled social experiments are a frequently overrated tool for social policy improvement.
Meta-Regression Analysis in Economics and Business is the first text devoted to the meta-regression analysis (MRA) of economics and business research. The book provides a comprehensive guide to conducting systematic reviews of empirical economics and business research, identifying and explaining the best practices of MRA, and highlighting its problems and pitfalls. These statistical techniques are illustrated using actual data from four published meta-analyses of business and economic research: the effects of unions on productivity, the employment effects of the minimum wage, the value of a statistical life and residential water demand elasticities. While it shares some features with meta-analysis in other disciplines, meta-analysis in economics and business faces its own particular challenges and types of research data. This volume guides new researchers from beginning to end, from the collection of research to the publication of their findings. This book will be of great interest to students and researchers in business, economics, marketing, management, and political science, as well as to policy makers.
The advent of "Big Data" has brought with it a rapid diversification of data sources, requiring analysis that accounts for the fact that these data have often been generated and recorded for different reasons. Data integration involves combining data residing in different sources to enable statistical inference, or to generate new statistical data for purposes that cannot be served by each source on its own. This can yield significant gains for scientific as well as commercial investigations. However, valid analysis of such data should allow for the additional uncertainty due to entity ambiguity, whenever it is not possible to state with certainty that the integrated source is the target population of interest. Analysis of Integrated Data aims to provide a solid theoretical basis for this statistical analysis in three generic settings of entity ambiguity: statistical analysis of linked datasets that may contain linkage errors; datasets created by a data fusion process, where joint statistical information is simulated using the information in marginal data from non-overlapping sources; and estimation of target population size when target units are either partially or erroneously covered in each source. The book covers a range of topics under an overarching perspective of data integration, focuses on the statistical uncertainty and inference issues arising from entity ambiguity, features state-of-the-art methods for the analysis of integrated data, and identifies the important themes that will define future research and teaching in this area. Analysis of Integrated Data is aimed primarily at researchers and methodologists interested in statistical methods for data from multiple sources, with a focus on data analysts in the social sciences and in the public and private sectors.
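For the population-size setting, the classical starting point is the dual-system (capture-recapture) estimator; a minimal sketch with invented counts follows. The book's methods generalize well beyond this simple case, which assumes independent coverage and error-free linkage.

```python
def lincoln_petersen(n1, n2, m):
    """Classic dual-system (capture-recapture) estimate of population size
    from two partial sources: N_hat = n1 * n2 / m, where m is the number
    of units recorded in both sources."""
    return n1 * n2 / m

# Hypothetical: 8,000 units in source A, 6,000 in source B, 4,800 linked.
print(lincoln_petersen(8000, 6000, 4800))  # 10000.0
```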
If you are a manager who receives the results of any data analyst's work to help with your decision-making, this book is for you. Anyone playing a role in the field of analytics can benefit from this book as well. In the two decades the editors of this book spent teaching and consulting in the field of analytics, they noticed a critical shortcoming in the communication abilities of many analytics professionals. Specifically, analysts have difficulty articulating in business terms what their analyses showed and what actionable recommendations were made. When analysts gave presentations, they tended to lapse into the technicalities of mathematical procedures rather than focusing on the strategic and tactical impact and meaning of their work. As analytics has become more mainstream and widespread in organizations, this problem has grown more acute. Data Analytics: Effective Methods for Presenting Results tackles this issue. The editors draw on their experience as presenters, and as audience members who have been lost during presentations. Over the years, they experimented with different ways of presenting analytics work to make a more compelling case to top managers, and they have discovered tried-and-true methods for improving presentations, which they share. The book also presents insights from other analysts and managers who share their own experiences. It is truly a collection of experiences and insights from academics and professionals involved with analytics. The book is not a primer on how to draw the most beautiful charts and graphs, nor about how to perform any specific kind of analysis; rather, it shares the experiences of professionals in various industries about how they present their analytics results effectively. They tell their stories on how to win over audiences. The book spans multiple functional areas within a business and, in some cases, discusses how to adapt presentations to the needs of audiences at different levels of management.
You may like...
Kwantitatiewe statistiese tegnieke
Swanepoel, Vivier, …
Book
Beyond Bitcoin - Decentralised Finance…
Steven Boykey Sidley, Simon Dingle
Paperback
Operations and Supply Chain Management
James Evans, David Collier
Hardcover
Operations And Supply Chain Management
David Collier, James Evans
Hardcover
The Pay Off - How Changing The Way We…
Gottfried Leibbrandt, Natasha De Teran
Paperback
Risky Business - Why Insurance Markets…
Liran Einav, Amy Finkelstein, …
Paperback
The Dynamics of Industrial Collaboration…
Anne Plunket, Colette Voisin, …
Hardcover
R3,366