This textbook provides future data analysts with the tools, methods, and skills needed to answer data-focused, real-life questions; to carry out data analysis; and to visualize and interpret results to support better decisions in business, economics, and public policy. Data wrangling and exploration, regression analysis, machine learning, and causal analysis are comprehensively covered, along with when, why, and how the methods work and how they relate to each other. Case studies, one of the most effective ways to communicate data analysis, play a central role in this textbook: each starts with an industry-relevant question and answers it using real-world data and the tools and methods covered in the textbook. Learning is then consolidated by 360 practice questions and 120 data exercises. Extensive online resources, including raw and cleaned data and code for all analyses in Stata, R, and Python, can be found at www.gabors-data-analysis.com.
The substantially updated third edition of the popular Actuarial Mathematics for Life Contingent Risks is suitable for advanced undergraduate and graduate students of actuarial science, for trainee actuaries preparing for professional actuarial examinations, and for life insurance practitioners who wish to increase or update their technical knowledge. The authors provide intuitive explanations alongside mathematical theory, equipping readers to understand the material in sufficient depth to apply it in real-world situations and to adapt their results in a changing insurance environment. Topics include modern actuarial paradigms, such as multiple state models, cash-flow projection methods and option theory, all of which are required for managing the increasingly complex range of contemporary long-term insurance products. Numerous exam-style questions allow readers to prepare for traditional professional actuarial exams, and extensive use of Excel ensures that readers are ready for modern, Excel-based exams and for the actuarial work environment. The Solutions Manual (ISBN 9781108747615), available for separate purchase, provides detailed solutions to the text's exercises.
Statistical Programming in SAS, Second Edition provides a foundation for programming to implement statistical solutions using SAS, a system that has been used to solve data-analytic problems for more than 40 years. The author includes motivating examples to inspire readers to generate programming solutions. Upper-level undergraduates, beginning graduate students, and professionals who develop programming solutions for data-analytic problems will benefit from this book. The ideal reader has some background in regression modeling and introductory experience with computer programming. Coverage of statistical programming in the second edition includes:
- Getting data into the SAS system, engineering new features, and formatting variables
- Writing readable and well-documented code
- Structuring, implementing, and debugging programs
- Creating solutions to novel problems
- Combining data sources, extracting parts of data sets, and reshaping data sets as needed for other analyses
- Generating general solutions using macros
- Customizing output
- Producing insight-inspiring data visualizations
- Parsing, processing, and analyzing text
- Programming solutions using matrices and connecting SAS with R
The topics covered are part of both the base and certification exams.
Who decides how official statistics are produced? Do politicians have control or are key decisions left to statisticians in independent statistical agencies? Interviews with statisticians in Australia, Canada, Sweden, the UK and the USA were conducted to get insider perspectives on the nature of decision making in government statistical administration. While the popular adage suggests there are 'lies, damned lies and statistics', this research shows that official statistics in liberal democracies are far from mistruths; they are consistently insulated from direct political interference. Yet, a range of subtle pressures and tensions exist that governments and statisticians must manage. The power over statistics is distributed differently in different countries, and this book explains why. Differences in decision-making powers across countries are the result of shifting pressures politicians and statisticians face to be credible, and the different national contexts that provide distinctive institutional settings for the production of government numbers.
Time Series: A First Course with Bootstrap Starter provides an introductory course on time series analysis that satisfies the triptych of (i) mathematical completeness, (ii) computational illustration and implementation, and (iii) conciseness and accessibility to upper-level undergraduate and M.S. students. Basic theoretical results are presented in a mathematically convincing way, and the methods of data analysis are developed through examples and exercises worked in R. A student with a basic course in mathematical statistics will learn both how to analyze time series and how to interpret the results. The book provides the foundation of time series methods, including linear filters and a geometric approach to prediction. The important paradigm of ARMA models is studied in depth, as are frequency domain methods. Entropy and other information-theoretic notions are introduced, with applications to time series modeling. The second half of the book focuses on statistical inference, the fitting of time series models, and computational facets of forecasting. Many time series of interest are nonlinear, in which case classical inference methods can fail, but bootstrap methods may come to the rescue. Distinctive features of the book are the emphasis on geometric notions and the frequency domain, the discussion of entropy maximization, and a thorough treatment of recent computer-intensive methods for time series such as subsampling and the bootstrap. There are more than 600 exercises, half of which involve R coding and/or data analysis. Supplements include a website with 12 key data sets and all R code for the book's examples, as well as solutions to exercises.
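To make the bootstrap idea concrete, here is a minimal Python sketch of a moving-block bootstrap for the mean of a dependent series. This is a generic illustration of the class of resampling methods the book treats, not code from the book; the function name, block length, and simulated AR(1) data are all illustrative choices.

```python
import numpy as np

def block_bootstrap_mean(x, block_len=10, n_boot=1000, seed=0):
    """Moving-block bootstrap for the mean of a time series.

    Resamples overlapping blocks (rather than single points) so that
    short-range dependence in the data is preserved in each replicate.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x)
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    starts_max = n - block_len + 1   # number of available overlapping blocks
    reps = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, starts_max, size=n_blocks)
        sample = np.concatenate([x[s:s + block_len] for s in starts])[:n]
        reps[b] = sample.mean()
    return reps

# Example: 95% bootstrap confidence interval for the mean of an AR(1) series
rng = np.random.default_rng(1)
e = rng.standard_normal(500)
x = np.empty(500)
x[0] = e[0]
for t in range(1, 500):
    x[t] = 0.6 * x[t - 1] + e[t]
reps = block_bootstrap_mean(x)
lo, hi = np.percentile(reps, [2.5, 97.5])
print(f"95% CI for the mean: [{lo:.3f}, {hi:.3f}]")
```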
As one of the first texts to take a behavioral approach to macroeconomic expectations, this book introduces a new way of doing economics. Roetheli uses cognitive psychology in a bottom-up method of modeling macroeconomic expectations. His research is based on laboratory experiments and historical data, which he extends to real-world situations. Pattern extrapolation is shown to be the key to understanding expectations of inflation and income. The quantitative model of expectations is used to analyze the course of inflation and nominal interest rates in a range of countries and historical periods. The model of expected income is applied to the analysis of business cycle phenomena such as the Great Recession in the United States. Data and spreadsheets are provided for readers to do their own computations of macroeconomic expectations. This book offers new perspectives in many areas of macro and financial economics.
Virtually any random process developing chronologically can be viewed as a time series. In economics, closing prices of stocks, the cost of money, the jobless rate, and retail sales are just a few examples among many. Developed from course notes and extensively classroom-tested, Applied Time Series Analysis with R, Second Edition includes examples across a variety of fields, develops theory, and provides an R-based software package to aid in addressing time series problems in a broad spectrum of fields. The material is organized in an optimal format for graduate students in statistics as well as in the natural and social sciences to learn to use and understand the tools of applied time series analysis. Features:
- Gives readers the ability to solve significant real-world problems
- Addresses many types of nonstationary time series and cutting-edge methodologies
- Promotes understanding of the data and associated models rather than treating them as the output of a "black box"
- Provides the R package tswge, available on CRAN, which contains functions and over 100 real and simulated data sets to accompany the book; extensive help on the use of tswge functions is provided in appendices and on an associated website
- Over 150 exercises and extensive support for instructors
The second edition includes additional real-data examples and R-based code that helps students easily analyze data, generate realizations from models, and explore the associated characteristics. It also adds discussion of new advances in the analysis of long-memory data and data with time-varying frequencies (TVF).
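The book's own examples use the authors' tswge package in R; as a language-neutral sketch of what "generating realizations from models" means, the following snippet (illustrative Python, not the book's code) simulates an ARMA(1,1) path and inspects its lag-1 autocorrelation.

```python
import numpy as np

# Simulate a realization from the ARMA(1,1) model
#   x_t = 0.8 * x_{t-1} + e_t + 0.4 * e_{t-1},  e_t ~ N(0, 1)
rng = np.random.default_rng(0)
n, burn = 200, 100
e = rng.standard_normal(n + burn)
x = np.zeros(n + burn)
for t in range(1, n + burn):
    x[t] = 0.8 * x[t - 1] + e[t] + 0.4 * e[t - 1]
x = x[burn:]   # drop burn-in so initialization effects wash out

# Lag-1 sample autocorrelation of the realization
acf1 = np.corrcoef(x[:-1], x[1:])[0, 1]
print(f"lag-1 sample autocorrelation: {acf1:.2f}")
```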
This volume, edited by Jeffrey Racine, Liangjun Su, and Aman Ullah, contains the latest research on nonparametric and semiparametric econometrics and statistics. These data-driven models seek to replace the "classical" parametric models of the past, which were rigid and often linear. Chapters by leading international econometricians and statisticians highlight the interface between econometrics and statistical methods for nonparametric and semiparametric procedures. They provide a balanced view of new developments in the analysis and modeling of applied sciences with cross-section, time series, panel, and spatial data sets. The major topics of the volume include: the methodology of semiparametric models and special regressor methods; inverse, ill-posed, and well-posed problems; different methodologies related to additive models; sieve regression estimators, nonparametric and semiparametric regression models, and the true error of competing approximate models; support vector machines and their modeling of default probability; series estimation of stochastic processes and some of their applications in econometrics; identification, estimation, and specification problems in a class of semilinear time series models; nonparametric and semiparametric techniques applied to nonstationary or near-nonstationary variables; the estimation of a set of regression equations; and a new approach to the analysis of nonparametric models with exogenous treatment assignment.
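As a concrete instance of the nonparametric regression methods discussed in the volume, here is a minimal Nadaraya-Watson kernel estimator. The function, bandwidth value, and simulated data are illustrative assumptions, not drawn from any chapter.

```python
import numpy as np

def nadaraya_watson(x_grid, x, y, bandwidth=0.5):
    """Nadaraya-Watson kernel regression estimate of E[y | x] on a grid.

    Uses a Gaussian kernel; the bandwidth controls the bias-variance
    trade-off (larger values give smoother estimates).
    """
    # Pairwise kernel weights between grid points and observations
    u = (x_grid[:, None] - x[None, :]) / bandwidth
    w = np.exp(-0.5 * u**2)
    return (w @ y) / w.sum(axis=1)

# Example: recover a nonlinear conditional mean without assuming its form
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 400)
y = np.sin(x) + 0.3 * rng.standard_normal(400)
grid = np.linspace(-3, 3, 61)
fit = nadaraya_watson(grid, x, y, bandwidth=0.4)
print(np.round(fit[::10], 2))   # estimated E[y|x] at a few grid points
```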
A unique and comprehensive source of information, this book is the only international publication providing economists, planners, policymakers and business people with worldwide statistics on current performance and trends in the manufacturing sector. The Yearbook is designed to facilitate international comparisons relating to manufacturing activity and industrial development and performance. It provides data which can be used to analyse patterns of growth and related long term trends, structural change and industrial performance in individual industries. Statistics on employment patterns, wages, consumption and gross output and other key indicators are also presented.
Bjørn Lomborg, a former member of Greenpeace, challenges widely held beliefs that the world environmental situation is getting worse and worse in his new book, The Skeptical Environmentalist. Using statistical information from internationally recognized research institutes, Lomborg systematically examines a range of major environmental issues that feature prominently in headline news around the world, including pollution, biodiversity, fear of chemicals, and the greenhouse effect, and documents that the world has actually improved. He supports his arguments with over 2500 footnotes, allowing readers to check his sources. Lomborg criticizes the way many environmental organizations make selective and misleading use of scientific evidence and argues that we are making decisions about the use of our limited resources based on inaccurate or incomplete information. Concluding that there are more reasons for optimism than pessimism, he stresses the need for clear-headed prioritization of resources to tackle real, not imagined, problems. The Skeptical Environmentalist offers readers a non-partisan evaluation that serves as a useful corrective to the more alarmist accounts favored by campaign groups and the media. Bjørn Lomborg is an associate professor of statistics in the Department of Political Science at the University of Aarhus. When he started to investigate the statistics behind the current gloomy view of the environment, he was genuinely surprised. He published four lengthy articles in a leading Danish newspaper, including statistics documenting an ever-improving world, and unleashed one of the biggest post-war debates in Denmark, with more than 400 articles in all the major papers. Since then, Lomborg has been a frequent participant in the European debate on environmentalism on television, radio, and in newspapers.
A comprehensive account of economic size distributions around the world and throughout the years. In the course of the past 100 years, economists and applied statisticians have developed a remarkably diverse variety of income distribution models, yet no single resource convincingly accounts for all of these models, analyzing their strengths and weaknesses, similarities and differences. Statistical Size Distributions in Economics and Actuarial Sciences is the first collection to systematically investigate a wide variety of parametric models that deal with income, wealth, and related notions. Christian Kleiber and Samuel Kotz survey, complement, compare, and unify all of the disparate models of income distribution, highlighting at times a lack of coordination between them that can result in unnecessary duplication. Considering models from eight languages and all continents, the authors discuss the social and economic implications of each as well as distributions of size of loss in actuarial applications. Specific models covered include:
Three appendices provide brief biographies of some of the leading players along with the basic properties of each of the distributions. Actuaries, economists, market researchers, social scientists, and physicists interested in econophysics will find Statistical Size Distributions in Economics and Actuarial Sciences to be a truly one-of-a-kind addition to the professional literature.
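One classical member of the family of parametric size distributions surveyed here is the Pareto distribution, long used to model upper income tails (presented as a standard illustration, not as a summary of the book's treatment). Its distribution function is

$$ F(x) = 1 - \left(\frac{x_0}{x}\right)^{\alpha}, \qquad x \ge x_0 > 0, \; \alpha > 0, $$

where $x_0$ is a scale (minimum-size) parameter and the tail index $\alpha$ governs inequality: the smaller $\alpha$ is, the heavier the upper tail.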
Quantile regression constitutes an ensemble of statistical techniques intended to estimate and draw inferences about conditional quantile functions. Median regression, as introduced in the 18th century by Boscovich and Laplace, is a special case. In contrast to conventional mean regression, which minimizes sums of squared residuals, median regression minimizes sums of absolute residuals; quantile regression simply replaces the symmetric absolute loss by an asymmetric linear loss. Since its introduction in the 1970s by Koenker and Bassett, quantile regression has been gradually extended to a wide variety of data-analytic settings, including time series, survival analysis, and longitudinal data. By focusing attention on local slices of the conditional distribution of response variables, it is capable of providing a more complete, more nuanced view of heterogeneous covariate effects. Applications of quantile regression can now be found throughout the sciences, including astrophysics, chemistry, ecology, economics, finance, genomics, medicine, and meteorology. Software for quantile regression is now widely available in all the major statistical computing environments. The objective of this volume is to provide a comprehensive review of recent developments in quantile regression methodology, illustrating its applicability in a wide range of scientific settings. The intended audience is researchers and graduate students across a diverse set of disciplines.
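In symbols, the τ-th regression quantile of Koenker and Bassett solves

$$ \hat{\beta}(\tau) = \arg\min_{\beta} \sum_{i=1}^{n} \rho_\tau\!\left(y_i - x_i^{\top}\beta\right), \qquad \rho_\tau(u) = u\left(\tau - \mathbf{1}\{u < 0\}\right), $$

so that τ = 0.5 recovers median regression (symmetric absolute loss), while other values of τ tilt the loss asymmetrically and trace out the conditional quantile function.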
'A manual for the 21st-century citizen... accessible, refreshingly critical, relevant and urgent' - Financial Times 'Fascinating and deeply disturbing' - Yuval Noah Harari, Guardian Books of the Year In this New York Times bestseller, Cathy O'Neil, one of the first champions of algorithmic accountability, sounds an alarm on the mathematical models that pervade modern life -- and threaten to rip apart our social fabric. We live in the age of the algorithm. Increasingly, the decisions that affect our lives - where we go to school, whether we get a loan, how much we pay for insurance - are being made not by humans, but by mathematical models. In theory, this should lead to greater fairness: everyone is judged according to the same rules, and bias is eliminated. And yet, as Cathy O'Neil reveals in this urgent and necessary book, the opposite is true. The models being used today are opaque, unregulated, and incontestable, even when they're wrong. Most troubling, they reinforce discrimination. Tracing the arc of a person's life, O'Neil exposes the black box models that shape our future, both as individuals and as a society. These "weapons of math destruction" score teachers and students, sort CVs, grant or deny loans, evaluate workers, target voters, and monitor our health. O'Neil calls on modellers to take more responsibility for their algorithms and on policy makers to regulate their use. But in the end, it's up to us to become more savvy about the models that govern our lives. This important book empowers us to ask the tough questions, uncover the truth, and demand change.
A comprehensive and up-to-date introduction to the mathematics that all economics students need to know. Probability theory is the quantitative language used to handle uncertainty and is the foundation of modern statistics. Probability and Statistics for Economists provides graduate and PhD students with an essential introduction to mathematical probability and statistical theory, which are the basis of the methods used in econometrics. This incisive textbook teaches fundamental concepts, emphasizes modern, real-world applications, and gives students an intuitive understanding of the mathematics that every economist needs to know.
- Covers probability and statistics with mathematical rigor while emphasizing intuitive explanations that are accessible to economics students of all backgrounds
- Discusses random variables, parametric and multivariate distributions, sampling, the law of large numbers, central limit theory, maximum likelihood estimation, numerical optimization, hypothesis testing, and more
- Features hundreds of exercises that enable students to learn by doing
- Includes an in-depth appendix summarizing important mathematical results as well as a wealth of real-world examples
- Can serve as a core textbook for a first-semester PhD course in econometrics and as a companion book to Bruce E. Hansen's Econometrics
- Also an invaluable reference for researchers and practitioners
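For readers who like to see the limit theorems at work, here is a small simulation (illustrative, not from the book) in which standardized sample means of exponential draws approach the standard normal as n grows: the law of large numbers and central limit theorem in action.

```python
import numpy as np

# Standardized means of exponential(1) draws approach N(0, 1) as n grows.
rng = np.random.default_rng(0)
for n in (5, 50, 500):
    means = rng.exponential(scale=1.0, size=(10_000, n)).mean(axis=1)
    z = (means - 1.0) * np.sqrt(n)   # exponential(1) has mean 1, variance 1
    print(n, round(z.mean(), 3), round(z.std(), 3))   # approx 0 and 1
```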
New statistical methods and future directions of research in time series A Course in Time Series Analysis demonstrates how to build time series models for univariate and multivariate time series data. It brings together material previously available only in the professional literature and presents a unified view of the most advanced procedures available for time series model building. The authors begin with basic concepts in univariate time series, providing an up-to-date presentation of ARIMA models, including the Kalman filter, outlier analysis, automatic methods for building ARIMA models, and signal extraction. They then move on to advanced topics, focusing on heteroscedastic models, nonlinear time series models, Bayesian time series analysis, nonparametric time series analysis, and neural networks. Multivariate time series coverage includes presentations on vector ARMA models, cointegration, and multivariate linear systems. Special features include:
Requiring no previous knowledge of the subject, A Course in Time Series Analysis is an important reference and a highly useful resource for researchers and practitioners in statistics, economics, business, engineering, and environmental analysis.
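As a taste of the state-space material behind the Kalman filter mentioned above, here is a minimal Python sketch for the local-level model. The model choice, noise variances, and function name are illustrative assumptions, not the authors' presentation.

```python
import numpy as np

def local_level_kalman(y, q=0.1, r=1.0):
    """Kalman filter for the local-level model:
        state: mu_t = mu_{t-1} + w_t,  w_t ~ N(0, q)
        obs:   y_t  = mu_t + v_t,      v_t ~ N(0, r)
    Returns the filtered state means."""
    mu, p = y[0], 1.0          # initialize at the first observation
    out = np.empty(len(y))
    for t, obs in enumerate(y):
        p = p + q              # predict: state variance grows
        k = p / (p + r)        # Kalman gain
        mu = mu + k * (obs - mu)   # update toward the new observation
        p = (1 - k) * p
        out[t] = mu
    return out

# Example: filter a noisy random walk
rng = np.random.default_rng(0)
level = np.cumsum(0.3 * rng.standard_normal(200))
y = level + rng.standard_normal(200)
print(local_level_kalman(y)[:5].round(2))
```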
Recent years have witnessed an explosion in the volume and variety of data collected in all scientific disciplines and industrial settings. Such massive data sets present a number of challenges to researchers in statistics and machine learning. This book provides a self-contained introduction to the area of high-dimensional statistics, aimed at the first-year graduate level. It includes chapters that are focused on core methodology and theory - including tail bounds, concentration inequalities, uniform laws and empirical processes, and random matrices - as well as chapters devoted to in-depth exploration of particular model classes - including sparse linear models, matrix models with rank constraints, graphical models, and various types of non-parametric models. With hundreds of worked examples and exercises, this text is intended both for courses and for self-study by graduate students and researchers in statistics, machine learning, and related fields who must understand, apply, and adapt modern statistical methods suited to large-scale data.
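A tiny example of the sparse-estimation machinery such books develop: soft thresholding, the proximal operator of the l1 penalty that sits inside Lasso-type estimators. The code is an illustrative sketch under simulated data, not drawn from the text.

```python
import numpy as np

def soft_threshold(v, lam):
    """Soft-thresholding operator, the proximal map of the l1 penalty;
    it is the core update inside coordinate-descent Lasso solvers."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

# Example: denoise a sparse signal observed with Gaussian noise
rng = np.random.default_rng(0)
theta = np.zeros(50)
theta[:5] = 3.0                                   # 5 active coordinates
y = theta + 0.5 * rng.standard_normal(50)
print(np.nonzero(soft_threshold(y, lam=1.0))[0])  # mostly the true support
```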
If you know a little bit about financial mathematics but don't yet know a lot about programming, then C++ for Financial Mathematics is for you. C++ is an essential skill for many jobs in quantitative finance, but learning it can be a daunting prospect. This book gathers together everything you need to know to price derivatives in C++ without unnecessary complexities or technicalities. It leads the reader step-by-step from programming novice to writing a sophisticated and flexible financial mathematics library. At every step, each new idea is motivated and illustrated with concrete financial examples. As employers understand, there is more to programming than knowing a computer language. As well as covering the core language features of C++, this book teaches the skills needed to write truly high quality software. These include topics such as unit tests, debugging, design patterns and data structures. The book teaches everything you need to know to solve realistic financial problems in C++. It can be used for self-study or as a textbook for an advanced undergraduate or master's level course.
Computational finance is increasingly important in the financial industry, as a necessary instrument for applying theoretical models to real-world challenges. Indeed, many models used in practice involve complex mathematical problems for which an exact or closed-form solution is not available. Consequently, we need to rely on computational techniques and specific numerical algorithms. This book combines theoretical concepts with practical implementation. Furthermore, the numerical solution of models is exploited both to enhance the understanding of some mathematical and statistical notions and to build sound programming skills in MATLAB, skills that carry over to several other programming languages. The material assumes the reader has a relatively limited knowledge of mathematics, probability, and statistics. Hence, the book contains a short description of the fundamental tools needed to address the two main fields of quantitative finance: portfolio selection and derivatives pricing. Both fields are developed here, with a particular emphasis on portfolio selection, where the author includes an overview of recent approaches. The book gradually takes the reader from a basic to a medium level of expertise, using examples and exercises to simplify the understanding of complex models in finance and giving the reader the ability to place financial models in a computational setting. The book is ideal for courses focusing on quantitative finance, asset management, mathematical methods for economics and finance, investment banking, and corporate finance.
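As a flavor of the approach (in Python rather than the book's MATLAB), the following sketch prices a European call by Monte Carlo and compares the result with the Black-Scholes closed form; all parameter values are illustrative. It shows the book's central point: simulation steps in exactly where closed forms run out, and a closed form, when available, validates the numerics.

```python
import numpy as np
from math import exp, log, sqrt
from statistics import NormalDist

# Monte Carlo price of a European call under Black-Scholes dynamics,
# checked against the closed-form price.
s0, k, r, sigma, t = 100.0, 105.0, 0.03, 0.2, 1.0

rng = np.random.default_rng(0)
z = rng.standard_normal(1_000_000)
s_t = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * sqrt(t) * z)
mc_price = exp(-r * t) * np.maximum(s_t - k, 0.0).mean()

d1 = (log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
d2 = d1 - sigma * sqrt(t)
cdf = NormalDist().cdf
bs_price = s0 * cdf(d1) - k * exp(-r * t) * cdf(d2)

print(f"Monte Carlo: {mc_price:.3f}  closed form: {bs_price:.3f}")
```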
Modern economies are full of uncertainty and risk. Economics studies resource allocation in an uncertain market environment, and as a generally applicable quantitative tool for analyzing uncertain events, probability and statistics play an important role in economic research. Econometrics is the statistical analysis of economic and financial data. Over the past four decades or so, economics has witnessed a so-called 'empirical revolution' in its research paradigm, with econometrics serving as the main methodology of empirical studies; it has become an indispensable part of training in modern economics, business, and management. This book develops a coherent set of econometric theory, methods, and tools for economic models. It is written as a textbook for graduate students in economics, business, management, statistics, applied mathematics, and related fields. It can also be used as a reference book on econometric theory by scholars interested in both theoretical and applied econometrics.
In recent years, interest in rigorous impact evaluation has grown tremendously in policy-making, economics, public health, the social sciences, and international relations. Evidence-based policy-making has become a recurring theme in public policy, alongside greater demands for accountability in public policies and public spending, and requests for independent and rigorous impact evaluations to provide policy evidence. Froelich and Sperlich offer a comprehensive and up-to-date approach to quantitative impact evaluation analysis, also known as causal inference or treatment effect analysis, illustrating the main approaches to identification and estimation: experimental studies, randomization inference and randomized controlled trials (RCTs), matching, propensity-score matching and weighting, instrumental-variable estimation, difference-in-differences, regression discontinuity designs, quantile treatment effects, and the evaluation of dynamic treatments. The book is designed for graduate courses in economics but can also serve as a manual for professionals in research institutes, governments, and international organizations who evaluate the impact of a wide range of public policies in health, environment, transport, and economic development.
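Among the designs listed, difference-in-differences is the simplest to show in code. The sketch below (illustrative Python, not from the book) computes the canonical 2x2 estimator on simulated data with a known treatment effect.

```python
import numpy as np

def did_estimate(y, treated, post):
    """Canonical 2x2 difference-in-differences estimate:
    (treated post - treated pre) - (control post - control pre)."""
    y, treated, post = map(np.asarray, (y, treated, post))
    cell = lambda g, p: y[(treated == g) & (post == p)].mean()
    return (cell(1, 1) - cell(1, 0)) - (cell(0, 1) - cell(0, 0))

# Example: simulated data with group and time effects and a true
# treatment effect of 2.0
rng = np.random.default_rng(0)
n = 4000
treated = rng.integers(0, 2, n)
post = rng.integers(0, 2, n)
y = (1.0 + 0.5 * treated + 0.8 * post   # group and time effects
     + 2.0 * treated * post             # treatment effect
     + rng.standard_normal(n))
print(round(did_estimate(y, treated, post), 2))   # approx 2.0
```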
An engaging and accessible examination of what ails insurance markets—and what to do about it—by three leading economists. Why is dental insurance so crummy? Why is pet insurance so expensive? Why does your auto insurer ask for your credit score? The answer to these questions lies in understanding how insurance works. Unlike the market for other goods and services—for instance, a grocer who doesn’t care who buys the store’s broccoli or carrots—insurance providers are more careful in choosing their customers, because some are more expensive than others. Unraveling the mysteries of insurance markets, Liran Einav, Amy Finkelstein, and Ray Fisman explore such issues as why insurers want to know so much about us and whether we should let them obtain this information; why insurance entrepreneurs often fail (and some tricks that may help them succeed); and whether we’d be better off with government-mandated health insurance instead of letting businesses, customers, and markets decide who gets coverage and at what price. With insurance at the center of divisive debates about privacy, equity, and the appropriate role of government, this book offers clear explanations for some of the critical business and policy issues you’ve often wondered about, as well as for others you haven’t yet considered.
'A manual for the 21st-century citizen... accessible, refreshingly critical, relevant and urgent' - Financial Times 'Fascinating . . . timely' Daily Mail 'Refreshingly clear and engaging' Tim Harford 'Delightful . . . full of unique insights' Prof Sir David Spiegelhalter There's no getting away from statistics. We encounter them every day. We are all users of statistics whether we like it or not. Do missed appointments really cost the NHS £1bn per year? What's the difference between the mean gender pay gap and the median gender pay gap? How can we work out if a claim that we use 42 billion single-use plastic straws per year in the UK is accurate? What did the Vote Leave campaign's £350m bus really mean? How can we tell if the headline 'Public pensions cost you £4,000 a year' is correct? Does snow really cost the UK economy £1bn per day? But how do we distinguish statistical fact from fiction? What can we do to decide whether a number, claim or news story is accurate? Without an understanding of data, we cannot truly understand what is going on in the world around us. Written by Anthony Reuben, the BBC's first head of statistics, Statistical is an accessible and empowering guide to challenging the numbers all around us.
This book offers up-to-date, comprehensive coverage of stochastic dominance and its related concepts in a unified framework. A method for ordering probability distributions, stochastic dominance has grown in importance recently as a way to make comparisons in welfare economics, inequality studies, health economics, insurance, wages, and trade patterns. Whang pays particular attention to inferential methods and applications, citing and summarizing various empirical studies in order to relate the econometric methods to real applications, and using computer code to enable the practical implementation of these methods. Intuitive explanations throughout the book ensure that readers understand the basic technical tools of stochastic dominance.
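The core ordering is easy to state. A distribution F first-order stochastically dominates G when

$$ F(x) \le G(x) \quad \text{for all } x, $$

with strict inequality for some x. Equivalently, $\mathbb{E}_F[u(X)] \ge \mathbb{E}_G[u(X)]$ for every nondecreasing utility function $u$, so all expected-utility maximizers who prefer more to less rank F above G; higher-order dominance relations refine this by restricting the class of utilities further.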
You may like...
The Leading Indicators - A Short History… by Zachary Karabell (Paperback)
Operations And Supply Chain Management by David Collier, James Evans (Hardcover)
Kwantitatiewe statistiese tegnieke by Swanepoel, Vivier, … (Book)
Statistics for Business and Economics… by Paul Newbold, William Carlson, … (Paperback, R2,397)
Quantitative statistical techniques by Swanepoel, Vivier, … (Paperback)
Business Statistics, Global Edition by Norean Sharpe, Richard De Veaux, … (Paperback, R2,297)