This book allows those with a basic knowledge of econometrics to learn the main nonparametric and semiparametric techniques used in econometric modelling, and how to apply them correctly. It looks at kernel density estimation, kernel regression, splines, wavelets, and mixture models, and provides useful empirical examples throughout. Through empirical applications, several economic topics are addressed, including income distribution, wage equations, economic convergence, the Phillips curve, interest rate dynamics, returns volatility, and housing prices. A helpful appendix also explains how to implement the methods using R. This useful book will appeal to practitioners and researchers who need an accessible introduction to nonparametric and semiparametric econometrics. The practical approach provides an overview of the main techniques without excessive focus on mathematical formulas. It also serves as an accompanying textbook for a basic course, typically at undergraduate or graduate level.
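To give a flavour of the kind of technique the book covers, here is a minimal sketch of kernel density estimation. The book itself works in R; the Python/scipy version below, with simulated log-normal income data, is only an illustrative assumption, not an example from the book:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Simulated log-normal incomes, a stand-in for real income-distribution data
rng = np.random.default_rng(0)
incomes = rng.lognormal(mean=10, sigma=0.5, size=1000)

# Gaussian kernel density estimate; scipy selects the bandwidth by Scott's rule
kde = gaussian_kde(incomes)

# Evaluate the estimated density on a grid of income values
grid = np.linspace(incomes.min(), incomes.max(), 200)
density = kde(grid)
```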
Originally published in 1985. Mathematical methods and models to facilitate the understanding of the processes of economic dynamics and prediction were refined considerably in the period before this book was written. The field had grown, and many of the techniques involved had become extremely complicated. Areas of particular interest include optimal control, non-linear models, game-theoretic approaches, demand analysis and time-series forecasting. This book presents a critical appraisal of those developments and identifies potentially productive new directions for research. It synthesises work from mathematics, statistics and economics and includes a thorough analysis of the relationship between system understanding and predictability.
Bootstrapping is a conceptually simple statistical technique to increase the quality of estimates, conduct robustness checks and compute standard errors for virtually any statistic. This book provides an intelligible and compact introduction for students, scientists and practitioners. It not only gives a clear explanation of the underlying concepts but also demonstrates the application of bootstrapping using Python and Stata.
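Since the blurb describes the core idea, a minimal nonparametric bootstrap in Python may help fix it: resample the data with replacement and recompute the statistic of interest. The exponential toy data and the choice of the median as the statistic are illustrative assumptions, not taken from the book:

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.exponential(scale=2.0, size=500)  # placeholder sample

# Nonparametric bootstrap: resample with replacement, recompute the statistic
n_boot = 2000
medians = np.empty(n_boot)
for b in range(n_boot):
    resample = rng.choice(data, size=data.size, replace=True)
    medians[b] = np.median(resample)

se = medians.std(ddof=1)                   # bootstrap standard error of the median
ci = np.percentile(medians, [2.5, 97.5])   # simple percentile 95% interval
print(f"median = {np.median(data):.3f}, SE = {se:.3f}, 95% CI = {ci}")
```

The same loop works for virtually any statistic, which is exactly the appeal the blurb points to: swap `np.median` for any estimator and the recipe is unchanged.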
Plenty of literature reviews and applications of various tests are provided to cover all aspects of research methodology. Various examination questions have been included. Strong pedagogy runs throughout, with regular features such as Concept Checks, Text Overviews, Key Terms, Review Questions, Exercises and References. Though the book is primarily addressed to students, it will be equally useful to researchers and entrepreneurs. More than other research textbooks, this book addresses students' need to comprehend all aspects of the research process, including the research process itself, clarification of the research problem, ethical issues, survey research, and research report preparation and presentation.
A classic text for accuracy and statistical precision. Statistics for Business and Economics enables readers to conduct serious analysis of applied problems rather than running simple "canned" applications. This text is also at a mathematically higher level than most business statistics texts and provides readers with the knowledge they need to become stronger analysts for future managerial positions. The eighth edition of this book has been revised and updated to provide readers with improved problem contexts for learning how statistical methods can improve their analysis and understanding of business and economics.
Through use of practical examples and a plainspoken narrative style that minimises the use of maths, this book demystifies data concepts, sources, and methods for public service professionals interested in understanding economic and social issues at the regional level. By blending elements of a general interest book, a textbook, and a reference book, it equips civic leaders, public administrators, urban planners, nonprofit executives, philanthropists, journalists, and graduate students in various public affairs disciplines to wield social and economic data for the benefit of their communities. While numerous books about quantitative research exist, few focus specifically on the public sector. Running the Numbers, in contrast, explores a wide array of topics of regional importance, including economic output, demographics, business structure, labour markets, and income, among many others. To that end, the book stresses practical applications, minimises the use of maths, and employs extended, chapter-length examples that demonstrate how analytical tools can illuminate the social and economic workings of actual American regions.
In his new book, The Skeptical Environmentalist, Bjørn Lomborg, a former member of Greenpeace, challenges widely held beliefs that the world environmental situation is getting worse and worse. Using statistical information from internationally recognized research institutes, Lomborg systematically examines a range of major environmental issues that feature prominently in headline news around the world, including pollution, biodiversity, fear of chemicals, and the greenhouse effect, and documents that the world has actually improved. He supports his arguments with over 2500 footnotes, allowing readers to check his sources. Lomborg criticizes the way many environmental organizations make selective and misleading use of scientific evidence and argues that we are making decisions about the use of our limited resources based on inaccurate or incomplete information. Concluding that there are more reasons for optimism than pessimism, he stresses the need for clear-headed prioritization of resources to tackle real, not imagined, problems. The Skeptical Environmentalist offers readers a non-partisan evaluation that serves as a useful corrective to the more alarmist accounts favored by campaign groups and the media. Bjørn Lomborg is an associate professor of statistics in the Department of Political Science at the University of Aarhus. When he started to investigate the statistics behind the current gloomy view of the environment, he was genuinely surprised. He published four lengthy articles in a leading Danish newspaper, including statistics documenting an ever-improving world, and unleashed the biggest post-war debate, with more than 400 articles appearing in all the major papers. Since then, Lomborg has been a frequent participant in the European debate on environmentalism on television, radio, and in newspapers.
This new edition updates Durbin & Koopman's important text on the state space approach to time series analysis. The distinguishing feature of state space time series models is that observations are regarded as made up of distinct components such as trend, seasonal, regression elements and disturbance terms, each of which is modelled separately. The techniques that emerge from this approach are very flexible and are capable of handling a much wider range of problems than the main analytical system currently in use for time series analysis, the Box-Jenkins ARIMA system. Additions to this second edition include the filtering of nonlinear and non-Gaussian series. Part I of the book obtains the mean and variance of the state, of a variable intended to measure the effect of an intervention, and of regression coefficients, in terms of the observations. Part II extends the treatment to nonlinear and non-normal models. For these, analytical solutions are not available, so the methods are based on simulation.
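As a concrete illustration of the filtering that state space methods rest on, here is a minimal Kalman filter for the simplest such model, the local level model. This is a generic textbook sketch in Python, not code from the book, and the large-variance initialisation standing in for a diffuse prior is an assumption:

```python
import numpy as np

def local_level_filter(y, sigma2_eps, sigma2_eta, a0=0.0, p0=1e7):
    """Kalman filter for the local level model:
       y_t = alpha_t + eps_t,  alpha_{t+1} = alpha_t + eta_t."""
    n = len(y)
    a = np.empty(n)              # filtered state means
    p = np.empty(n)              # filtered state variances
    a_pred, p_pred = a0, p0      # large p0 approximates a diffuse prior
    for t in range(n):
        f = p_pred + sigma2_eps              # prediction-error variance
        k = p_pred / f                       # Kalman gain
        v = y[t] - a_pred                    # one-step prediction error
        a[t] = a_pred + k * v                # updated state mean
        p[t] = p_pred * (1 - k)              # updated state variance
        a_pred, p_pred = a[t], p[t] + sigma2_eta  # predict next state
    return a, p

# Example: filter a simulated local level series
rng = np.random.default_rng(2)
alpha = np.cumsum(rng.normal(0.0, 1.0, 100))   # random-walk level
y = alpha + rng.normal(0.0, 2.0, 100)          # noisy observations
a, p = local_level_filter(y, sigma2_eps=4.0, sigma2_eta=1.0)
```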
Contains information for using R software with the examples in the textbook Sampling: Design and Analysis, 3rd edition by Sharon L. Lohr.
Virtually any random process developing chronologically can be viewed as a time series. In economics, closing prices of stocks, the cost of money, the jobless rate, and retail sales are just a few examples of many. Developed from course notes and extensively classroom-tested, Applied Time Series Analysis with R, Second Edition includes examples across a variety of fields, develops theory, and provides an R-based software package to aid in addressing time series problems in a broad spectrum of fields. The material is organized in an optimal format for graduate students in statistics as well as in the natural and social sciences to learn to use and understand the tools of applied time series analysis. Features: * Gives readers the ability to actually solve significant real-world problems * Addresses many types of nonstationary time series and cutting-edge methodologies * Promotes understanding of the data and associated models rather than viewing them as the output of a "black box" * Provides the R package tswge, available on CRAN, which contains functions and over 100 real and simulated data sets to accompany the book; extensive help regarding the use of tswge functions is provided in appendices and on an associated website * Includes over 150 exercises and extensive support for instructors. The second edition includes additional real-data examples and uses R-based code that helps students easily analyze data, generate realizations from models, and explore the associated characteristics. It also adds discussion of new advances in the analysis of long-memory data and data with time-varying frequencies (TVF).
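The workflow the book teaches (fit a model to a series, then generate forecasts) can be sketched in a few lines. The book works in R with tswge; the Python/statsmodels version below, with a simulated AR(2) series, is only an illustrative stand-in:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Simulate a stationary AR(2) realization as placeholder data
rng = np.random.default_rng(1)
n = 300
y = np.zeros(n)
for t in range(2, n):
    y[t] = 1.5 * y[t - 1] - 0.75 * y[t - 2] + rng.normal()

# Fit an AR(2) model and forecast ten steps ahead
res = ARIMA(y, order=(2, 0, 0)).fit()
print(res.params)            # estimated constant, AR coefficients, noise variance
print(res.forecast(steps=10))
```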
Bernan Press proudly presents the 15th edition of Employment, Hours, and Earnings: States and Areas, 2020. A special addition to the Bernan Press Handbook of U.S. Labor Statistics: Employment, Earnings, Prices, Productivity, and Other Labor Data, this reference is a consolidated wealth of employment information, providing monthly and annual data on hours worked and earnings made by industry, including figures and summary information spanning several years. These data are presented for states and metropolitan statistical areas. This edition features: * Nearly 300 tables with data on employment for each state, the District of Columbia, and the nation's seventy-five largest metropolitan statistical areas (MSAs) * Detailed, non-seasonally adjusted industry data organized by month and year * Hours and earnings data for each state, by industry * An introduction for each state and the District of Columbia that highlights salient data and noteworthy trends, including changes in population and the civilian labor force, industry increases and declines, employment and unemployment statistics, and a chart detailing employment percentages by industry * Rankings of the seventy-five largest MSAs, including census population estimates, unemployment rates, and the percent change in total nonfarm employment * Concise technical notes that explain pertinent facts about the data, including sources, definitions, and significant changes, and provide references for further guidance * A comprehensive appendix that details the geographical components of the seventy-five largest MSAs. The employment, hours, and earnings data in this publication provide a detailed and timely picture of the fifty states, the District of Columbia, and the nation's seventy-five largest MSAs. These data can be used to analyze key factors affecting state and local economies and to compare national cyclical trends to local-level economic activity. This reference is an excellent source of information for analysts in both the public and private sectors. Readers who are involved in public policy can use these data to determine the health of the economy, to clearly identify which sectors are growing and which are declining, and to determine the need for federal assistance. State and local jurisdictions can use the data to determine the need for services, including training and unemployment assistance, and for planning and budgetary purposes. In addition, the data can be used to forecast tax revenue. In private industry, the data can be used by business owners to compare their business to the economy as a whole, to identify suitable areas when making decisions about plant locations and wholesale and retail trade outlets, and for locating a particular sector base.
Introduction to Statistics with SPSS does not require any prior knowledge of statistics. The book can be used profitably during, after, or in parallel with a course on statistics. A wide range of terms and techniques is covered, including those involved in simple and multiple regression analyses. After studying this book, the student will be able to enter data from a simple research project into a computer, provide an adequate analysis of these data, and present a report on the subject.
The first book for a popular audience on the transformative, democratising technology of 'DeFi'. After over a decade of Bitcoin, which has now moved beyond lore and hype into an increasingly robust star in the firmament of global assets, a new and more important question has arisen. What happens beyond Bitcoin? The answer is decentralised finance - 'DeFi'. Tech and finance experts Steven Boykey Sidley and Simon Dingle argue that DeFi - which enables all manner of financial transactions to take place directly, person to person, without the involvement of financial institutions - will redesign the cogs and wheels in the engines of trust, and make the remarkable rise of Bitcoin look quaint by comparison. It will disrupt and displace fine and respectable companies, if not entire industries. Sidley and Dingle explain how DeFi works, introduce the organisations and individuals that comprise the new industry, and identify the likely winners and losers in the coming revolution.
This book presents strategies for analyzing qualitative and mixed methods data with MAXQDA software, and provides guidance on implementing a variety of research methods and approaches, e.g. grounded theory, discourse analysis and qualitative content analysis, using the software. In addition, it explains specific topics, such as transcription, building a coding frame, visualization, analysis of videos, concept maps, group comparisons and the creation of literature reviews. The book is intended for masters and PhD students as well as researchers and practitioners dealing with qualitative data in various disciplines, including the educational and social sciences, psychology, public health, business or economics.
Despite the unobserved components model (UCM) having many advantages over more popular forecasting techniques based on regression analysis, exponential smoothing, and ARIMA, the UCM is not well known among practitioners outside the academic community. Time Series Modelling with Unobserved Components rectifies this deficiency by giving a practical overview of the UCM approach, covering some theoretical details, several applications, and the software for implementing UCMs. The book's first part discusses introductory time series and prediction theory. Unlike most other books on time series, this text includes a chapter on prediction at the beginning because the problem of predicting is not limited to the field of time series analysis. The second part introduces the UCM, the state space form, and related algorithms. It also provides practical modeling strategies to build and select the UCM that best fits the needs of time series analysts. The third part presents real-world applications, with a chapter focusing on business cycle analysis and the construction of band-pass filters using UCMs. The book also reviews software packages that offer ready-to-use procedures for UCMs as well as systems popular among statisticians and econometricians that allow general estimation of models in state space form. This book demonstrates the numerous benefits of using UCMs to model time series data. UCMs are simple to specify, their results are easy to visualize and communicate to non-specialists, and their forecasting performance is competitive. Moreover, various types of outliers can easily be identified, missing values are effortlessly managed, and working contemporaneously with time series observed at different frequencies poses no problem.
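Reflecting the claim that UCMs are simple to specify, here is a minimal example using the UnobservedComponents class from Python's statsmodels. The book itself surveys several software packages; the simulated series and the local-linear-trend specification here are assumptions chosen purely for illustration:

```python
import numpy as np
from statsmodels.tsa.statespace.structural import UnobservedComponents

# Simulated trend-plus-noise series standing in for real data
rng = np.random.default_rng(7)
n = 200
trend = np.cumsum(rng.normal(0.05, 0.1, n))
y = trend + rng.normal(0.0, 0.5, n)

# A basic UCM: local linear trend plus irregular component
model = UnobservedComponents(y, level='local linear trend')
res = model.fit(disp=False)
print(res.summary())
forecast = res.forecast(steps=12)  # forecasts come straight from the state space form
```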
This book is an introduction to regression analysis, focusing on the practicalities of doing regression analysis on real-life data. Unlike other textbooks on regression, this book is based on the idea that you do not necessarily need to know much about statistics and mathematics to get a firm grip on regression and perform it to perfection. This non-technical point of departure is complemented by practical examples of real-life data analysis using statistics software such as Stata, R and SPSS. Parts 1 and 2 of the book cover the basics, such as simple linear regression, multiple linear regression, how to interpret the output from statistics programs, significance testing and the key regression assumptions. Part 3 deals with how to practically handle violations of the classical linear regression assumptions, regression modeling for categorical y-variables and instrumental variable (IV) regression. Part 4 puts the various purposes of, or motivations for, regression into the wider context of writing a scholarly report and points to some extensions to related statistical techniques. This book is written primarily for those who need to do regression analysis in practice, and not only to understand how this method works in theory. The book's accessible approach is recommended for students from across the social sciences.
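For readers who want to see what "the output from statistics programs" looks like in practice, here is a minimal multiple linear regression in Python with statsmodels. The fabricated wage-equation data and variable names are illustrative assumptions, not an example from the book:

```python
import numpy as np
import statsmodels.api as sm

# Fabricated data standing in for a real-life dataset
rng = np.random.default_rng(3)
n = 500
education = rng.normal(12, 2, n)
experience = rng.normal(10, 5, n)
log_wage = 1.0 + 0.08 * education + 0.02 * experience + rng.normal(0, 0.3, n)

# Multiple linear regression with an intercept
X = sm.add_constant(np.column_stack([education, experience]))
res = sm.OLS(log_wage, X).fit()
print(res.summary())  # coefficients, t statistics, R-squared: the output such books teach you to read
```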
This well-balanced introduction to enterprise risk management integrates quantitative and qualitative approaches and motivates key mathematical and statistical methods with abundant real-world cases - both successes and failures. Worked examples and end-of-chapter exercises support readers in consolidating what they learn. The mathematical level, which is suitable for graduate and senior undergraduate students in quantitative programs, is pitched to give readers a solid understanding of the concepts and principles involved, without diving too deeply into more complex theory. To reveal the connections between different topics, and their relevance to the real world, the presentation has a coherent narrative flow, from risk governance, through risk identification, risk modelling, and risk mitigation, capped off with holistic topics - regulation, behavioural biases, and crisis management - that influence the whole structure of ERM. The result is a text and reference that is ideal for graduate and senior undergraduate students, risk managers in industry, and anyone preparing for ERM actuarial exams.
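As a small taste of the quantitative side of risk modelling that the book motivates, the sketch below computes a historical Value-at-Risk and expected shortfall in Python; the simulated returns and the 99% confidence level are assumptions chosen for illustration, not methods taken from the text:

```python
import numpy as np

# Hypothetical daily portfolio returns (placeholder data)
rng = np.random.default_rng(11)
returns = rng.normal(0.0005, 0.01, 1000)

# Historical 99% one-day Value-at-Risk: the loss exceeded on 1% of days
var_99 = -np.percentile(returns, 1)

# Expected shortfall: average loss on days at or beyond the VaR threshold
es_99 = -returns[returns <= -var_99].mean()
print(f"99% VaR: {var_99:.4f}, 99% ES: {es_99:.4f}")
```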
Collecting and analyzing data on unemployment, inflation, and inequality help describe the complex world around us. When published by the government, such data are called official statistics. They are reported by the media, used by politicians to lend weight to their arguments, and by economic commentators to opine about the state of society. Despite such widescale use, explanations about how these measures are constructed are seldom provided for a non-technical reader. This Measuring Society book is a short, accessible guide to six topics: jobs, house prices, inequality, prices for goods and services, poverty, and deprivation. Each relates to concepts we use on a personal level to form an understanding of the society in which we live: We need a job, a place to live, and food to eat. Using data from the United States, we answer three basic questions: why, how, and for whom these statistics have been constructed. We add some context and flavor by discussing the historical background. This book provides the reader with a good grasp of these measures. Chaitra H. Nagaraja is an Associate Professor of Statistics at the Gabelli School of Business at Fordham University in New York. Her research interests include house price indices and inequality measurement. Prior to Fordham, Dr. Nagaraja was a researcher at the U.S. Census Bureau. While there, she worked on projects relating to the American Community Survey.
Hands-on Machine Learning with R provides a practical and applied approach to learning and developing intuition into today's most popular machine learning methods. This book serves as a practitioner's guide to the machine learning process and is meant to help the reader learn to apply the machine learning stack within R, which includes using various R packages such as glmnet, h2o, ranger, xgboost, keras, and others to effectively model and gain insight from their data. The book favors a hands-on approach, providing an intuitive understanding of machine learning concepts through concrete examples and just a little bit of theory. Throughout this book, the reader will be exposed to the entire machine learning process including feature engineering, resampling, hyperparameter tuning, model evaluation, and interpretation. The reader will also encounter powerful algorithms such as regularized regression, random forests, gradient boosting machines, deep learning, generalized low rank models, and more! By favoring a hands-on approach and using real-world data, the reader will gain an intuitive understanding of the architectures and engines that drive these algorithms and packages, understand when and how to tune the various hyperparameters, and be able to interpret model results. By the end of this book, the reader should have a firm grasp of R's machine learning stack and be able to implement a systematic approach for producing high quality modeling results. Features: * Offers a practical and applied introduction to the most popular machine learning methods. * Topics covered include feature engineering, resampling, deep learning and more. * Uses a hands-on approach and real-world data.
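The process the blurb describes (resampling, hyperparameter tuning, evaluation) looks roughly like this in code. The book works in R with packages such as ranger and xgboost; this Python/scikit-learn version with synthetic data is only an illustrative parallel:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic data in place of a real-world dataset
X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Resampling (5-fold CV) to tune hyperparameters, then evaluation on held-out data
grid = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10]},
    cv=5,
)
grid.fit(X_train, y_train)
print(grid.best_params_, grid.score(X_test, y_test))
```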
This pioneering work gives an insight into the daily work of the national statistical institutions of the old command economies in their endeavour to meet the challenge of transition to a market-oriented system of labour statistics variables and indicators. Distinct from any other publication with statistics on Central and East European countries and the former Soviet Union, it reveals why and how new statistics are being collected and what still has to be done in order to make their national data compatible with the rest of the world. The authors discuss the problems involved in the measurement of employment (in both the state and the private sectors) and unemployment, the collection of reliable wage statistics, and the development of new economic classifications in line with those internationally recognized and adopted. They also make a number of recommendations on how to adapt ILO international standards in order to meet the above needs.
This must-have manual provides detailed solutions to all 300 exercises in Dickson, Hardy and Waters' Actuarial Mathematics for Life Contingent Risks, 3rd edition. This groundbreaking text on the modern mathematics of life insurance is required reading for the Society of Actuaries' (SOA) LTAM Exam. The new edition treats a wide range of newer insurance contracts such as critical illness and long-term care insurance; pension valuation material has been expanded; and two new chapters have been added on developing models from mortality data and on changing mortality. Beyond professional examinations, the textbook and solutions manual offer readers the opportunity to develop insight and understanding through guided hands-on work, and also offer practical advice for solving problems using straightforward, intuitive numerical methods. Companion Excel spreadsheets illustrating these techniques are available for free download.
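In the spirit of the straightforward, intuitive numerical methods the manual advocates, here is a small Python sketch of an expected-present-value calculation. The constant force of mortality, interest rate, and age cutoff are toy assumptions, not values or models from the textbook:

```python
import numpy as np

# Toy mortality: constant force of mortality mu (an illustrative assumption)
mu = 0.02          # force of mortality
i = 0.05           # annual effective interest rate
v = 1 / (1 + i)    # discount factor
max_years = 120    # truncation point; an approximation to the infinite sum

# Survival probabilities t_p_x = exp(-mu * t) under a constant force
t = np.arange(max_years)
tpx = np.exp(-mu * t)

# Whole life annuity-due EPV: sum over t >= 0 of v^t * t_p_x
annuity_due = np.sum(v**t * tpx)

# Whole life insurance EPV via the identity A_x = 1 - d * a_x, with d = i/(1+i)
A_x = 1 - (i / (1 + i)) * annuity_due
print(f"annuity-due EPV: {annuity_due:.4f}, insurance EPV: {A_x:.4f}")
```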
This textbook provides future data analysts with the tools, methods, and skills needed to answer data-focused, real-life questions; to carry out data analysis; and to visualize and interpret results to support better decisions in business, economics, and public policy. Data wrangling and exploration, regression analysis, machine learning, and causal analysis are comprehensively covered, as well as when, why, and how the methods work, and how they relate to each other. As the most effective way to communicate data analysis, running case studies play a central role in this textbook. Each case starts with an industry-relevant question and answers it by using real-world data and applying the tools and methods covered in the textbook. Learning is then consolidated by 360 practice questions and 120 data exercises. Extensive online resources, including raw and cleaned data and codes for all analysis in Stata, R, and Python, can be found at www.gabors-data-analysis.com.
First published in 1995. In the current, increasingly global economy, investors require quick access to a wide range of financial and investment-related statistics to assist them in better understanding the macroeconomic environment in which their investments will operate. The International Financial Statistics Locator eliminates the need to search through a number of sources to identify those that contain much of this statistical information. It is intended for use by librarians, students, individual investors, and the business community and provides access to twenty-two resources, print and electronic, that contain current and historical financial and economic statistics investors need to appreciate and profit from evolving and established international markets.