Books > Business & Economics > Economics > Econometrics > Economic statistics
In order to obtain many of the classical results in the theory of statistical estimation, it is usual to impose regularity conditions on the distributions under consideration. In small sample and large sample theories of estimation there are well established sets of regularity conditions, and it is worthwhile to examine what may follow if any one of these regularity conditions fails to hold. "Non-regular estimation" literally means the theory of statistical estimation when some or other of the regularity conditions fail to hold. In this monograph, the authors present a systematic study of the meaning and implications of regularity conditions, and show how the relaxation of such conditions can often lead to surprising conclusions. Their emphasis is on small sample results and on showing how pathological examples can be accommodated in this broader framework.
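As an illustration of the kind of problem the monograph has in mind, consider a standard textbook case of non-regular estimation (this example is ours, not drawn from the book): in the Uniform(0, θ) model the support depends on the parameter, the Cramér-Rao regularity conditions fail, and the maximum-likelihood estimator, the sample maximum, converges at rate n rather than the usual √n. A minimal Python sketch checks this numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0          # true parameter of the Uniform(0, theta) model
reps = 5000          # Monte Carlo replications per sample size

for n in (50, 200, 800):
    samples = rng.uniform(0.0, theta, size=(reps, n))
    mle = samples.max(axis=1)            # MLE is the sample maximum
    bias = mle.mean() - theta            # ~ -theta / (n + 1): shrinks like 1/n
    sd = mle.std(ddof=1)                 # also shrinks like 1/n, not 1/sqrt(n)
    print(f"n={n:4d}  bias={bias:+.5f}  sd={sd:.5f}  n*|bias|={n*abs(bias):.3f}")
```

The product n·|bias| stays roughly constant (near θ), the signature of a 1/n convergence rate that regular large-sample theory would not predict.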
A comprehensive and up-to-date introduction to the mathematics that all economics students need to know. Probability theory is the quantitative language used to handle uncertainty and is the foundation of modern statistics. Probability and Statistics for Economists provides graduate and PhD students with an essential introduction to mathematical probability and statistical theory, which are the basis of the methods used in econometrics. This incisive textbook teaches fundamental concepts, emphasizes modern, real-world applications, and gives students an intuitive understanding of the mathematics that every economist needs to know.
* Covers probability and statistics with mathematical rigor while emphasizing intuitive explanations that are accessible to economics students of all backgrounds
* Discusses random variables, parametric and multivariate distributions, sampling, the law of large numbers, central limit theory, maximum likelihood estimation, numerical optimization, hypothesis testing, and more
* Features hundreds of exercises that enable students to learn by doing
* Includes an in-depth appendix summarizing important mathematical results as well as a wealth of real-world examples
* Can serve as a core textbook for a first-semester PhD course in econometrics and as a companion book to Bruce E. Hansen's Econometrics
* Also an invaluable reference for researchers and practitioners
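The pairing of maximum likelihood estimation with numerical optimization in the topic list above can be made concrete with a small sketch (purely illustrative, not material from the textbook): fit a normal model by minimizing the negative log-likelihood with a general-purpose optimizer.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = rng.normal(loc=3.0, scale=1.5, size=500)    # simulated data

def negloglik(params):
    mu, log_sigma = params                      # log-parameterize sigma to keep it positive
    sigma = np.exp(log_sigma)
    return 0.5 * np.sum(np.log(2 * np.pi * sigma**2) + ((x - mu) / sigma) ** 2)

res = minimize(negloglik, x0=np.array([0.0, 0.0]), method="BFGS")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(mu_hat, sigma_hat)   # close to the sample mean and the (slightly biased) ML standard deviation
```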
This new edition updates Durbin & Koopman's important text on the state space approach to time series analysis. The distinguishing feature of state space time series models is that observations are regarded as made up of distinct components such as trend, seasonal, regression elements and disturbance terms, each of which is modelled separately. The techniques that emerge from this approach are very flexible and are capable of handling a much wider range of problems than the main analytical system currently in use for time series analysis, the Box-Jenkins ARIMA system. Additions to this second edition include the filtering of nonlinear and non-Gaussian series. Part I of the book obtains the mean and variance of the state, of a variable intended to measure the effect of an intervention, and of regression coefficients, in terms of the observations. Part II extends the treatment to nonlinear and non-normal models. For these, analytical solutions are not available so methods are based on simulation.
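A minimal sketch of the simplest model in this tradition, the local level model y_t = α_t + ε_t with α_{t+1} = α_t + η_t, filtered with a hand-rolled Kalman recursion. The variance values are illustrative assumptions and the code is not taken from the book.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a local level model: y_t = alpha_t + eps_t,  alpha_{t+1} = alpha_t + eta_t
n, sigma_eps2, sigma_eta2 = 200, 1.0, 0.1
alpha = np.cumsum(rng.normal(0, np.sqrt(sigma_eta2), n))
y = alpha + rng.normal(0, np.sqrt(sigma_eps2), n)

# Kalman filter: at the top of each step, (a, p) are the one-step-ahead predicted mean/variance
a, p = 0.0, 1e6                    # diffuse-ish initialization
filtered = np.empty(n)
for t in range(n):
    v = y[t] - a                   # prediction error
    f = p + sigma_eps2             # prediction error variance
    a = a + (p / f) * v            # filtered state mean
    p = p - p**2 / f               # filtered state variance
    filtered[t] = a
    p = p + sigma_eta2             # transition to t+1: random walk state

print(filtered[-5:], alpha[-5:])   # filtered means should track the true level
```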
This book deals with Business Analytics (BA), an emerging area in modern business decision making. Business analytics is a data-driven decision-making approach that uses statistical and quantitative analysis along with data mining, management science, and fact-based data to measure past business performance and guide an organization in business planning and effective decision making. Business Analytics tools are also used to predict future business outcomes with the help of forecasting and predictive modeling. In this age of technology, massive amounts of data are collected by companies. Successful companies treat their data as an asset and use it for competitive advantage. Business Analytics is helping businesses make informed business decisions and automate and optimize business processes. Successful business analytics depends on the quality of data. Skilled analysts, who understand the technologies and their business, use business analytics tools as part of an organizational commitment to data-driven decision making.
This is an excerpt from the 4-volume dictionary of economics, a reference book which aims to define the subject of economics today. 1300 subject entries in the complete work cover the broad themes of economic theory. This extract concentrates on time series and statistics.
A new procedure for the maximum-likelihood estimation of dynamic econometric models with errors in both endogenous and exogenous variables is presented in this monograph. A complete analytical development of the expressions used in problems of estimation and verification of models in state-space form is presented. The results are useful in relation not only to the problem of errors in variables but also to any other possible econometric application of state-space formulations.
In each chapter of this volume some specific topics in the econometric analysis of time series data are studied. All topics have in common the statistical inference in linear models with correlated disturbances. The main aim of the study is to give a survey of new and old estimation techniques for regression models with disturbances that follow an autoregressive-moving average process. The final chapter also discusses several test strategies for discriminating between various types of autocorrelation. In nearly all chapters it is demonstrated how useful the simple geometric interpretation of the well-known ordinary least squares (OLS) method is. By applying these geometric concepts to linear spaces spanned by scalar stochastic variables, it emerges that well-known as well as new results can be derived in a simple geometric manner, sometimes without the limiting restrictions of the usual derivations, e.g., the conditional normal distribution, the Kalman filter equations and the Cramer-Rao inequality. The outline of the book is as follows. Chapter 2 pays attention to a generalization of the well-known first order autocorrelation transformation of a linear regression model with disturbances that follow a first order Markov scheme. Firstly, the appropriate lower triangular transformation matrix is derived for the case that the disturbances follow a moving average process of order q (MA(q)). It turns out that the calculations can be carried out either analytically or in a recursive manner.
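The lower triangular transformation described above can be illustrated numerically (an illustration under assumed parameter values, not the book's analytical derivation): for MA(1) disturbances, build the banded disturbance covariance, take its Cholesky factor L, and pre-multiply y and X by the inverse of L so that OLS on the transformed data is the GLS estimator.

```python
import numpy as np

rng = np.random.default_rng(3)
n, theta, sigma2 = 200, 0.6, 1.0        # assumed MA(1) parameter and innovation variance

# Simulate a regression with MA(1) disturbances u_t = e_t + theta * e_{t-1}
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta = np.array([1.0, 2.0])
e = rng.normal(0, np.sqrt(sigma2), n + 1)
u = e[1:] + theta * e[:-1]
y = X @ beta + u

# Covariance of the MA(1) disturbances: banded with bandwidth 1
Omega = np.zeros((n, n))
np.fill_diagonal(Omega, sigma2 * (1 + theta**2))
idx = np.arange(n - 1)
Omega[idx, idx + 1] = Omega[idx + 1, idx] = sigma2 * theta

# Lower triangular factor and the transformed (whitened) model
L = np.linalg.cholesky(Omega)
y_star = np.linalg.solve(L, y)
X_star = np.linalg.solve(L, X)

beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
beta_gls, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
print("OLS:", beta_ols, "GLS:", beta_gls)
```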
In many branches of science relevant observations are taken sequentially over time. Bayesian Analysis of Time Series discusses how to use models that explain the probabilistic characteristics of these time series and then utilizes the Bayesian approach to make inferences about their parameters. This is done by taking the prior information and, via Bayes theorem, implementing Bayesian inferences of estimation, testing hypotheses, and prediction. The methods are demonstrated using both R and WinBUGS. The R package is primarily used to generate observations from a given time series model, while the WinBUGS package allows one to perform a posterior analysis that provides a way to determine the characteristics of the posterior distribution of the unknown parameters.
Features
* Presents a comprehensive introduction to the Bayesian analysis of time series.
* Gives many examples over a wide variety of fields including biology, agriculture, business, economics, sociology, and astronomy.
* Contains numerous exercises at the end of each chapter, many of which use R and WinBUGS.
* Can be used in graduate courses in statistics and biostatistics, but is also appropriate for researchers, practitioners and consulting statisticians.
About the author: Lyle D. Broemeling, Ph.D., is Director of Broemeling and Associates Inc., and is a consulting biostatistician. He has been involved with academic health science centers for about 20 years and has taught and been a consultant at the University of Texas Medical Branch in Galveston, the University of Texas MD Anderson Cancer Center and the University of Texas School of Public Health. His main interest is in developing Bayesian methods for use in medical and biological problems and in authoring textbooks in statistics. His previous books for Chapman & Hall/CRC include Bayesian Biostatistics and Diagnostic Medicine, and Bayesian Methods for Agreement.
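A small sketch of the workflow described above, written in Python rather than the book's R/WinBUGS (the model, prior, and parameter values are illustrative assumptions): simulate an AR(1) series, then combine a normal prior on the autoregressive coefficient with the conditional likelihood, which yields a normal posterior when the innovation variance is treated as known.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate an AR(1) series: y_t = phi * y_{t-1} + e_t, e_t ~ N(0, sigma2)
n, phi_true, sigma2 = 300, 0.7, 1.0
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + rng.normal(0, np.sqrt(sigma2))

# Normal prior on phi and its conjugate posterior (innovation variance treated as known)
m0, v0 = 0.0, 1.0                                  # prior mean and variance
ylag, ycur = y[:-1], y[1:]
v1 = 1.0 / (1.0 / v0 + ylag @ ylag / sigma2)       # posterior variance
m1 = v1 * (m0 / v0 + ylag @ ycur / sigma2)         # posterior mean
print(f"posterior for phi: N({m1:.3f}, {v1:.5f})")
```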
Hands-on Machine Learning with R provides a practical and applied approach to learning and developing intuition into today's most popular machine learning methods. This book serves as a practitioner's guide to the machine learning process and is meant to help the reader learn to apply the machine learning stack within R, which includes using various R packages such as glmnet, h2o, ranger, xgboost, keras, and others to effectively model and gain insight from their data. The book favors a hands-on approach, providing an intuitive understanding of machine learning concepts through concrete examples and just a little bit of theory. Throughout this book, the reader will be exposed to the entire machine learning process, including feature engineering, resampling, hyperparameter tuning, model evaluation, and interpretation. The reader will be exposed to powerful algorithms such as regularized regression, random forests, gradient boosting machines, deep learning, generalized low rank models, and more! By favoring a hands-on approach and using real world data, the reader will gain an intuitive understanding of the architectures and engines that drive these algorithms and packages, understand when and how to tune the various hyperparameters, and be able to interpret model results. By the end of this book, the reader should have a firm grasp of R's machine learning stack and be able to implement a systematic approach for producing high quality modeling results.
Features:
* Offers a practical and applied introduction to the most popular machine learning methods.
* Topics covered include feature engineering, resampling, deep learning and more.
* Uses a hands-on approach and real world data.
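The book works in R with packages such as ranger and xgboost; as a language-neutral illustration of one step of that process, the sketch below (using Python and scikit-learn, so it is an analogue rather than the book's code) combines resampling and hyperparameter tuning via cross-validated grid search over a random forest.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic data standing in for the real-world data used in the book
X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Resampling (5-fold CV) combined with a small hyperparameter grid
grid = {"n_estimators": [100, 300], "max_features": [0.3, 0.6, 1.0]}
search = GridSearchCV(RandomForestRegressor(random_state=0), grid, cv=5,
                      scoring="neg_mean_squared_error")
search.fit(X_train, y_train)

print(search.best_params_)
print("held-out R^2:", search.best_estimator_.score(X_test, y_test))
```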
This is a collection of papers by leading theorist Robert A. Pollak - four of them previously unpublished - exploring the theory of the cost of living index. The unifying theme of these papers is that, when suitably elaborated, the theory of the cost of living index provides principled answers to many of the practical problems that arise in constructing consumer price indexes. In addition to Pollak's classic paper The Theory of the Cost of Living Index, the volume includes papers on subindexes, the intertemporal cost of living index, welfare comparisons and equivalence scales, the social cost of living index, the treatment of 'quality', and consumer durables in the cost of living index.
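For orientation, the central object in these papers is the Konüs cost-of-living index; the standard definition (stated here for context, not quoted from Pollak) compares the minimum expenditure needed to reach a reference utility level u at two price vectors:

```latex
P_K(p^1, p^0, u) \;=\; \frac{C(p^1, u)}{C(p^0, u)},
\qquad
C(p, u) \;=\; \min_{q}\ \{\, p \cdot q \;:\; U(q) \ge u \,\}.
```

Evaluated at base-period utility the index is bounded above by the Laspeyres index, and at comparison-period utility it is bounded below by the Paasche index, which is what makes fixed-basket consumer price indexes interpretable as approximations to it.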
Do economics and statistics succeed in explaining human social behaviour? To answer this question, Leland Gerson Neuberg studies some pioneering controlled social experiments. Starting in the late 1960s, economists and statisticians sought to improve social policy formation with random assignment experiments such as those that provided income guarantees in the form of a negative income tax. This book explores anomalies in the conceptual basis of such experiments and in the foundations of statistics and economics more generally. Scientific inquiry always faces certain philosophical problems. Controlled experiments of human social behaviour, however, cannot avoid some methodological difficulties not evident in physical science experiments. Drawing upon several examples, the author argues that methodological anomalies prevent microeconomics and statistics from explaining human social behaviour as coherently as the physical sciences explain nature. He concludes that controlled social experiments are a frequently overrated tool for social policy improvement.
This book aims to help the reader better understand the importance of data analysis in project management. Moreover, it provides guidance by showing tools, methods, techniques and lessons learned on how to better utilize the data gathered from the projects. First and foremost, insight into the bridge between data analytics and project management aids practitioners looking for ways to maximize the practical value of data procured. The book equips organizations with the know-how necessary to adapt to a changing workplace dynamic through key lessons learned from past ventures. The book's integrated approach to investigating both fields enhances the value of research findings.
Drawing on a lifetime of distinguished work in economic research and policy-making, Andrew Kamarck details how his profession can more usefully analyze and solve economic problems by changing its basic approach to research. Kamarck contends that most economists today strive for a mathematical precision in their work that neither stems from nor leads to an accurate view of economic reality. He develops elegant critiques of key areas of economic analysis based on appreciation of scientific method and knowledge of the limitations of economic data. Concepts such as employment, market, and money supply must be seen as loose, not exact. Measurement of national income becomes highly problematic when taking into account such factors as the "underground economy" and currency differences. World trade analysis is based on inconsistent and often inaccurate measurements. Subtle realities of the individual, social, and political worlds render largely ineffective both large-scale macroeconomic models and micro models of the consumer and the firm. Fashionable cost-benefit analysis must be recognized as inherently imprecise. Capital and investment in developing countries tend to be measured in easy but irrelevant ways. Kamarck concludes with a call for economists to involve themselves in data collection, to insist on more accurate and reliable data sources, to do analysis within the context of experience, and to take a realistic, incremental approach to policy-making. Kamarck's concerns are shared by many economists, and his eloquent presentation will be essential reading for his colleagues and for those who make use of economic research.
Learn by doing with this user-friendly introduction to time series data analysis in R. This book explores the intricacies of managing and cleaning time series data of different sizes, scales and granularity, data preparation for analysis and visualization, and different approaches to classical and machine learning time series modeling and forecasting. A range of pedagogical features support students, including end-of-chapter exercises, problems, quizzes and case studies. The case studies are designed to stretch the learner, introducing larger data sets, enhanced data management skills, and R packages and functions appropriate for real-world data analysis. On top of providing commented R programs and data sets, the book's companion website offers extra case studies, lecture slides, videos and exercise solutions. Accessible to those with a basic background in statistics and probability, this is an ideal hands-on text for undergraduate and graduate students, as well as researchers in data-rich disciplines.
The quantitative modeling of complex systems of interacting risks is a fairly recent development in the financial and insurance industries. Over the past decades, there has been tremendous innovation and development in the actuarial field. In addition to undertaking mortality and longevity risks in traditional life and annuity products, insurers have faced unprecedented financial risks since the introduction of equity-linked insurance in the 1960s. As the industry moves into the new territory of managing many intertwined financial and insurance risks, non-traditional problems and challenges arise, presenting great opportunities for technology development. Today's computational power and technology make it possible for the life insurance industry to develop highly sophisticated models, which were impossible just a decade ago. Nonetheless, as more industrial practices and regulations move towards dependence on stochastic models, the demand for computational power continues to grow. While the industry continues to rely heavily on hardware innovations, trying to make brute force methods faster and more palatable, we are approaching a crossroads about how to proceed. An Introduction to Computational Risk Management of Equity-Linked Insurance provides a resource for students and entry-level professionals to understand the fundamentals of industrial modeling practice, but also to give a glimpse of software methodologies for modeling and computational efficiency.
Features
* Provides a comprehensive and self-contained introduction to quantitative risk management of equity-linked insurance with exercises and programming samples
* Includes a collection of mathematical formulations of risk management problems presenting opportunities and challenges to applied mathematicians
* Summarizes state-of-the-art computational techniques for risk management professionals
* Bridges the gap between the latest developments in finance and actuarial literature and the practice of risk management for investment-combined life insurance
* Gives a comprehensive review of both Monte Carlo simulation methods and non-simulation numerical methods
Runhuan Feng is an Associate Professor of Mathematics and the Director of Actuarial Science at the University of Illinois at Urbana-Champaign. He is a Fellow of the Society of Actuaries and a Chartered Enterprise Risk Analyst. He is a Helen Corley Petit Professorial Scholar and the State Farm Companies Foundation Scholar in Actuarial Science. Runhuan received a Ph.D. degree in Actuarial Science from the University of Waterloo, Canada. Prior to joining Illinois, he held a tenure-track position at the University of Wisconsin-Milwaukee, where he was named a Research Fellow. Runhuan has received numerous grants and research contracts from the Actuarial Foundation and the Society of Actuaries. He has published a series of papers in top-tier actuarial and applied probability journals on stochastic analytic approaches in risk theory and quantitative risk management of equity-linked insurance. In recent years, he has dedicated his efforts to developing computational methods for managing market innovations in areas of investment-combined insurance and retirement planning.
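A minimal Monte Carlo sketch of the kind of equity-linked guarantee valuation the book formalizes (the geometric Brownian motion fund model, fee, and guarantee design are illustrative assumptions, not the book's models): value a guaranteed minimum maturity benefit as the discounted expected shortfall of the fund below the guarantee at maturity.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative assumptions: fund follows geometric Brownian motion under the risk-neutral measure
S0, r, sigma, T = 100.0, 0.02, 0.20, 10.0    # initial fund, rate, volatility, horizon (years)
G = 100.0                                    # guaranteed minimum maturity benefit
fee = 0.01                                   # annual fee drag on the fund
n_paths = 200_000

Z = rng.standard_normal(n_paths)
ST = S0 * np.exp((r - fee - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

# Guarantee pays the shortfall max(G - S_T, 0) at maturity; discount back at r
payoff = np.maximum(G - ST, 0.0)
price = np.exp(-r * T) * payoff.mean()
stderr = np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(n_paths)
print(f"GMMB value ~ {price:.3f} (MC std. error {stderr:.3f})")
```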
This book provides an introduction to R programming and a summary of financial mathematics. It is not always easy for graduate students to grasp an overview of the theory of finance in an abstract form. For newcomers to the finance industry, it is not always obvious how to apply the abstract theory to the real financial data they encounter. Introducing finance theory alongside numerical applications makes it easier to grasp the subject. Popular programming languages like C++, which are used in many financial applications, are meant for general-purpose requirements. They are good for implementing large-scale distributed systems for simultaneously valuing many financial contracts, but they are not as suitable for small-scale ad-hoc analysis or exploration of financial data. The R programming language overcomes this problem. R can be used for numerical applications including statistical analysis, time series analysis, numerical methods for pricing financial contracts, etc. This book provides an overview of financial mathematics with numerous examples numerically illustrated using the R programming language.
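To keep the examples in this listing in a single language, here is a Python rather than R sketch of the sort of numerical illustration the book provides: the Black-Scholes price of a European call checked against a Monte Carlo estimate under the same risk-neutral dynamics (parameter values are illustrative).

```python
import numpy as np
from scipy.stats import norm

S, K, r, sigma, T = 100.0, 105.0, 0.03, 0.25, 1.0   # illustrative parameters

# Closed-form Black-Scholes price of a European call
d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
bs_price = S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# Monte Carlo check under the same risk-neutral dynamics
rng = np.random.default_rng(6)
Z = rng.standard_normal(500_000)
ST = S * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
mc_price = np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()

print(f"Black-Scholes: {bs_price:.4f}   Monte Carlo: {mc_price:.4f}")
```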
Volume 27 of the International Symposia in Economic Theory and Econometrics series collects a range of unique and diverse chapters, each investigating different spheres of development in emerging markets with a specific focus on significant engines of growth and advancement in the Asia-Pacific economies. Looking at the most sensitive issues behind economic growth in emerging markets, and particularly their long-term prospects, the chapters included in this volume explore the newest fields of research to understand the potential of these markets better. Including chapters from leading scholars worldwide, the volume provides comprehensive coverage of the key topics in fields spanning SMEs, terrorism, manufacturing waste reduction, financial literacy, female empowerment, leadership and corporate management, and the relationship between environmental, social, governance, and firm value. For students, researchers and practitioners, this volume offers a dynamic reference resource on emerging markets across a diverse range of topics.
The book covers the design and analysis of experiments for continuous, normally distributed responses; for continuous responses based on rank data; for categorical, in particular binary, responses based on loglinear models; and for categorical correlated responses based on marginal models and symmetric regression models.
A fair question to ask of an advocate of subjective Bayesianism (which the author is) is "how would you model uncertainty?" In this book, the author writes about how he has done it using real problems from the past, and offers additional comments about the context in which he was working.
This volume offers a generally accessible overview of 100 years of the German Statistical Society (Deutsche Statistische Gesellschaft, DStatG). In 17 chapters, recognized experts describe how the DStatG has contributed to the foundation and further development of German economic and social statistics and to methodological innovations such as newer time series, price index and sampling methods. Further topics include the role of the DStatG in the merging of East and West German statistics, as well as the preparation and implementation of the last and the current census.
Many empirical researchers yearn for an econometric model that better explains their data. Yet these researchers rarely pursue this objective for fear of the statistical complexities involved in specifying that model. This book is intended to alleviate those anxieties by providing a practical methodology that anyone familiar with regression analysis can employ: a methodology that will yield a model that is both more informative and a better representation of the data. This book outlines simple, practical procedures that can be used to specify a model that better explains the data. Such procedures rely on purely statistical techniques applied to a publicly available data set, which allows readers to follow along at every stage of the procedure. Using the econometric software Stata (though most other statistical software packages can be used as well), this book demonstrates how to test for model misspecification and how to respecify these models in a practical way that not only enhances the inference drawn from the results, but adds a level of robustness that can increase the researcher's confidence in the output generated. By following this procedure, researchers will be led to a better, more finely tuned empirical model that yields better results.
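The book carries out its misspecification tests in Stata; the sketch below is a language-neutral Python illustration in the same spirit (data and model are invented for the example), computing a Ramsey RESET-type F-test by hand: fit the candidate model, add powers of its fitted values, and test whether they add explanatory power.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Illustrative data: the true relation is quadratic, but we first fit a linear model
n = 300
x = rng.uniform(0, 4, n)
y = 1.0 + 2.0 * x + 0.8 * x**2 + rng.normal(0, 1.0, n)

def rss(X, y):
    """Residual sum of squares from an OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

# Restricted model: intercept and x only
X_r = np.column_stack([np.ones(n), x])
yhat = X_r @ np.linalg.lstsq(X_r, y, rcond=None)[0]

# RESET-style augmentation: add powers of the fitted values and F-test the addition
X_u = np.column_stack([X_r, yhat**2, yhat**3])
q = X_u.shape[1] - X_r.shape[1]                   # number of added regressors
df_resid = n - X_u.shape[1]
F = ((rss(X_r, y) - rss(X_u, y)) / q) / (rss(X_u, y) / df_resid)
p_value = stats.f.sf(F, q, df_resid)
print(f"RESET F = {F:.2f}, p = {p_value:.4f}")    # a small p-value suggests misspecification
```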
Practical Spreadsheet Modeling Using @Risk provides a guide to constructing applied decision analysis models in spreadsheets. The focus is on the use of Monte Carlo simulation to provide quantitative assessment of uncertainties and key risk drivers. The book presents numerous examples based on real data and relevant practical decisions in a variety of settings, including health care, transportation, finance, natural resources, technology, manufacturing, retail, and sports and entertainment. All examples involve decision problems where uncertainties make simulation modeling useful to obtain decision insights and explore alternative choices. Good spreadsheet modeling practices are highlighted. The book is suitable for graduate students or advanced undergraduates in business, public policy, health care administration, or any field amenable to simulation modeling of decision problems. The book is also useful for applied practitioners seeking to build or enhance their spreadsheet modeling skills.
Features
* Step-by-step examples of spreadsheet modeling and risk analysis in a variety of fields
* Description of probabilistic methods, their theoretical foundations, and their practical application in a spreadsheet environment
* Extensive example models and exercises based on real data and relevant decision problems
* Comprehensive use of the @Risk software for simulation analysis, including a free one-year educational software license
This book allows those with a basic knowledge of econometrics to learn the main nonparametric and semiparametric techniques used in econometric modelling, and how to apply them correctly. It looks at kernel density estimation, kernel regression, splines, wavelets, and mixture models, and provides useful empirical examples throughout. Through empirical applications, several economic topics are addressed, including income distribution, wage equations, economic convergence, the Phillips curve, interest rate dynamics, returns volatility, and housing prices. A helpful appendix also explains how to implement the methods using R. This useful book will appeal to practitioners and researchers who need an accessible introduction to nonparametric and semiparametric econometrics. The practical approach provides an overview of the main techniques without too much focus on mathematical formulas. It also serves as an accompanying textbook for a basic course, typically at undergraduate or graduate level.
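As a taste of the first two techniques mentioned, here is a short Python sketch (the book's own appendix uses R; the data and bandwidth choices below are illustrative assumptions): a Gaussian kernel density estimate with a rule-of-thumb bandwidth, followed by a Nadaraya-Watson kernel regression on the same grid.

```python
import numpy as np

rng = np.random.default_rng(8)

# Illustrative "income" data: a two-component lognormal mixture
x = np.concatenate([rng.lognormal(2.5, 0.4, 700), rng.lognormal(3.5, 0.3, 300)])

def kde(grid, data, h):
    """Gaussian kernel density estimate evaluated on grid with bandwidth h."""
    u = (grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

grid = np.linspace(x.min(), x.max(), 200)
h = 1.06 * x.std(ddof=1) * len(x) ** (-1 / 5)      # normal-reference rule-of-thumb bandwidth
density = kde(grid, x, h)

# Nadaraya-Watson kernel regression of y on x (a simulated wage-type relation)
y = 10 + 0.3 * x + rng.normal(0, 2, len(x))
w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
m_hat = (w * y).sum(axis=1) / w.sum(axis=1)        # estimated regression function on the grid
print(density[:3], m_hat[:3])
```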
Originally published in 1985. Mathematical methods and models to facilitate the understanding of the processes of economic dynamics and prediction were refined considerably over the period before this book was written. The field had grown, and many of the techniques involved had become extremely complicated. Areas of particular interest include optimal control, non-linear models, game-theoretic approaches, demand analysis and time-series forecasting. This book presents a critical appraisal of developments and identifies potentially productive new directions for research. It synthesises work from mathematics, statistics and economics and includes a thorough analysis of the relationship between system understanding and predictability.
Bootstrapping is a conceptually simple statistical technique to increase the quality of estimates, conduct robustness checks and compute standard errors for virtually any statistic. This book provides an intelligible and compact introduction for students, scientists and practitioners. It not only gives a clear explanation of the underlying concepts but also demonstrates the application of bootstrapping using Python and Stata.
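A compact sketch of the nonparametric bootstrap in Python (the book also covers Stata; the data and statistic here are illustrative): resample the observations with replacement, recompute the statistic on each resample, and use the spread of those replicates as its standard error, shown here for the sample median.

```python
import numpy as np

rng = np.random.default_rng(9)
data = rng.exponential(scale=2.0, size=150)     # illustrative skewed sample

def bootstrap_se(x, stat, n_boot=2000, rng=rng):
    """Nonparametric bootstrap standard error of stat(x)."""
    n = len(x)
    reps = np.empty(n_boot)
    for b in range(n_boot):
        reps[b] = stat(rng.choice(x, size=n, replace=True))
    return reps.std(ddof=1)

print("median:", np.median(data))
print("bootstrap SE of the median:", bootstrap_se(data, np.median))
```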