This workbook is a companion to the textbook Likelihood-Based Inference in Cointegrated Vector Autoregressive Models, also published by Oxford University Press. The workbook contains exercises and solutions concerned with the theory of cointegration in the vector autoregressive model. The main text has been used for courses on cointegration, and many of the exercises have been posed as either training exercises or exam questions. Many of them are challenging and summarize results published in the literature. Each chapter starts with a brief summary of the content of the corresponding chapter in the main text, which introduces the notation and the most important results.
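For orientation, the model class the exercises revolve around is usually written in vector error-correction form (standard Johansen notation, not a quotation from the workbook):

$$
\Delta X_t = \alpha \beta' X_{t-1} + \sum_{i=1}^{k-1} \Gamma_i \, \Delta X_{t-i} + \Phi D_t + \varepsilon_t ,
$$

where the columns of $\beta$ are the cointegrating vectors, $\alpha$ holds the adjustment coefficients, the $\Gamma_i$ capture short-run dynamics, $D_t$ collects deterministic terms, and $\varepsilon_t$ is an i.i.d. Gaussian error.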
A properly structured financial model can provide decision makers with a powerful planning tool that helps them identify the consequences of their decisions before they are put into practice. Introduction to Financial Models for Management and Planning, Second Edition enables professionals and students to learn how to develop and use computer-based models for financial planning. This volume provides critical tools for the financial toolbox, then shows how to use those tools to build successful models.
Employers can reduce their employees' health care costs by thinking out of the box. Employee health care costs have skyrocketed, especially for small business owners, but employers have options that medical entrepreneurs have crafted to provide all businesses with plans to improve their employees' wellness and reduce their costs. The cost of employee health care benefits can thus be reduced markedly by choosing one of numerous alternatives to traditional indemnity policies. The Finance of Health Care provides business decision makers with the information they need to match the optimal health care plan to the culture of their workforce. This book is a must-have guide for corporate executives and entrepreneurs who want to attract, and keep, the best employees in our competitive economy.
This edition sets out recent developments in East Asian local currency bond markets and discusses the region's economic outlook, the risk of another taper tantrum, and price differences between labeled and unlabeled green bonds. Emerging East Asia's local currency (LCY) bond markets expanded to an aggregate USD 21.7 trillion at the end of September 2021, posting growth of 3.4% quarter-on-quarter, up from 2.9% in the previous quarter. LCY bond issuance rose 6.8% quarter-on-quarter to USD 2.4 trillion in Q3 2021. Sustainable bond markets in ASEAN+3 also continued to expand to reach a size of USD 388.7 billion at the end of September.
The important data of economics are in the form of time series; therefore, the statistical methods used must be those designed for time series data. New methods for analyzing series containing no trends have been developed in communication engineering, and much recent research has been devoted to adapting and extending these methods so that they are suitable for use with economic series. This book presents the important results of this research and further advances the application of the recently developed Theory of Spectra to economics. In particular, Professor Hatanaka demonstrates the new technique on two problems: business cycle indicators, and the acceleration principle in department store data. Originally published in 1964. The Princeton Legacy Library uses the latest print-on-demand technology to again make available previously out-of-print books from the distinguished backlist of Princeton University Press. These editions preserve the original texts of these important books while presenting them in durable paperback and hardcover editions. The goal of the Princeton Legacy Library is to vastly increase access to the rich scholarly heritage found in the thousands of books published by Princeton University Press since its founding in 1905.
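To give a feel for the spectral methods the book applies, the sketch below (NumPy, with a synthetic series standing in for real economic data) computes a periodogram, the raw estimate of a series' power spectrum:

```python
import numpy as np

# Synthetic stand-in for a detrended economic series: a noisy cycle
# with a period of 12 observations (e.g., a seasonal component).
rng = np.random.default_rng(0)
n = 240
t = np.arange(n)
x = np.sin(2 * np.pi * t / 12) + rng.normal(scale=0.5, size=n)

# Periodogram: squared magnitude of the discrete Fourier transform
# of the demeaned series, normalized by the series length.
fft = np.fft.rfft(x - x.mean())
periodogram = (np.abs(fft) ** 2) / n
freqs = np.fft.rfftfreq(n, d=1.0)  # cycles per observation

# The largest peak should sit near frequency 1/12 ~= 0.0833.
print(freqs[np.argmax(periodogram[1:]) + 1])
```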
In recent years, interest in rigorous impact evaluation has grown tremendously in policy-making, economics, public health, social sciences and international relations. Evidence-based policy-making has become a recurring theme in public policy, alongside greater demands for accountability in public policies and public spending, and requests for independent and rigorous impact evaluations for policy evidence. Frölich and Sperlich offer a comprehensive and up-to-date approach to quantitative impact evaluation analysis, also known as causal inference or treatment effect analysis, illustrating the main approaches for identification and estimation: experimental studies, randomization inference and randomized control trials (RCTs), matching and propensity score matching and weighting, instrumental variable estimation, difference-in-differences, regression discontinuity designs, quantile treatment effects, and evaluation of dynamic treatments. The book is designed for economics graduate courses but can also serve as a manual for professionals in research institutes, governments, and international organizations, evaluating the impact of a wide range of public policies in health, environment, transport and economic development.
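To make one of the listed identification strategies concrete: in the canonical two-period case, and under the parallel-trends assumption, the difference-in-differences estimator compares average outcome changes across treated and control groups:

$$
\hat{\tau}_{\mathrm{DiD}} \;=\; \bigl(\bar{Y}_{\text{treated}}^{\text{post}} - \bar{Y}_{\text{treated}}^{\text{pre}}\bigr) \;-\; \bigl(\bar{Y}_{\text{control}}^{\text{post}} - \bar{Y}_{\text{control}}^{\text{pre}}\bigr).
$$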
Meaningful use of advanced Bayesian methods requires a good understanding of the fundamentals. This engaging book explains the ideas that underpin the construction and analysis of Bayesian models, with particular focus on computational methods and schemes. The unique features of the text are the extensive discussion of available software packages combined with a brief but complete and mathematically rigorous introduction to Bayesian inference. The text introduces Monte Carlo methods, Markov chain Monte Carlo methods, and Bayesian software, with additional material on model validation and comparison, transdimensional MCMC, and conditionally Gaussian models. The inclusion of problems makes the book suitable as a textbook for a first graduate-level course in Bayesian computation with a focus on Monte Carlo methods. The extensive discussion of Bayesian software - R/R-INLA, OpenBUGS, JAGS, STAN, and BayesX - makes it useful also for researchers and graduate students from beyond statistics.
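The simplest of the Monte Carlo methods the book builds on can be stated in a few lines: approximate an expectation by an average over simulated draws. A generic Python sketch, not taken from the book:

```python
import numpy as np

# Estimate E[g(X)] for X ~ Normal(0, 1) and g(x) = exp(x) by
# averaging over simulated draws. The exact answer is the
# lognormal mean exp(1/2) ~= 1.6487.
rng = np.random.default_rng(42)
draws = rng.normal(size=100_000)
values = np.exp(draws)
estimate = values.mean()
std_error = values.std(ddof=1) / np.sqrt(draws.size)

print(f"{estimate:.4f} +/- {1.96 * std_error:.4f}")
```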
Probability and Bayesian Modeling is an introduction to probability and Bayesian thinking for undergraduate students with a calculus background. The first part of the book provides a broad view of probability, including foundations, conditional probability, discrete and continuous distributions, and joint distributions. Statistical inference is presented entirely from a Bayesian perspective. The text introduces inference and prediction for a single proportion and for a single mean from Normal sampling. After the fundamentals of Markov chain Monte Carlo algorithms are introduced, Bayesian inference is described for hierarchical and regression models, including logistic regression. The book presents several case studies motivated by historical Bayesian studies and the authors' research. This text reflects modern Bayesian statistical practice. Simulation is introduced in all the probability chapters and used extensively in the Bayesian material to simulate from the posterior and predictive distributions. One chapter describes the basic tenets of the Metropolis and Gibbs sampling algorithms; several chapters also introduce the fundamentals of Bayesian inference for conjugate priors to deepen understanding. Strategies for constructing prior distributions are described both for situations where one has substantial prior information and for cases where one has weak prior knowledge. One chapter introduces hierarchical Bayesian modeling as a practical way of combining data from different groups. There is an extensive discussion of Bayesian regression models, including the construction of informative priors, inference about functions of the parameters of interest, prediction, and model selection. The text uses JAGS (Just Another Gibbs Sampler) as a general-purpose computational method for simulating from posterior distributions for a variety of Bayesian models. An R package, ProbBayes, is available containing all of the book's datasets and special functions for illustrating concepts from the book. A complete solutions manual is available in the Additional Resources section for instructors who adopt the book.
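The Metropolis algorithm mentioned above is easy to illustrate for inference about a single proportion. Below is a minimal random-walk Metropolis sampler for a binomial likelihood with a uniform prior, written as a generic Python sketch rather than the JAGS code the book itself uses:

```python
import numpy as np

# Data: y successes in n trials; uniform (Beta(1, 1)) prior on p.
y, n = 12, 20

def log_post(p):
    # Log posterior up to a constant: the binomial log likelihood.
    if p <= 0 or p >= 1:
        return -np.inf
    return y * np.log(p) + (n - y) * np.log(1 - p)

rng = np.random.default_rng(0)
p_current, samples = 0.5, []
for _ in range(20_000):
    p_prop = p_current + rng.normal(scale=0.1)  # random-walk proposal
    # Accept with probability min(1, posterior ratio).
    if np.log(rng.uniform()) < log_post(p_prop) - log_post(p_current):
        p_current = p_prop
    samples.append(p_current)

# Discard burn-in; the exact Beta(13, 9) posterior mean is 13/22 ~= 0.59.
print(np.mean(samples[5000:]))
```

Because this example uses a conjugate prior, the sampler's output can be checked against the exact Beta posterior, which is exactly the kind of cross-check the conjugate-prior chapters make possible.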
The book provides an engaging account of theoretical, empirical, and practical aspects of various statistical methods used in measuring the risks of financial institutions, especially banks. The author demonstrates how banks can apply many simple but effective statistical techniques to analyze the risks they face in business and safeguard themselves from potential vulnerability. It covers the three primary areas of banking risk: credit, market, and operational risk. In a uniquely intuitive, step-by-step manner, the author provides hands-on details on the primary statistical tools that can be applied for financial risk measurement and management. The book lucidly introduces concepts of well-known statistical methods such as correlation, regression, the matrix approach, probability and distribution theory, hypothesis testing, value at risk, and Monte Carlo simulation techniques, and provides hands-on estimation and interpretation of these tests in measuring the risks of financial institutions. The book strikes a fine balance between concepts and mathematics to tell a rich story of the thoughtful use of statistical methods.
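As one illustration of the toolkit described, a one-day value at risk can be estimated by simulating portfolio returns and reading off a loss quantile. This is a toy sketch with an assumed normal return distribution, not a worked example from the book:

```python
import numpy as np

# Toy portfolio: value 1,000,000 with daily returns assumed
# Normal(mu = 0.0005, sigma = 0.01). Real applications would use
# estimated, typically fat-tailed, return distributions.
portfolio_value = 1_000_000
mu, sigma = 0.0005, 0.01

rng = np.random.default_rng(1)
simulated_returns = rng.normal(mu, sigma, size=100_000)
losses = -portfolio_value * simulated_returns

# 99% one-day VaR: the loss exceeded on only 1% of simulated days.
var_99 = np.percentile(losses, 99)
print(f"99% one-day VaR: {var_99:,.0f}")
```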
The growth of machines and users of the Internet has led to the proliferation of all sorts of data concerning individuals, institutions, companies, governments, universities, and all kinds of known objects and events happening everywhere in daily life. Scientific knowledge is not an exception to the data boom. Data growth in science pushes forward as the number of scientific papers published doubles every 9-15 years, and the need for methods and tools to understand what is reported in the scientific literature becomes evident. As the number of academicians and innovators swells, so does the number of publications of all types, yielding outlets of documents and depots of authors and institutions that need to be found in Bibliometric databases. These databases are mined to produce metrics of research performance by means of Scientometrics, which analyzes the output of individuals, institutions, journals, countries, and even regions of the world. The objective of this book is to help students, professors, university managers, government, industry, and stakeholders in general understand the main Bibliometric databases, the key research indicators, and the main players in university rankings, along with the methodologies and approaches they employ in producing ranking tables. The book is divided into two sections. The first looks at Scientometric databases, including Scopus and Google Scholar, as well as institutional repositories. The second section examines the application of Scientometrics to world-class universities and the role that Scientometrics can play in competition among them. It looks at university rankings and the methodologies used to create these rankings. Individual chapters examine specific rankings, including the QS World University Rankings, Scimago Institutions Rankings, Webometrics, U-Multirank, and U.S. News & World Report. The book concludes with a discussion of university performance in the age of research analytics.
The majority of empirical research in economics ignores the potential benefits of nonparametric methods, while the majority of advances in nonparametric theory ignores the problems faced in applied econometrics. This book helps bridge this gap between applied economists and theoretical nonparametric econometricians. It discusses in depth, and in terms that someone with only one year of graduate econometrics can understand, basic to advanced nonparametric methods. The analysis starts with density estimation and motivates the procedures through methods that should be familiar to the reader. It then moves on to kernel regression, estimation with discrete data, and advanced methods such as estimation with panel data and instrumental variables models. The book pays close attention to the issues that arise with programming, computing speed, and application. In each chapter, the methods discussed are applied to actual data, paying attention to presentation of results and potential pitfalls.
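For a flavor of the methods treated, the Nadaraya-Watson kernel regression estimator, the standard starting point for kernel regression, fits in a few lines of NumPy. A generic sketch with a fixed, hand-picked bandwidth (in practice the bandwidth would be chosen by data-driven methods such as cross-validation):

```python
import numpy as np

def nadaraya_watson(x_grid, x, y, h):
    """Kernel regression estimate of E[y | x] with a Gaussian kernel
    and bandwidth h, evaluated at each point of x_grid."""
    # Kernel weights: one row per evaluation point. The kernel's
    # normalizing constant cancels in the ratio, so it is omitted.
    u = (x_grid[:, None] - x[None, :]) / h
    k = np.exp(-0.5 * u**2)
    return (k @ y) / k.sum(axis=1)

# Simulated example: y = sin(x) + noise.
rng = np.random.default_rng(3)
x = rng.uniform(0, 2 * np.pi, size=500)
y = np.sin(x) + rng.normal(scale=0.3, size=500)
grid = np.linspace(0.5, 5.5, 11)
print(np.round(nadaraya_watson(grid, x, y, h=0.3), 2))
```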
Intended primarily to prepare first-year graduate students for their ongoing work in econometrics, economic theory, and finance, this innovative book presents the fundamental concepts of theoretical econometrics, from measure-theoretic probability to statistics. A. Ronald Gallant covers these topics at an introductory level and develops the ideas to the point where they can be applied. He thereby provides the reader not only with a basic grasp of the key empirical tools but with sound intuition as well. In addition to covering the basic tools of empirical work in economics and finance, Gallant devotes particular attention to motivating ideas and presenting them as the solution to practical problems. For example, he presents correlation, regression, and conditional expectation as a means of obtaining the best approximation of one random variable by some function of another. He considers linear, polynomial, and unrestricted functions, and leads the reader to the notion of conditioning on a sigma-algebra as a means for finding the unrestricted solution. The reader thus gains an understanding of the relationships among linear, polynomial, and unrestricted solutions. Proofs of results are presented when the proof itself aids understanding or when the proof technique has practical value. A major text-treatise by one of the leading scholars in this field, "An Introduction to Econometric Theory" will prove valuable not only to graduate students but also to all economists, statisticians, and finance professionals interested in the ideas and implications of theoretical econometrics.
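The approximation idea described above can be stated in one line: among all (measurable) functions $g$ of $X$, the conditional expectation minimizes mean squared error,

$$
\mathbb{E}[Y \mid X] \;=\; \arg\min_{g}\; \mathbb{E}\bigl[(Y - g(X))^2\bigr],
$$

and restricting $g$ to linear or polynomial functions yields the best linear or polynomial approximations as special cases.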
County and City Extra: Special Decennial Census Edition is an essential single-volume source for Census 2020 information. This edition contains easy-to-read geographic summaries of the United States population by race, Hispanic origin, and housing status. It provides the most up-to-date census data for each state, county, metropolitan area, congressional district, and all cities with a population of 25,000 or more. It complements the popular and trusted County and City Extra: Annual Metro, City, and County Data Book, also published by Bernan Press. Features of this publication include:
- Census data on all states, counties, metropolitan areas, and congressional districts, as well as on cities and towns with populations above 25,000
- Key data on over 5,000 geographic areas
- Ranking tables that present each geography type by various subjects
- Data from previous censuses for comparative purposes
- Color maps that help the user understand the data
The State and Metropolitan Area Data Book is the continuation of the U.S. Census Bureau's discontinued publication. It is a convenient summary of statistics on the social and economic structure of the states, metropolitan areas, and micropolitan areas in the United States. It is designed to serve as a statistical reference and guide to other data publications and sources. This new edition features more than 1,500 data items from a variety of sources. It covers many key topical areas including population, birth and death rates, health coverage, school enrollment, crime rates, income and housing, employment, transportation, and government. The metropolitan area information is based on the latest set of definitions of metropolitan and micropolitan areas. This edition includes:
- A complete listing of, and data for, all states, metropolitan areas (including micropolitan areas), and their component counties
- 2010 census counts and more recent population estimates for all areas
- Results of the 2016 national and state elections
- Expanded vital statistics, communication, and criminal justice data
- Data on migration and commuting habits
- American Community Survey 1- and 3-year estimates
- Data on health insurance and on housing and finance matters
- Accurate and helpful citations to allow the user to directly consult the source
- Source notes and explanations
- A guide to state statistical abstracts and state information
Economic development officials, regional planners, urban researchers, college students, and data users can easily see the trends and changes affecting the nation today.
Random set theory is a fascinating branch of mathematics that amalgamates techniques from topology, convex geometry, and probability theory. Social scientists routinely conduct empirical work with data and modelling assumptions that reveal a set to which the parameter of interest belongs, but not its exact value. Random set theory provides a coherent mathematical framework to conduct identification analysis and statistical inference in this setting and has become a fundamental tool in econometrics and finance. This is the first book dedicated to the use of the theory in econometrics, written to be accessible for readers without a background in pure mathematics. Molchanov and Molinari define the basics of the theory and illustrate the mathematical concepts by their application in the analysis of econometric models. The book includes sets of exercises to accompany each chapter as well as examples to help readers apply the theory effectively.
The papers in this volume analyze the deployment of Big Data to solve both existing and novel challenges in economic measurement. The existing infrastructure for the production of key economic statistics relies heavily on data collected through sample surveys and periodic censuses, together with administrative records generated in connection with tax administration. The increasing difficulty of obtaining survey and census responses threatens the viability of existing data collection approaches. The growing availability of new sources of Big Data, such as scanner data on purchases, credit card transaction records, payroll information, and prices of various goods scraped from the websites of online sellers, has changed the data landscape. These new sources of data hold the promise of allowing the statistical agencies to produce more accurate, more disaggregated, and more timely economic data to meet the needs of policymakers and other data users. This volume documents progress made toward that goal and the challenges to be overcome to realize the full potential of Big Data in the production of economic statistics. It will be of interest to statistical agency staff, academic researchers, and serious users of economic statistics.
This book integrates the fundamentals of the asymptotic theory of statistical inference for time series under nonstandard settings, e.g., infinite variance processes, not only from the point of view of efficiency but also from that of robustness and of optimality by minimizing prediction error. It is the first book to consider the generalized empirical likelihood applied to time series models in the frequency domain, as well as estimation motivated by minimizing quantile prediction error without assuming a true model. It provides the reader with a new horizon for understanding the prediction problem that occurs in time series modeling and a contemporary approach to hypothesis testing by the generalized empirical likelihood method. The nonparametric aspects of the proposed methods address economic and financial problems without imposing the redundantly strong restrictions on the model that earlier results required, and dealing with infinite variance processes makes the analysis of economic and financial data more accurate. The scope of application, however, is expected to extend to much broader academic fields. The methods are also sufficiently flexible to represent an advanced and unified treatment of prediction, including multiple-point extrapolation, interpolation, and other forecasting from an incomplete past. Consequently, they lead readers to a good combination of efficient and robust estimates and tests and help discriminate pivotal quantities contained in realistic time series models.
This workbook provides students with a collection of exercises, with detailed solutions, for an introduction to applied statistics. The exercises cover the topics typically taught over roughly three semesters of statistics education. The workbook is therefore of particular interest to students of economics, medicine, psychology, engineering, and computer science. Interesting scenarios, some real and some fictitious, ease the learning of what is otherwise rather dry material. Practical exercises that must be solved with the statistical software R are specially marked. The book closes with exercises on mixed topics for exam preparation.
This essentials volume explains the basic principle behind statistical testing procedures, focusing on the meaning of statistical significance and of the p-value. Frequently encountered misinterpretations are addressed, making clear what a significant result does, and does not, say. Readers are thus equipped to handle test results appropriately.
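To anchor the two terms the volume centers on: a p-value is the probability, computed under the null hypothesis, of observing data at least as extreme as the data actually seen. A minimal illustration using SciPy's two-sample t-test on simulated data (purely illustrative, not from the book):

```python
import numpy as np
from scipy import stats

# Two simulated samples whose true means differ by 0.5.
rng = np.random.default_rng(7)
a = rng.normal(loc=0.0, scale=1.0, size=50)
b = rng.normal(loc=0.5, scale=1.0, size=50)

t_stat, p_value = stats.ttest_ind(a, b)

# A small p-value says the observed difference would be unlikely
# if the means were equal; it does NOT give the probability that
# the null hypothesis is true.
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```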
You may like...
- Quantitative statistical techniques - Swanepoel, Vivier, … (Paperback, R751)
- Essays in Honor of M. Hashem Pesaran… - Alexander Chudik, Cheng Hsiao, … (Hardcover, R3,574)
- The Leading Indicators - A Short History… - Zachary Karabell (Paperback)