This workbook is a companion to the textbook Likelihood-Based Inference in Cointegrated Vector Autoregressive Models, also published by Oxford University Press. The workbook contains exercises and solutions concerned with the theory of cointegration in the vector autoregressive model. The main text has been used for courses on cointegration, and many of the exercises have been posed as either training exercises or exam questions. Many of them are challenging and summarize results published in the literature. Each chapter starts with a brief summary of the content of the corresponding chapter in the main text, which introduces the notation and the most important results.
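For orientation, the model class the workbook treats can be written in its standard vector error-correction form (this is the usual statement from the cointegration literature, not necessarily the book's exact notation):

```latex
\Delta X_t = \alpha \beta' X_{t-1} + \sum_{i=1}^{k-1} \Gamma_i \, \Delta X_{t-i}
           + \Phi D_t + \varepsilon_t ,
\qquad \varepsilon_t \ \text{i.i.d.}\ N_p(0, \Omega) ,
```

where the columns of \beta span the cointegrating relations, \alpha holds the adjustment coefficients, and D_t collects deterministic terms; likelihood-based inference on the rank of \Pi = \alpha\beta' is the central theme of the exercises.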
This edition sets out recent developments in East Asian local currency bond markets and discusses the region's economic outlook, the risk of another taper tantrum, and price differences between labeled and unlabeled green bonds. Emerging East Asia's local currency (LCY) bond markets expanded to an aggregate USD 21.7 trillion at the end of September 2021, posting growth of 3.4% quarter-on-quarter, up from 2.9% in the previous quarter. LCY bond issuance rose 6.8% quarter-on-quarter to USD 2.4 trillion in Q3 2021. Sustainable bond markets in ASEAN+3 also continued to expand, reaching a size of USD 388.7 billion at the end of September.
The State of Working America, prepared biennially since 1988 by the Economic Policy Institute, includes a wide variety of data on family incomes, wages, taxes, unemployment, wealth, and poverty: data that enable the authors to closely examine the effect of the economy on the living standards of the American people. This edition, like the previous ones, exposes and analyzes the most recent and critical trends in the country.

Praise for previous editions of The State of Working America:

"The State of Working America remains unrivaled as the most-trusted source for a comprehensive understanding of how working Americans and their families are faring in today's economy." (Robert B. Reich)

"It is the inequality of wealth, argue the authors, rather than new technology (as some would have it), that is responsible for the failure of America's workplace to keep pace with the country's economic growth. The State of Working America is a well-written, soundly argued, and important reference book." (Library Journal)

"If you want to know what happened to the economic well-being of the average American in the past decade or so, this is the book for you. It should be required reading for Americans of all political persuasions." (Richard Freeman, Harvard University)

"A truly comprehensive and useful book that provides a reality check on loose statements about U.S. labor markets. It should be cheered by all Americans who earn their living from work." (William Wolman, former chief economist, CNBC's Business Week)

"The State of Working America provides very valuable factual and analytic material on the economic conditions of American workers. It is the very best source of information on this important subject." (Ray Marshall, University of Texas, former U.S. Secretary of Labor)

"An indispensable work . . . on family income, wages, taxes, employment, and the distribution of wealth." (Simon Head, New York Review of Books)

"No matter what political camp you're in, this is the single most valuable book I know of about the state of America, period. It is the most referenced, most influential resource book of its kind." (Jeff Madrick, author of The End of Affluence)

"This book is the single best yardstick for measuring whether or not our economic policies are doing enough to ensure that our economy can, once again, grow for everybody." (Richard A. Gephardt)

"The best place to review the latest developments in changes in the distribution of income and wealth." (Lester Thurow)
The business world creates a variety of problematic challenges. "Exploring: Using 2 x 2 Matrices to Analyze Situations" demonstrates ways to target salient issues quickly, which in turn will seize the attention of senior executives. Ruth Williams, program director for a large multinational corporation, explores techniques for resolving difficult circumstances and provides an alternative to the traditional reports generally used for explaining complex situations. "Exploring" gives the average businessperson the tools needed to assess situations before problems arise, bringing together a number of 2 x 2 matrix explorations that all represent real situations. Using a 2 x 2 square divided into four equal quadrants, Williams has formulated a system for analyzing a complex problem by breaking it into smaller units. These cases will teach you how to observe and analyze so as to be prepared for actual times of crisis. You'll also learn how to assimilate information quickly to help you identify the key drivers of a situation. With her extensive experience in assessing difficult situations by establishing the cause of problems and identifying possible solutions, Williams offers a valuable resource for any business with "Exploring."
The important data of economics come in the form of time series; therefore, the statistical methods used must be those designed for time series data. New methods for analyzing series containing no trends were developed in communication engineering, and much recent research has been devoted to adapting and extending these methods so that they are suitable for use with economic series. This book presents the important results of this research and further advances the application of the recently developed Theory of Spectra to economics. In particular, Professor Hatanaka demonstrates the new technique on two problems: business cycle indicators, and the acceleration principle in department store data. Originally published in 1964. The Princeton Legacy Library uses the latest print-on-demand technology to again make available previously out-of-print books from the distinguished backlist of Princeton University Press. These editions preserve the original texts of these important books while presenting them in durable paperback and hardcover editions. The goal of the Princeton Legacy Library is to vastly increase access to the rich scholarly heritage found in the thousands of books published by Princeton University Press since its founding in 1905.
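Purely as a hedged illustration of the spectral toolkit the book applies (generic code, not code from the book), the periodogram of a demeaned series can be computed with standard FFT routines:

```python
import numpy as np

def periodogram(x):
    """Crude periodogram: squared FFT magnitudes of the demeaned
    series at the non-negative Fourier frequencies."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                    # demean; no trend removal here
    n = len(x)
    power = np.abs(np.fft.rfft(x)) ** 2 / n
    freqs = np.fft.rfftfreq(n, d=1.0)   # cycles per observation
    return freqs, power

# Example: a monthly series with a 12-period (annual) cycle plus noise.
rng = np.random.default_rng(0)
t = np.arange(240)
series = np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, size=240)
freqs, power = periodogram(series)
peak = freqs[1:][np.argmax(power[1:])]  # skip the zero frequency
print(f"dominant frequency ~ {peak:.4f} (expect 1/12 ~ 0.0833)")
```

A peak in the spectrum at a business-cycle frequency is the kind of evidence the book's indicator analysis builds on.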
Der "Schnell" behandelt Techniken zur graphischen Darstellung von Daten oder statistischer Grosse im Rahmen von Datenanalysen. Diese "Datenanalysegraphik" ist ein nutzliches Instrument fur Datenanalytiker, hier wiederum bevorzugt solche in den Sozialwissenschaften."
The growth in the number of machines and users on the Internet has led to the proliferation of all sorts of data concerning individuals, institutions, companies, governments, universities, and all kinds of known objects and events happening everywhere in daily life. Scientific knowledge is no exception to the data boom. Data growth in science pushes forward as the number of scientific papers published doubles every 9-15 years, and the need for methods and tools to understand what is reported in the scientific literature becomes evident. As the number of academics and innovators swells, so does the number of publications of all types, yielding streams of documents and pools of authors and institutions that need to be indexed in bibliometric databases. These databases are mined to produce metrics of research performance by means of scientometrics, which analyze the work of individuals, institutions, journals, countries, and even regions of the world. The objective of this book is to help students, professors, university managers, government, industry, and stakeholders in general understand the main bibliometric databases, the key research indicators, and the main players in university rankings, together with the methodologies and approaches these players employ in producing ranking tables. The book is divided into two sections. The first looks at scientometric databases, including Scopus and Google Scholar as well as institutional repositories. The second examines the application of scientometrics to world-class universities and the role that scientometrics can play in competition among them. It looks at university rankings and the methodologies used to create them. Individual chapters examine specific rankings, including QS World University, Scimago Institutions, Webometrics, U-Multirank, and U.S. News & World Report. The book concludes with a discussion of university performance in the age of research analytics.
County and City Extra: Special Decennial Census Edition is an essential single-volume source for Census 2020 information. This edition contains easy-to-read geographic summaries of the United States population by race, Hispanic origin, and housing status. It provides the most up-to-date census data for each state, county, metropolitan area, congressional district, and all cities with a population of 25,000 or more. It complements the popular and trusted County and City Extra: Annual Metro, City, and County Data Book, also published by Bernan Press. Features of this publication include: census data on all states, counties, metropolitan areas, and congressional districts, as well as on cities and towns with populations above 25,000; key data on over 5,000 geographic areas; ranking tables that present each geography type by various subjects; data from previous censuses for comparative purposes; and color maps that help the user understand the data.
Best-worst scaling (BWS) is an extension of the method of paired comparison to multiple choices that asks participants to choose both the most and the least attractive options or features from a set of choices. It is an increasingly popular way for academics and practitioners in social science, business, and other disciplines to study and model choice. This book provides an authoritative and systematic treatment of best-worst scaling, introducing readers to the theory and methods for three broad classes of applications. It uses a variety of case studies to illustrate simple but reliable ways to design and implement choice studies and to analyze the resulting choice data in specific contexts, and showcases the wide range of potential applications across many different disciplines. Best-worst scaling avoids many rating-scale problems and will appeal to those wanting to measure subjective quantities with known measurement properties that can be easily interpreted and applied.
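As a minimal sketch of the simplest analysis best-worst scaling admits, best-minus-worst counting scores (one of several approaches the book treats; the items and responses below are invented):

```python
from collections import Counter

# Hypothetical responses: (choice set shown, item picked best, item picked worst).
trials = [
    ({"price", "quality", "brand", "warranty"}, "quality", "brand"),
    ({"price", "quality", "service", "warranty"}, "price", "warranty"),
    ({"brand", "quality", "service", "price"}, "quality", "brand"),
]

best = Counter(b for _, b, _ in trials)
worst = Counter(w for _, _, w in trials)
shown = Counter(item for items, _, _ in trials for item in items)

# Best-worst score: (times chosen best - times chosen worst) / times shown.
for item in sorted(shown):
    score = (best[item] - worst[item]) / shown[item]
    print(f"{item:>8}: {score:+.2f}")
```

Scores near +1 mark consistently preferred items and scores near -1 consistently rejected ones; the book develops model-based analyses that go beyond such counts.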
Actuaries have access to a wealth of individual data in pension and insurance portfolios, but rarely use its full potential. This book will pave the way, from methods using aggregate counts to modern developments in survival analysis. Based on the fundamental concept of the hazard rate, Part I shows how and why to build statistical models, based on data at the level of the individual persons in a pension scheme or life insurance portfolio. Extensive use is made of the R statistics package. Smooth models, including regression and spline models in one and two dimensions, are covered in depth in Part II. Finally, Part III uses multiple-state models to extend survival models beyond the simple life/death setting, and includes a brief introduction to the modern counting process approach. Practising actuaries will find this book indispensable, and students will find it helpful when preparing for their professional examinations.
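A minimal sketch of the classical starting point the book moves beyond: crude occurrence-exposure hazard estimates, deaths divided by person-years of exposure in each age band (the records and field names below are hypothetical, and the sketch is in Python although the book itself works in R):

```python
import pandas as pd

# Hypothetical individual records: age at entry, age at exit, death indicator.
records = pd.DataFrame({
    "age_in":  [60.2, 61.0, 60.8, 62.5, 61.3],
    "age_out": [63.1, 63.8, 61.9, 64.5, 62.4],
    "died":    [0, 1, 0, 1, 1],
})

for age in range(60, 65):
    lo, hi = float(age), float(age + 1)
    # Person-years each individual spends inside the band [lo, hi).
    exposure = (records["age_out"].clip(upper=hi)
                - records["age_in"].clip(lower=lo)).clip(lower=0).sum()
    # Deaths whose exit age falls inside the band.
    deaths = ((records["died"] == 1)
              & (records["age_out"] >= lo)
              & (records["age_out"] < hi)).sum()
    rate = deaths / exposure if exposure > 0 else float("nan")
    print(f"age {age}: exposure={exposure:.2f} py, deaths={deaths}, hazard~{rate:.3f}")
```

Part I of the book develops the statistical models that replace such raw aggregate rates with models fitted to the individual data directly.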
The State and Metropolitan Area Data Book is the continuation of the U.S. Census Bureau's discontinued publication. It is a convenient summary of statistics on the social and economic structure of the states, metropolitan areas, and micropolitan areas in the United States. It is designed to serve as a statistical reference and guide to other data publications and sources. This new edition features more than 1,500 data items from a variety of sources. It covers many key topical areas including population, birth and death rates, health coverage, school enrollment, crime rates, income and housing, employment, transportation, and government. The metropolitan area information is based on the latest set of definitions of metropolitan and micropolitan areas. Features include: a complete listing and data for all states, metropolitan areas (including micropolitan areas), and their component counties; 2010 census counts and more recent population estimates for all areas; results of the 2016 national and state elections; expanded vital statistics, communication, and criminal justice data; data on migration and commuting habits; American Community Survey 1- and 3-year estimates; data on health insurance and on housing and finance matters; accurate and helpful citations that allow the user to consult the source directly; source notes and explanations; and a guide to state statistical abstracts and state information. Economic development officials, regional planners, urban researchers, college students, and data users can easily see the trends and changes affecting the nation today.
As one of the first texts to take a behavioral approach to macroeconomic expectations, this book introduces a new way of doing economics. Roetheli uses cognitive psychology in a bottom-up method of modeling macroeconomic expectations. His research is based on laboratory experiments and historical data, which he extends to real-world situations. Pattern extrapolation is shown to be the key to understanding expectations of inflation and income. The quantitative model of expectations is used to analyze the course of inflation and nominal interest rates in a range of countries and historical periods. The model of expected income is applied to the analysis of business cycle phenomena such as the Great Recession in the United States. Data and spreadsheets are provided for readers to do their own computations of macroeconomic expectations. This book offers new perspectives on many areas of macro and financial economics.
Most textbooks on regression focus on theory and the simplest of examples. Real statistical problems, however, are complex and subtle. This is not a book about the theory of regression; it is about using regression to solve real problems of comparison, estimation, prediction, and causal inference. Unlike other books, it focuses on practical issues such as sample size and missing data, and on a wide range of goals and techniques. It jumps right into methods and computer code you can use immediately. Real examples and real stories from the authors' experience demonstrate what regression can do and where its limits lie, with practical advice for understanding assumptions and implementing methods for experiments and observational studies. The authors make a smooth transition to logistic regression and GLMs. The emphasis is on computation in R and Stan rather than on derivations, with code available online. Graphics and presentation aid understanding of the models and model fitting.
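The book's own code is in R and Stan; purely as a hedged illustration of the fit-then-predict workflow it advocates, here is an ordinary least-squares fit on simulated data in Python:

```python
import numpy as np

# Simulate data from a known linear model: y = 2 + 0.5 x + noise.
rng = np.random.default_rng(1)
n = 100
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.normal(0, 1, n)

# Ordinary least squares via the design matrix [1, x].
X = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"intercept={coef[0]:.2f}, slope={coef[1]:.2f}")

# Point prediction for a new observation.
x_new = 5.0
print(f"predicted y at x={x_new}: {coef[0] + coef[1] * x_new:.2f}")
```

Simulating from a known model and checking that the fit recovers it is one of the basic workflow habits the book encourages.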
The papers in this volume analyze the deployment of Big Data to solve both existing and novel challenges in economic measurement. The existing infrastructure for the production of key economic statistics relies heavily on data collected through sample surveys and periodic censuses, together with administrative records generated in connection with tax administration. The increasing difficulty of obtaining survey and census responses threatens the viability of existing data collection approaches. The growing availability of new sources of Big Data (such as scanner data on purchases, credit card transaction records, payroll information, and prices of various goods scraped from the websites of online sellers) has changed the data landscape. These new sources hold the promise of allowing statistical agencies to produce more accurate, more disaggregated, and more timely economic data to meet the needs of policymakers and other data users. This volume documents progress made toward that goal and the challenges to be overcome to realize the full potential of Big Data in the production of economic statistics. It will be of interest to statistical agency staff, academic researchers, and serious users of economic statistics.
Several recent advances in smoothing and semiparametric regression are presented in this book from a unifying, Bayesian perspective. Simulation-based full Bayesian Markov chain Monte Carlo (MCMC) inference, as well as empirical Bayes procedures closely related to penalized likelihood estimation and mixed models, are considered here. Throughout, the focus is on semiparametric regression and smoothing based on basis expansions of unknown functions and effects, in combination with smoothness priors for the basis coefficients. Beginning with a review of basic methods for smoothing and mixed models, longitudinal data, spatial data, and event history data are treated in separate chapters. Worked examples from various fields such as forestry, development economics, medicine, and marketing are used to illustrate the statistical methods covered in this book. Most of these examples have been analysed using implementations in the Bayesian software BayesX, and some with R code. These, as well as some of the data sets, are made publicly available on the website accompanying this book.
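The basic building block the description refers to can be written out (standard Bayesian P-spline notation; the book's own symbols may differ): an unknown function is expanded in basis functions, with a Gaussian smoothness prior on the coefficients,

```latex
f(x) = \sum_{j=1}^{J} \beta_j B_j(x) ,
\qquad \beta \mid \tau^2 \sim N\!\left(0, \tau^2 K^{-}\right) ,
```

where the B_j are basis functions (e.g. B-splines), K is a penalty matrix built, for example, from differences of adjacent coefficients, K^- is a generalized inverse, and the variance \tau^2 controls the amount of smoothing; MCMC then samples the basis coefficients and \tau^2 jointly.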
This book integrates the fundamentals of the asymptotic theory of statistical inference for time series under nonstandard settings, e.g., infinite variance processes, not only from the point of view of efficiency but also from that of robustness and optimality by minimizing prediction error. It is the first book to consider the generalized empirical likelihood applied to time series models in the frequency domain, and also estimation motivated by minimizing quantile prediction error without the assumption of a true model. It provides the reader with a new horizon for understanding the prediction problem that occurs in time series modeling and a contemporary approach to hypothesis testing by the generalized empirical likelihood method. The nonparametric aspects of the methods proposed in this book also address economic and financial problems without imposing the redundantly strong restrictions on the model that have been common until now. Dealing with infinite variance processes makes the analysis of economic and financial data more accurate, and the scope of applications is expected to extend to much broader academic fields. The methods are also sufficiently flexible in that they represent an advanced and unified development of prediction, including multiple-point extrapolation, interpolation, and other forecasting from an incomplete past. Consequently, they lead readers to a good combination of efficient and robust estimation and testing, and to discriminating pivotal quantities contained in realistic time series models.
This workbook provides students with a collection of exercises, with detailed solutions, for an introduction to applied statistics. The exercises cover the topics typically taught in roughly three semesters of statistics instruction, so the workbook is of particular interest to students of economics, medicine, psychology, engineering, and computer science. Interesting scenarios, some real and some fictitious, ease the learning of what is otherwise rather dry material. Practical exercises that are to be solved with the statistical software R are specially marked. At the end of the book there are exercises on mixed topics for exam preparation.
This essential explains the basic principle behind statistical testing procedures, focusing on the meaning of statistical significance and of the p-value. Common misinterpretations are addressed, making clear what a significant result does and does not say. Readers are thereby enabled to deal appropriately with test results.
This successful exercise book makes it possible to learn and deepen a wide range of methods of inductive (inferential) statistics through practical problems. The detailed solution sections are written so that no further book needs to be consulted. From the contents: random events and probabilities; conditional probability, independence, Bayes' formula, and the reliability of systems; random variables and distributions; special distributions and limit theorems; point estimators, confidence and prediction intervals; parametric tests in the one-sample case; goodness-of-fit tests and graphical methods for checking a distributional assumption; parametric comparisons in the two-sample case; nonparametric, distribution-free comparisons in the one- and two-sample cases; dependence analysis, correlation, and association; regression analysis; contingency table analysis; sampling methods; exam problems and solutions.