Meaningful use of advanced Bayesian methods requires a good understanding of the fundamentals. This engaging book explains the ideas that underpin the construction and analysis of Bayesian models, with particular focus on computational methods and schemes. The unique features of the text are the extensive discussion of available software packages combined with a brief but complete and mathematically rigorous introduction to Bayesian inference. The text introduces Monte Carlo methods, Markov chain Monte Carlo methods, and Bayesian software, with additional material on model validation and comparison, transdimensional MCMC, and conditionally Gaussian models. The inclusion of problems makes the book suitable as a textbook for a first graduate-level course in Bayesian computation with a focus on Monte Carlo methods. The extensive discussion of Bayesian software - R/R-INLA, OpenBUGS, JAGS, STAN, and BayesX - makes it useful also for researchers and graduate students from beyond statistics.
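The Monte Carlo machinery the blurb refers to can be illustrated with a minimal random-walk Metropolis-Hastings sampler. This is a generic textbook sketch, not code from the book or from any of the packages named above; the target density and step size are arbitrary illustrative choices.

```python
import math
import random

# Minimal random-walk Metropolis-Hastings sampler targeting a standard
# normal distribution. A generic illustration; the step size and the
# target are illustrative choices, not taken from the book.
def log_target(x):
    return -0.5 * x * x  # log of the N(0, 1) density, up to a constant

def metropolis_hastings(n_samples, step=1.0, seed=0):
    rng = random.Random(seed)
    x = 0.0
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)  # symmetric proposal
        log_ratio = log_target(proposal) - log_target(x)
        # accept with probability min(1, target(proposal) / target(x))
        if rng.random() < math.exp(min(0.0, log_ratio)):
            x = proposal
        samples.append(x)
    return samples

draws = metropolis_hastings(20000)
```

Burn-in, tuning, and convergence diagnostics, which this sketch omits, are what dedicated packages such as OpenBUGS, JAGS, and Stan automate.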
The majority of empirical research in economics ignores the potential benefits of nonparametric methods, while the majority of advances in nonparametric theory ignores the problems faced in applied econometrics. This book helps bridge this gap between applied economists and theoretical nonparametric econometricians. It discusses in depth, and in terms that someone with only one year of graduate econometrics can understand, basic to advanced nonparametric methods. The analysis starts with density estimation and motivates the procedures through methods that should be familiar to the reader. It then moves on to kernel regression, estimation with discrete data, and advanced methods such as estimation with panel data and instrumental variables models. The book pays close attention to the issues that arise with programming, computing speed, and application. In each chapter, the methods discussed are applied to actual data, paying attention to presentation of results and potential pitfalls.
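The density-estimation starting point the blurb mentions can be made concrete with a small Gaussian kernel density estimator. A minimal sketch: the function names and the Silverman rule-of-thumb bandwidth below are standard textbook choices, not the book's own code.

```python
import math

# Gaussian kernel density estimate of an unknown density at point x,
# given a sample `data` and a bandwidth. A generic illustration of the
# technique, not code from the book.
def kde(data, x, bandwidth):
    n = len(data)
    total = 0.0
    for xi in data:
        u = (x - xi) / bandwidth
        total += math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)
    return total / (n * bandwidth)

# Silverman's rule-of-thumb bandwidth for a Gaussian kernel.
def silverman_bandwidth(data):
    n = len(data)
    mean = sum(data) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in data) / (n - 1))
    return 1.06 * sd * n ** (-1.0 / 5.0)
```

Bandwidth choice drives the bias-variance trade-off that the book's discussion of kernel methods revolves around; the rule of thumb is only a starting point.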
County and City Extra: Special Decennial Census Edition is an essential single-volume source for Census 2020 information. This edition contains easy-to-read geographic summaries of the United States population by race, Hispanic origin, and housing status. It provides the most up-to-date census data for each state, county, metropolitan area, congressional district, and all cities with a population of 25,000 or more. It complements the popular and trusted County and City Extra: Annual Metro, City, and County Data Book, also published by Bernan Press. Features of this publication include:
- Census data on all states, counties, metropolitan areas, and congressional districts, as well as on cities and towns with populations above 25,000
- Key data on over 5,000 geographic areas
- Ranking tables which present each geography type by various subjects
- Data from previous censuses for comparative purposes
- Color maps that help the user understand the data
The State and Metropolitan Area Data Book is the continuation of the U.S. Census Bureau's discontinued publication. It is a convenient summary of statistics on the social and economic structure of the states, metropolitan areas, and micropolitan areas in the United States. It is designed to serve as a statistical reference and guide to other data publications and sources. This new edition features more than 1,500 data items from a variety of sources. It covers many key topical areas including population, birth and death rates, health coverage, school enrollment, crime rates, income and housing, employment, transportation, and government. The metropolitan area information is based on the latest set of definitions of metropolitan and micropolitan areas and includes:
- a complete listing and data for all states and metropolitan areas, including micropolitan areas, and their component counties
- 2010 census counts and more recent population estimates for all areas
- results of the 2016 national and state elections
- expanded vital statistics, communication, and criminal justice data
- data on migration and commuting habits
- American Community Survey 1- and 3-year estimates
- data on health insurance and housing and finance matters
- accurate and helpful citations to allow the user to directly consult the source
- source notes and explanations
- a guide to state statistical abstracts and state information
Economic development officials, regional planners, urban researchers, college students, and data users can easily see the trends and changes affecting the nation today.
Random set theory is a fascinating branch of mathematics that amalgamates techniques from topology, convex geometry, and probability theory. Social scientists routinely conduct empirical work with data and modelling assumptions that reveal a set to which the parameter of interest belongs, but not its exact value. Random set theory provides a coherent mathematical framework to conduct identification analysis and statistical inference in this setting and has become a fundamental tool in econometrics and finance. This is the first book dedicated to the use of the theory in econometrics, written to be accessible for readers without a background in pure mathematics. Molchanov and Molinari define the basics of the theory and illustrate the mathematical concepts by their application in the analysis of econometric models. The book includes sets of exercises to accompany each chapter as well as examples to help readers apply the theory effectively.
The Oxford Handbook of Panel Data examines new developments in the theory and applications of panel data. It includes basic topics like non-stationary panels, co-integration in panels, multifactor panel models, panel unit roots, measurement error in panels, incidental parameters and dynamic panels, spatial panels, nonparametric panel data, random coefficients, treatment effects, sample selection, count panel data, limited dependent variable panel models, unbalanced panel models with interactive effects, and influential observations in panel data. Contributors to the Handbook explore applications of panel data to a wide range of topics in economics, including health, labor, marketing, trade, productivity, and macro applications in panels. This Handbook is an informative and comprehensive guide both for those who are relatively new to the field and for those wishing to extend their knowledge to the frontier. It is a trusted and definitive source on panel data, edited by Professor Badi Baltagi, widely recognized as one of the foremost econometricians in the area of panel data econometrics. Professor Baltagi has successfully recruited an all-star cast of experts for each of the well-chosen topics in the Handbook.
A variety of different social, natural, and technological systems can be described by the same mathematical framework. This holds for systems ranging from the Internet to food webs to boards of company directors. In all these situations a graph of the elements of the system and their interconnections displays a universal feature: there are only a few elements with many connections, and many elements with few connections. This book presents the experimental evidence for these 'scale-free networks' and provides students and researchers with a corpus of theoretical results and algorithms to analyse and understand these features. The content and exposition make it a clear textbook for beginners and a reference book for experts.
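The "few elements with many connections" pattern can be generated by preferential attachment, the standard mechanism behind scale-free networks. The sketch below is a minimal Barabási-Albert-style generator; the function and parameter names are ours, not the book's.

```python
import random

# Minimal Barabási-Albert-style preferential-attachment generator.
# New nodes attach m edges to existing nodes with probability
# proportional to degree, producing a heavy-tailed degree distribution.
# An illustrative sketch, not code from the book.
def barabasi_albert(n, m, seed=0):
    rng = random.Random(seed)
    # start with a small complete core of m + 1 nodes
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # list with each node repeated once per incident edge, so a uniform
    # draw from it picks a node with probability proportional to degree
    targets = [node for e in edges for node in e]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))  # degree-weighted pick
        for t in chosen:
            edges.append((new, t))
            targets.extend([new, t])
    return edges

edges = barabasi_albert(2000, 2)
```

Plotting the resulting degree distribution on log-log axes shows the approximate power-law tail that characterizes scale-free networks.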
Several recent advances in smoothing and semiparametric regression are presented in this book from a unifying, Bayesian perspective. Simulation-based full Bayesian Markov chain Monte Carlo (MCMC) inference, as well as empirical Bayes procedures closely related to penalized likelihood estimation and mixed models, are considered here. Throughout, the focus is on semiparametric regression and smoothing based on basis expansions of unknown functions and effects, combined with smoothness priors for the basis coefficients. After a review of basic methods for smoothing and mixed models, longitudinal data, spatial data, and event history data are treated in separate chapters. Worked examples from fields such as forestry, development economics, medicine, and marketing illustrate the statistical methods covered in this book. Most of these examples have been analysed using implementations in the Bayesian software BayesX, and some with R code. These, as well as some of the data sets, are made publicly available on the website accompanying this book.
The papers in this volume analyze the deployment of Big Data to solve both existing and novel challenges in economic measurement. The existing infrastructure for the production of key economic statistics relies heavily on data collected through sample surveys and periodic censuses, together with administrative records generated in connection with tax administration. The increasing difficulty of obtaining survey and census responses threatens the viability of existing data collection approaches. The growing availability of new sources of Big Data, such as scanner data on purchases, credit card transaction records, payroll information, and prices of various goods scraped from the websites of online sellers, has changed the data landscape. These new sources of data hold the promise of allowing the statistical agencies to produce more accurate, more disaggregated, and more timely economic data to meet the needs of policymakers and other data users. This volume documents progress made toward that goal and the challenges to be overcome to realize the full potential of Big Data in the production of economic statistics. It will be of interest to statistical agency staff, academic researchers, and serious users of economic statistics.
This book integrates the fundamentals of asymptotic theory for statistical inference on time series under nonstandard settings, such as infinite-variance processes, from the points of view of efficiency, robustness, and optimality in minimizing prediction error. It is the first book to apply generalized empirical likelihood to time series models in the frequency domain, and to consider estimation motivated by minimizing quantile prediction error without assuming a true model. It offers the reader a new perspective on the prediction problem arising in time series modeling and a contemporary approach to hypothesis testing via the generalized empirical likelihood method. The nonparametric aspects of the proposed methods address economic and financial problems without imposing unnecessarily strong restrictions on the model, as earlier approaches often did. Handling infinite-variance processes makes the analysis of economic and financial data more accurate, and the scope of application is expected to extend to much broader academic fields. The methods are flexible enough to provide a unified treatment of prediction problems, including multiple-point extrapolation, interpolation, and other forecasts from incomplete past data. They thus lead readers to a good combination of efficient and robust estimators, tests, and pivotal quantities for realistic time series models.
This workbook provides a collection of exercises with detailed solutions as an introduction to applied statistics for students. The exercises cover the topics taught in roughly three semesters of statistics instruction, making the workbook of particular interest to students of economics, medicine, psychology, engineering, and computer science. Interesting scenarios, some real and some fictional, ease the learning of material that is otherwise rather dry. Practical exercises that are to be solved with the statistical software R are specially marked. The book ends with mixed-topic exercises for exam preparation.
This essentials volume explains the basic principle of statistical testing procedures, focusing on the meaning of statistical significance and the p-value. Frequently encountered misinterpretations are addressed, making clear what a significant result does and does not say. Readers are thereby enabled to handle test results appropriately.
This successful exercise book uses practical problems to introduce and deepen a wide range of methods of inferential statistics. The detailed solution sections are written so that no additional book needs to be consulted. Contents: random events and probabilities; conditional probability, independence, Bayes' formula, and reliability of systems; random variables and distributions; special distributions and limit theorems; point estimators, confidence and prediction intervals; parametric one-sample tests; goodness-of-fit tests and graphical methods for checking distributional assumptions; parametric two-sample comparisons; nonparametric, distribution-free comparisons in one- and two-sample settings; dependence analysis, correlation, and association; regression analysis; contingency table analysis; sampling methods; exam problems and solutions.
This book provides a comprehensive and unified treatment of finite sample statistics and econometrics, a field that has evolved over the last five decades. Within this framework, this is the first book to discuss the basic analytical tools of finite sample econometrics and explore their applications to models covered in a first-year graduate course in econometrics, including regression functions, dynamic models, forecasting, simultaneous equations models, panel data models, and censored models. Both linear and nonlinear models, as well as models with normal and non-normal errors, are studied.
Otto Opitz celebrates his sixtieth birthday in June 1999. On this occasion, students, close colleagues, and friends decided to produce this Festschrift. That a high correlation emerged between Otto Opitz's scientific interests and the topics of the submitted contributions is not surprising, and it made structuring this volume easier. An overview of his scientific work can be found at the end of the volume. One of Otto Opitz's most important fields of activity can be described as data analysis and classification. Just as this line of research developed out of statistics, a recent discussion has sought to establish data mining as a new field of research. Market research and marketing have always been among Otto Opitz's preferred areas of application for work in the fields named above, and not only his engagement with methods of bank market research shows that the research area characterized by capital and risk has also attracted his interest. Even Otto Opitz's early work contains game-theoretic considerations, which became the starting point for research activities in operations research, corporate planning, and economics. Naturally, developments in computer science have influenced his scientific work, with the provision of computer-based software for data analysis supporting the use of these methods in teaching.
The author examines the regularly occurring cases of erroneous financial reporting uncovered by the Deutsche Prufstelle fur Rechnungslegung (DPR) together with BaFin. He develops a broad spectrum of factors influencing erroneous financial reporting and discusses significant individual cases concerning the role of management compensation and stock trading. The focus of his book is an investigation of the connection with corporate financing. The empirical analysis shows for the first time how companies change their corporate financing after the publication of erroneous financial reports. In addition, the author provides an analysis of the error announcements, which, among other things, identifies frequently violated accounting rules.
This publication provides updated statistics on a comprehensive set of economic, financial, social, and environmental measures as well as select indicators for the Sustainable Development Goals (SDGs). The report covers the 49 regional members of ADB. It discusses trends in development progress and the challenges to achieving inclusive and sustainable economic growth across Asia and the Pacific. This 53rd edition looks at how most economies in the region have bounced back to varying degrees from the COVID-19 pandemic. A gradual recovery of cyclical industries, the release of pent-up consumer demand, and increased confidence levels have contributed to developing Asia's economy. To put into practice the "leave no one behind" principle of the Sustainable Development Goals, detailed and informative data are crucial. The 2022 report features a special supplement, Mapping the Public Voice for Development: Natural Language Processing of Social Media Text Data, which explores how natural language processing techniques can be applied to social media text data to map public sentiment and inform development research and policy making.
This exercise book provides a curated collection of problems and solutions, rounded off by a formula collection containing the most important formulas used in the book. In addition, an extensive set of R programs written for the problems and their solutions is provided; the appendix of the book therefore also includes a short introduction to the statistics software R. The content and organization, including the chapter structure, follow the Springer title "Statistik fur Bachelor- und Masterstudenten: Eine Einfuhrung fur Wirtschafts- und Sozialwissenschaftler".
This book provides an accessible guide to price index and hedonic techniques, with a focus on how to best apply these techniques and interpret the resulting measures. One goal of this book is to provide first-hand experience at constructing these measures, with guidance on practical issues such as what the ideal data would look like and how best to construct these measures when the data are less than ideal. A related objective is to fill the wide gulf between the necessarily simplistic elementary treatments in textbooks and the very complex discussions found in the theoretical and empirical measurement literature. Here, the theoretical results are summarized in an intuitive way and their numerical importance is illustrated using data and results from existing studies. Finally, while the aim of much of the existing literature is to better understand official price indexes like the Consumer Price Index, the emphasis here is more practical: to provide the needed tools for individuals to apply these techniques on their own. As new datasets become increasingly accessible, tools like these will be needed to obtain summary price measures. Indeed, these techniques have been applied for years in antitrust cases that involve pricing, where economic experts typically have access to large, granular datasets.
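As a concrete instance of the price-index construction such a book covers, the sketch below computes a matched-model Jevons index: the geometric mean of price relatives over items priced in both periods. This is a standard formula rather than code from the book, and the item names and prices are invented for illustration.

```python
import math

# Matched-model Jevons index: geometric mean of price relatives over the
# items observed in both periods. Items entering or exiting the sample
# (here "eggs") are simply dropped, which is one of the limitations that
# hedonic techniques are designed to address.
def jevons_index(prices_0, prices_1):
    common = [k for k in prices_0 if k in prices_1]
    log_relatives = [math.log(prices_1[k] / prices_0[k]) for k in common]
    return math.exp(sum(log_relatives) / len(log_relatives))

base = {"apple": 1.00, "bread": 2.00, "milk": 1.50}                 # period-0 prices (invented)
later = {"apple": 1.10, "bread": 2.20, "milk": 1.50, "eggs": 3.00}  # period-1 prices (invented)
index = jevons_index(base, later)
```

With two items up 10% and one unchanged, the index is the cube root of 1.21, roughly a 6.6% price increase over the matched sample.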