This volume offers a generally accessible overview of 100 years of the Deutsche Statistische Gesellschaft (DStatG). In 17 chapters, recognized experts describe how the DStatG has contributed to the establishment and further development of German economic and social statistics and to methodological innovations such as newer time-series, price-index, and sampling methods. Further topics include the role of the DStatG in merging East and West German statistics, as well as the preparation and execution of the most recent and the current census.
Practical Spreadsheet Modeling Using @Risk provides a guide to constructing applied decision analysis models in spreadsheets. The focus is on the use of Monte Carlo simulation to provide quantitative assessment of uncertainties and key risk drivers. The book presents numerous examples based on real data and relevant practical decisions in a variety of settings, including health care, transportation, finance, natural resources, technology, manufacturing, retail, and sports and entertainment. All examples involve decision problems where uncertainties make simulation modeling useful to obtain decision insights and explore alternative choices. Good spreadsheet modeling practices are highlighted. The book is suitable for graduate students or advanced undergraduates in business, public policy, health care administration, or any field amenable to simulation modeling of decision problems. The book is also useful for applied practitioners seeking to build or enhance their spreadsheet modeling skills. Features:
- Step-by-step examples of spreadsheet modeling and risk analysis in a variety of fields
- Description of probabilistic methods, their theoretical foundations, and their practical application in a spreadsheet environment
- Extensive example models and exercises based on real data and relevant decision problems
- Comprehensive use of the @Risk software for simulation analysis, including a free one-year educational software license
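To make the approach concrete outside a spreadsheet, the sketch below runs a bare-bones Monte Carlo risk analysis in Python; the decision problem, the distributions, and every parameter are invented for illustration (the book itself works in Excel with the @Risk add-in):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 100_000

# Hypothetical new-product decision: demand and unit cost are uncertain.
demand = rng.triangular(left=5_000, mode=12_000, right=20_000, size=n_trials)
unit_cost = rng.normal(loc=8.0, scale=1.5, size=n_trials)
price, fixed_cost = 15.0, 40_000

# Simulate profit across all trials and summarize the risk profile.
profit = demand * (price - unit_cost) - fixed_cost
print(f"mean profit: {profit.mean():,.0f}")
print(f"P(loss):     {(profit < 0).mean():.1%}")
print(f"5th / 95th percentiles: {np.percentile(profit, [5, 95]).round(0)}")
```

The point of the simulation is the full distribution of outcomes, not just the mean: the probability of a loss and the tail percentiles are exactly the quantities a spreadsheet risk model is built to expose.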
This book allows those with a basic knowledge of econometrics to learn the main nonparametric and semiparametric techniques used in econometric modelling, and how to apply them correctly. It looks at kernel density estimation, kernel regression, splines, wavelets, and mixture models, and provides useful empirical examples throughout. Through empirical applications, several economic topics are addressed, including income distribution, wage equations, economic convergence, the Phillips curve, interest rate dynamics, returns volatility, and housing prices. A helpful appendix also explains how to implement the methods using R. This useful book will appeal to practitioners and researchers who need an accessible introduction to nonparametric and semiparametric econometrics. The practical approach provides an overview of the main techniques without focusing too heavily on mathematical formulas. It also serves as an accompanying textbook for a basic course, typically at undergraduate or graduate level.
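For a flavour of the first of these techniques, here is a kernel density estimator written from scratch in Python (the book's own examples use R); the income sample and the rule-of-thumb bandwidth are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical right-skewed income sample.
incomes = rng.lognormal(mean=10.0, sigma=0.5, size=500)

def kde(x_grid, data, bandwidth):
    """Gaussian kernel density estimate evaluated at each point of x_grid."""
    u = (x_grid[:, None] - data[None, :]) / bandwidth
    kernels = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
    return kernels.mean(axis=1) / bandwidth

# Silverman's rule-of-thumb bandwidth.
h = 1.06 * incomes.std(ddof=1) * len(incomes) ** (-1 / 5)
grid = np.linspace(incomes.min(), incomes.max(), 400)
density = kde(grid, incomes, h)

# Sanity check: the estimate should integrate to roughly one over the grid.
print(f"bandwidth = {h:,.0f}, integral ~ {(density * (grid[1] - grid[0])).sum():.3f}")
```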
Originally published in 1985. Mathematical methods and models to facilitate the understanding of economic dynamics and prediction had been refined considerably in the period before this book was written. The field had grown, and many of the techniques involved had become extremely complicated. Areas of particular interest include optimal control, non-linear models, game-theoretic approaches, demand analysis, and time-series forecasting. This book presents a critical appraisal of these developments and identifies potentially productive new directions for research. It synthesises work from mathematics, statistics, and economics, and includes a thorough analysis of the relationship between system understanding and predictability.
Plenty of literature reviews and applications of various tests are provided to cover all aspects of research methodology, and a range of examination questions is included. Strong pedagogy runs throughout, with regular features such as Concept Checks, Text Overviews, Key Terms, Review Questions, Exercises, and References. Though the book is primarily addressed to students, it will be equally useful to researchers and entrepreneurs. More than other research textbooks, this book addresses the student's need to comprehend all aspects of research, including the research process, clarification of the research problem, ethical issues, survey research, and research report preparation and presentation.
Bootstrapping is a conceptually simple statistical technique to increase the quality of estimates, conduct robustness checks and compute standard errors for virtually any statistic. This book provides an intelligible and compact introduction for students, scientists and practitioners. It not only gives a clear explanation of the underlying concepts but also demonstrates the application of bootstrapping using Python and Stata.
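A minimal sketch of the idea in Python (one of the two languages the book demonstrates), using an invented sample and the sample mean as the statistic of interest:

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical sample of 200 observations.
sample = rng.normal(loc=10.0, scale=3.0, size=200)

def bootstrap(data, stat=np.mean, n_boot=5_000):
    """Draw bootstrap replicates of `stat` by resampling with replacement."""
    n = len(data)
    return np.array([stat(rng.choice(data, size=n, replace=True))
                     for _ in range(n_boot)])

reps = bootstrap(sample)
lo, hi = np.percentile(reps, [2.5, 97.5])
print(f"bootstrap SE of the mean: {reps.std(ddof=1):.3f}")
print(f"95% percentile interval:  ({lo:.2f}, {hi:.2f})")
```

The same recipe works for medians, correlations, regression coefficients, or any other statistic: only the `stat` argument changes.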
A unique and comprehensive source of information, this book is the only international publication providing economists, planners, policymakers and business people with worldwide statistics on current performance and trends in the manufacturing sector. The Yearbook is designed to facilitate international comparisons relating to manufacturing activity and industrial development and performance. It provides data which can be used to analyse patterns of growth and related long-term trends, structural change and industrial performance in individual industries. Statistics on employment patterns, wages, consumption and gross output and other key indicators are also presented.
Through the use of practical examples and a plainspoken narrative style that minimises the use of maths, this book demystifies data concepts, sources, and methods for public service professionals interested in understanding economic and social issues at the regional level. By blending elements of a general interest book, a textbook, and a reference book, it equips civic leaders, public administrators, urban planners, nonprofit executives, philanthropists, journalists, and graduate students in various public affairs disciplines to wield social and economic data for the benefit of their communities. While numerous books about quantitative research exist, few focus specifically on the public sector. Running the Numbers, in contrast, explores a wide array of topics of regional importance, including economic output, demographics, business structure, labour markets, and income, among many others. To that end, the book stresses practical applications and employs extended, chapter-length examples that demonstrate how analytical tools can illuminate the social and economic workings of actual American regions.
Contains information for using R software with the examples in the textbook Sampling: Design and Analysis, 3rd edition by Sharon L. Lohr.
The book is designed as an introductory textbook. Its main audiences are students on business and economics degree programmes and practitioners from industry who want to familiarise themselves with the fundamental working and methodological areas of quantitative data analysis. For both target groups, statistics is often not easily accessible. This is where this textbook comes in, aiming to ease the learning process through a series of deliberate measures:
- Selection of material: application areas typical for economists, and statistical methods used and proven in practice.
- Chapter structure: systematic, with an introduction, a typical practical example, solution approaches, method description, interpretation of results, critical appraisal, a summary of formulas, and exercises.
- Method description: systematic, with the concept (in words), operationalisation and formalisation (in symbols), and application to the introductory example (in numbers).
- Formulas: developed from the method descriptions, with mathematical derivations only where indispensable.
- Illustrations: tables, diagrams, and pictures for visualisation.
- Exercises: chapter by chapter, with questions, problems, and model solutions.
From the contents: 1. Descriptive statistics, covering fundamentals; the preparation, presentation, and evaluation (summary measures) of univariate cross-sectional data; concentration analysis; longitudinal data analysis with ratios and index numbers; and multidimensional analysis. 2. Analytical statistics, covering regression, correlation, contingency, time series analysis, and time-series-based forecasting. 3. Probability analysis, covering fundamentals, random variables and probability distributions, and important discrete and continuous distribution models. 4. Inferential statistics, covering sampling statistics, and estimation and testing for univariate distributions and for relationships.
This new edition updates Durbin & Koopman's important text on the state space approach to time series analysis. The distinguishing feature of state space time series models is that observations are regarded as made up of distinct components such as trend, seasonal, regression elements and disturbance terms, each of which is modelled separately. The techniques that emerge from this approach are very flexible and are capable of handling a much wider range of problems than the main analytical system currently in use for time series analysis, the Box-Jenkins ARIMA system. Additions to this second edition include the filtering of nonlinear and non-Gaussian series. Part I of the book obtains the mean and variance of the state, of a variable intended to measure the effect of an intervention and of regression coefficients, in terms of the observations. Part II extends the treatment to nonlinear and non-normal models. For these, analytical solutions are not available so methods are based on simulation.
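As a minimal illustration of the machinery, the sketch below filters the simplest state space model, the local level model, where the observation is a random-walk level plus noise; it is written in Python with invented variances and is not code from the book:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a local level model: y_t = mu_t + eps_t,  mu_t = mu_{t-1} + eta_t.
T, sigma_eps, sigma_eta = 100, 1.0, 0.3
mu = np.cumsum(rng.normal(0, sigma_eta, T))
y = mu + rng.normal(0, sigma_eps, T)

# Kalman filter with a diffuse initial state.
a, P = 0.0, 1e7          # state mean and variance
filtered = np.empty(T)
for t in range(T):
    v = y[t] - a                      # prediction error
    F = P + sigma_eps**2              # prediction error variance
    K = P / F                         # Kalman gain
    a = a + K * v                     # updated level estimate
    filtered[t] = a
    P = P * (1 - K) + sigma_eta**2    # variance predicted for t + 1

print(f"last filtered level: {filtered[-1]:.2f} (true level {mu[-1]:.2f})")
```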
Since the first edition of this book was published, Bayesian networks have become even more important for applications in a vast array of fields. This second edition includes new material on influence diagrams, learning from data, value of information, cybersecurity, debunking bad statistics, and much more. Focusing on practical real-world problem-solving and model building, as opposed to algorithms and theory, it explains how to incorporate knowledge with data to develop and use (Bayesian) causal models of risk that provide more powerful insights and better decision making than is possible from purely data-driven solutions. Features:
- Provides all tools necessary to build and run realistic Bayesian network models
- Supplies extensive example models based on real risk assessment problems in a wide range of application domains, for example finance, safety, systems reliability, law, forensics, cybersecurity, and more
- Introduces all necessary mathematics, probability, and statistics as needed
- Establishes the basics of probability, risk, and building and using Bayesian network models before going into the detailed applications
A dedicated website contains exercises and worked solutions for all chapters, along with numerous other resources. The AgenaRisk software contains a model library with executable versions of all of the models in the book. Lecture slides are freely available to accredited academic teachers adopting the book on their course.
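To show what inference in such a network amounts to, the sketch below answers a diagnostic query in a three-node network by brute-force enumeration in plain Python; the structure and every probability are invented for illustration (the book itself builds its models in AgenaRisk):

```python
from itertools import product

# Hypothetical risk network: Flaw -> Failure <- Overload.
p_flaw = {True: 0.1, False: 0.9}
p_overload = {True: 0.2, False: 0.8}
p_fail = {(True, True): 0.95, (True, False): 0.60,   # P(failure | flaw, overload)
          (False, True): 0.30, (False, False): 0.01}

def joint(flaw, overload, fail):
    """Joint probability of one full assignment of the three variables."""
    pf = p_fail[(flaw, overload)]
    return p_flaw[flaw] * p_overload[overload] * (pf if fail else 1 - pf)

# Diagnostic query via Bayes' rule: P(flaw | failure observed).
num = sum(joint(True, o, True) for o in (True, False))
den = sum(joint(f, o, True) for f, o in product((True, False), repeat=2))
print(f"P(flaw | failure) = {num / den:.3f}")
```

Observing a failure raises the probability of a flaw well above its 0.1 prior, which is exactly the kind of evidence-driven updating the book's causal risk models perform at scale.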
Virtually any random process developing chronologically can be viewed as a time series. In economics, closing prices of stocks, the cost of money, the jobless rate, and retail sales are just a few examples of many. Developed from course notes and extensively classroom-tested, Applied Time Series Analysis with R, Second Edition includes examples across a variety of fields, develops theory, and provides an R-based software package to aid in addressing time series problems in a broad spectrum of fields. The material is organized in an optimal format for graduate students in statistics as well as in the natural and social sciences to learn to use and understand the tools of applied time series analysis. Features:
- Gives readers the ability to actually solve significant real-world problems
- Addresses many types of nonstationary time series and cutting-edge methodologies
- Promotes understanding of the data and associated models rather than viewing them as the output of a "black box"
- Provides the R package tswge, available on CRAN, which contains functions and over 100 real and simulated data sets to accompany the book; extensive help regarding the use of tswge functions is provided in appendices and on an associated website
- Over 150 exercises and extensive support for instructors
The second edition includes additional real-data examples and uses R-based code that helps students easily analyze data, generate realizations from models, and explore the associated characteristics. It also adds discussion of new advances in the analysis of long memory data and data with time-varying frequencies (TVF).
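As a toy version of the kind of analysis the book teaches, here is an AR(1) model simulated and estimated in plain Python rather than with the book's R/tswge toolchain; the model order and parameter value are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate an AR(1) process: x_t = phi * x_{t-1} + a_t.
phi_true, T = 0.8, 500
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi_true * x[t - 1] + rng.normal()

# Estimate phi by conditional least squares (regress x_t on x_{t-1}).
x_lag, x_cur = x[:-1], x[1:]
phi_hat = (x_lag @ x_cur) / (x_lag @ x_lag)
resid = x_cur - phi_hat * x_lag
print(f"phi_hat = {phi_hat:.3f} (true {phi_true}), "
      f"residual sd = {resid.std(ddof=1):.3f}")
```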
This textbook provides future data analysts with the tools, methods, and skills needed to answer data-focused, real-life questions; to carry out data analysis; and to visualize and interpret results to support better decisions in business, economics, and public policy. Data wrangling and exploration, regression analysis, machine learning, and causal analysis are comprehensively covered, as well as when, why, and how the methods work, and how they relate to each other. As the most effective way to communicate data analysis, running case studies play a central role in this textbook. Each case starts with an industry-relevant question and answers it by using real-world data and applying the tools and methods covered in the textbook. Learning is then consolidated by 360 practice questions and 120 data exercises. Extensive online resources, including raw and cleaned data and codes for all analysis in Stata, R, and Python, can be found at www.gabors-data-analysis.com.
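As a taste of the simplest tool in that toolkit, here is an ordinary least squares regression in Python (one of the three languages the book's online code covers); the question, variables, and data are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical cross-section: does advertising spend predict sales?
n = 200
advertising = rng.uniform(0, 100, n)
sales = 50 + 2.5 * advertising + rng.normal(0, 20, n)

# OLS with an intercept, solved by least squares on the design matrix.
X = np.column_stack([np.ones(n), advertising])
beta, *_ = np.linalg.lstsq(X, sales, rcond=None)
resid = sales - X @ beta
print(f"intercept = {beta[0]:.2f}, slope = {beta[1]:.2f}, "
      f"residual sd = {resid.std(ddof=2):.2f}")
```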
Bernan Press proudly presents the 15th edition of Employment, Hours, and Earnings: States and Areas, 2020. A special addition to Bernan Press's Handbook of U.S. Labor Statistics: Employment, Earnings, Prices, Productivity, and Other Labor Data, this reference is a consolidated wealth of employment information, providing monthly and annual data on hours worked and earnings made by industry, including figures and summary information spanning several years. These data are presented for states and metropolitan statistical areas. This edition features:
- Nearly 300 tables with data on employment for each state, the District of Columbia, and the nation's seventy-five largest metropolitan statistical areas (MSAs)
- Detailed, non-seasonally adjusted industry data organized by month and year
- Hours and earnings data for each state, by industry
- An introduction for each state and the District of Columbia that notes salient data and noteworthy trends, including changes in population and the civilian labor force, industry increases and declines, employment and unemployment statistics, and a chart detailing employment percentages by industry
- Rankings of the seventy-five largest MSAs, including census population estimates, unemployment rates, and the percent change in total nonfarm employment
- Concise technical notes that explain pertinent facts about the data, including sources, definitions, and significant changes, and provide references for further guidance
- A comprehensive appendix that details the geographical components of the seventy-five largest MSAs
The employment, hours, and earnings data in this publication provide a detailed and timely picture of the fifty states, the District of Columbia, and the nation's seventy-five largest MSAs. These data can be used to analyze key factors affecting state and local economies and to compare national cyclical trends to local-level economic activity. This reference is an excellent source of information for analysts in both the public and private sectors. Readers who are involved in public policy can use these data to determine the health of the economy, to clearly identify which sectors are growing and which are declining, and to determine the need for federal assistance. State and local jurisdictions can use the data to determine the need for services, including training and unemployment assistance, and for planning and budgetary purposes. In addition, the data can be used to forecast tax revenue. In private industry, the data can be used by business owners to compare their business to the economy as a whole, and to identify suitable areas when making decisions about plant locations, wholesale and retail trade outlets, and locating a particular sector base.
How the obsession with quantifying human performance threatens our schools, medical care, businesses, and government. Today, organizations of all kinds are ruled by the belief that the path to success is quantifying human performance, publicizing the results, and dividing up the rewards based on the numbers. But in our zeal to instill the evaluation process with scientific rigor, we've gone from measuring performance to fixating on measuring itself. The result is a tyranny of metrics that threatens the quality of our lives and most important institutions. In this timely and powerful book, Jerry Muller uncovers the damage our obsession with metrics is causing, and shows how we can begin to fix the problem. Filled with examples from education, medicine, business and finance, government, the police and military, and philanthropy and foreign aid, this brief and accessible book explains why the seemingly irresistible pressure to quantify performance distorts and distracts, whether by encouraging "gaming the stats" or "teaching to the test." That's because what can and does get measured is not always worth measuring, may not be what we really want to know, and may draw effort away from the things we care about. Along the way, we learn why paying for measured performance doesn't work, why surgical scorecards may increase deaths, and much more. But metrics can be good when used as a complement to, rather than a replacement for, judgment based on personal experience, and Muller also gives examples of when metrics have been beneficial. Complete with a checklist of when and how to use metrics, The Tyranny of Metrics is an essential corrective to a rarely questioned trend that increasingly affects us all.
Introduction to Statistics with SPSS does not require any prior knowledge of statistics. The book can be used rewardingly during, after, or parallel to a course on statistics. A wide range of terms and techniques is covered, including those involved in simple and multiple regression analyses. After studying this book, the student will be able to enter data from a simple research project into a computer, provide an adequate analysis of these data, and present a report on the subject.
This book presents strategies for analyzing qualitative and mixed methods data with MAXQDA software, and provides guidance on implementing a variety of research methods and approaches, e.g. grounded theory, discourse analysis and qualitative content analysis, using the software. In addition, it explains specific topics, such as transcription, building a coding frame, visualization, analysis of videos, concept maps, group comparisons and the creation of literature reviews. The book is intended for masters and PhD students as well as researchers and practitioners dealing with qualitative data in various disciplines, including the educational and social sciences, psychology, public health, business or economics.
Despite the unobserved components model (UCM) having many advantages over more popular forecasting techniques based on regression analysis, exponential smoothing, and ARIMA, the UCM is not well known among practitioners outside the academic community. Time Series Modelling with Unobserved Components rectifies this deficiency by giving a practical overview of the UCM approach, covering some theoretical details, several applications, and the software for implementing UCMs. The book's first part discusses introductory time series and prediction theory. Unlike most other books on time series, this text includes a chapter on prediction at the beginning because the problem of predicting is not limited to the field of time series analysis. The second part introduces the UCM, the state space form, and related algorithms. It also provides practical modeling strategies to build and select the UCM that best fits the needs of time series analysts. The third part presents real-world applications, with a chapter focusing on business cycle analysis and the construction of band-pass filters using UCMs. The book also reviews software packages that offer ready-to-use procedures for UCMs as well as systems popular among statisticians and econometricians that allow general estimation of models in state space form. This book demonstrates the numerous benefits of using UCMs to model time series data. UCMs are simple to specify, their results are easy to visualize and communicate to non-specialists, and their forecasting performance is competitive. Moreover, various types of outliers can easily be identified, missing values are effortlessly managed, and working contemporaneously with time series observed at different frequencies poses no problem.
This book provides a comprehensive account of stochastic filtering as a modeling tool in finance and economics. It aims to present this very important tool with a view to making it more popular among researchers in the disciplines of finance and economics. It is not intended to give a complete mathematical treatment of different stochastic filtering approaches, but rather to describe them in simple terms and illustrate their application with real historical data for problems normally encountered in these disciplines. Beyond laying out the steps to be implemented, the steps are demonstrated in the context of different market segments. Although no prior knowledge in this area is required, the reader is expected to have knowledge of probability theory as well as a general mathematical aptitude. Its simple presentation of complex algorithms required to solve modeling problems in increasingly sophisticated financial markets makes this book particularly valuable as a reference for graduate students and researchers interested in the field. Furthermore, it analyses the model estimation results in the context of the market and contrasts these with contemporary research publications. It is also suitable for use as a text for graduate level courses on stochastic modeling.
Introduction to Statistical Decision Theory: Utility Theory and Causal Analysis provides the theoretical background to approach decision theory from a statistical perspective. It covers both traditional approaches, in terms of value theory and expected utility theory, and recent developments, in terms of causal inference. The book is specifically designed to appeal to students and researchers who intend to acquire a knowledge of statistical science based on decision theory. Features:
- Covers approaches for making decisions under certainty, risk, and uncertainty
- Illustrates expected utility theory and its extensions
- Describes approaches to elicit the utility function
- Reviews classical and Bayesian approaches to statistical inference based on decision theory
- Discusses the role of causal analysis in statistical decision theory
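As a minimal sketch of the expected utility idea, the Python fragment below compares a risky venture with a certain payoff under an exponential utility function; the lottery, the certain amount, and the risk aversion coefficient are all invented for illustration:

```python
import math

def u(wealth, r=1e-4):
    """Exponential utility with constant absolute risk aversion r."""
    return 1 - math.exp(-r * wealth)

risky = [(0.6, 20_000), (0.4, -5_000)]   # (probability, monetary outcome)
certain = 6_000

# Expected utility of each action; pick the larger.
eu_risky = sum(p * u(w) for p, w in risky)
eu_certain = u(certain)
print(f"EU(risky) = {eu_risky:.4f}, EU(certain) = {eu_certain:.4f}")
print("choose:", "risky" if eu_risky > eu_certain else "certain")
```

Note that the risky venture has the higher expected monetary value (10,000 versus 6,000), yet a sufficiently risk-averse utility function prefers the certain payoff; that gap is precisely what utility theory is for.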
Collecting and analyzing data on unemployment, inflation, and inequality help describe the complex world around us. When published by the government, such data are called official statistics. They are reported by the media, used by politicians to lend weight to their arguments, and used by economic commentators to opine about the state of society. Despite such widescale use, explanations of how these measures are constructed are seldom provided for a non-technical reader. Measuring Society is a short, accessible guide to six topics: jobs, house prices, inequality, prices for goods and services, poverty, and deprivation. Each relates to concepts we use on a personal level to form an understanding of the society in which we live: we need a job, a place to live, and food to eat. Using data from the United States, we answer three basic questions: why, how, and for whom these statistics have been constructed. We add context and flavor by discussing the historical background. This book provides the reader with a good grasp of these measures. Chaitra H. Nagaraja is an Associate Professor of Statistics at the Gabelli School of Business at Fordham University in New York. Her research interests include house price indices and inequality measurement. Prior to Fordham, Dr. Nagaraja was a researcher at the U.S. Census Bureau. While there, she worked on projects relating to the American Community Survey.
This pioneering work gives an insight into the daily work of the national statistical institutions of the old command economies in their endeavour to meet the challenge of transition to a market-oriented system of labour statistics variables and indicators. Distinct from any other publication with statistics on Central and East European countries and the former Soviet Union, it reveals why and how new statistics are being collected and what still has to be done in order to make their national data compatible with the rest of the world. The authors discuss the problems involved in the measurement of employment (in both the state and the private sectors) and unemployment, the collection of reliable wage statistics, and the development of new economic classifications in line with those internationally recognized and adopted. They also make a number of recommendations on how to adapt ILO international standards in order to meet the above needs.
Bayesian Statistical Methods provides data scientists with the foundational and computational tools needed to carry out a Bayesian analysis. This book focuses on Bayesian methods applied routinely in practice, including multiple linear regression, mixed effects models and generalized linear models (GLM). The authors include many examples with complete R code and comparisons with analogous frequentist procedures. In addition to the basic concepts of Bayesian inferential methods, the book covers many general topics:
- Advice on selecting prior distributions
- Computational methods, including Markov chain Monte Carlo (MCMC)
- Model comparison and goodness-of-fit measures, including sensitivity to priors
- Frequentist properties of Bayesian methods
Case studies covering advanced topics illustrate the flexibility of the Bayesian approach:
- Semiparametric regression
- Handling of missing data using predictive distributions
- Priors for high-dimensional regression models
- Computational techniques for large datasets
- Spatial data analysis
The advanced topics are presented with sufficient conceptual depth that the reader will be able to carry out such analysis and argue the relative merits of Bayesian and classical methods. A repository of R code, motivating data sets, and complete data analyses are available on the book's website. Brian J. Reich, Associate Professor of Statistics at North Carolina State University, is currently the editor-in-chief of the Journal of Agricultural, Biological, and Environmental Statistics and was awarded the LeRoy & Elva Martin Teaching Award. Sujit K. Ghosh, Professor of Statistics at North Carolina State University, has over 22 years of research and teaching experience in conducting Bayesian analyses, received the Cavell Brownie mentoring award, and served as the Deputy Director at the Statistical and Applied Mathematical Sciences Institute.
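As a minimal example of the MCMC machinery mentioned above, here is a random-walk Metropolis sampler for the posterior mean of normal data, sketched in Python rather than the book's R; the data, prior, and tuning constants are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
data = rng.normal(loc=5.0, scale=2.0, size=50)   # hypothetical observations

def log_post(mu, sigma=2.0, prior_sd=10.0):
    """Log posterior for the mean: normal likelihood, N(0, prior_sd^2) prior."""
    log_lik = -0.5 * np.sum((data - mu) ** 2) / sigma**2
    log_prior = -0.5 * mu**2 / prior_sd**2
    return log_lik + log_prior

# Random-walk Metropolis: propose a move, accept with the Metropolis ratio.
n_iter, step = 10_000, 0.5
mu, samples = 0.0, np.empty(n_iter)
for i in range(n_iter):
    proposal = mu + rng.normal(0, step)
    if np.log(rng.uniform()) < log_post(proposal) - log_post(mu):
        mu = proposal
    samples[i] = mu

kept = samples[2_000:]                            # discard burn-in
print(f"posterior mean {kept.mean():.2f}, 95% interval "
      f"({np.percentile(kept, 2.5):.2f}, {np.percentile(kept, 97.5):.2f})")
```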