Highlights the pitfalls of data analysis and emphasizes the importance of using the appropriate metrics before making key decisions. Big data is often touted as the key to understanding almost every aspect of contemporary life. This critique of "information hubris" shows that even more important than data is finding the right metrics to evaluate it. The author, an expert in environmental design and city planning, examines the many ways in which we measure ourselves and our world. He dissects the metrics we apply to health, worker productivity, our children's education, the quality of our environment, the effectiveness of leaders, the dynamics of the economy, and the overall well-being of the planet. Among the areas where the wrong metrics have led to poor outcomes, he cites the fee-for-service model of health care, corporate cultures that emphasize time spent on the job while overlooking key productivity measures, overreliance on standardized testing in education to the detriment of authentic learning, and a blinkered focus on carbon emissions, which underestimates the impact of industrial damage to our natural world. He also examines various communities and systems that have achieved better outcomes by adjusting the ways in which they measure data. The best results are attained by those that have learned not only what to measure and how to measure it, but what it all means. By highlighting the pitfalls inherent in data analysis, this illuminating book reminds us that not everything that can be counted really counts.
The book covers the design and analysis of experiments for continuous, normally distributed responses; for continuous responses based on rank data; for categorical, in particular binary, responses based on log-linear models; and for categorical correlated responses based on marginal models and symmetric regression models.
A comprehensive and up-to-date introduction to the mathematics that all economics students need to know Probability theory is the quantitative language used to handle uncertainty and is the foundation of modern statistics. Probability and Statistics for Economists provides graduate and PhD students with an essential introduction to mathematical probability and statistical theory, which are the basis of the methods used in econometrics. This incisive textbook teaches fundamental concepts, emphasizes modern, real-world applications, and gives students an intuitive understanding of the mathematics that every economist needs to know. Covers probability and statistics with mathematical rigor while emphasizing intuitive explanations that are accessible to economics students of all backgrounds Discusses random variables, parametric and multivariate distributions, sampling, the law of large numbers, central limit theory, maximum likelihood estimation, numerical optimization, hypothesis testing, and more Features hundreds of exercises that enable students to learn by doing Includes an in-depth appendix summarizing important mathematical results as well as a wealth of real-world examples Can serve as a core textbook for a first-semester PhD course in econometrics and as a companion book to Bruce E. Hansen's Econometrics Also an invaluable reference for researchers and practitioners
Practical Spreadsheet Modeling Using @Risk provides a guide of how to construct applied decision analysis models in spreadsheets. The focus is on the use of Monte Carlo simulation to provide quantitative assessment of uncertainties and key risk drivers. The book presents numerous examples based on real data and relevant practical decisions in a variety of settings, including health care, transportation, finance, natural resources, technology, manufacturing, retail, and sports and entertainment. All examples involve decision problems where uncertainties make simulation modeling useful to obtain decision insights and explore alternative choices. Good spreadsheet modeling practices are highlighted. The book is suitable for graduate students or advanced undergraduates in business, public policy, health care administration, or any field amenable to simulation modeling of decision problems. The book is also useful for applied practitioners seeking to build or enhance their spreadsheet modeling skills. Features Step-by-step examples of spreadsheet modeling and risk analysis in a variety of fields Description of probabilistic methods, their theoretical foundations, and their practical application in a spreadsheet environment Extensive example models and exercises based on real data and relevant decision problems Comprehensive use of the @Risk software for simulation analysis, including a free one-year educational software license
A fair question to ask of an advocate of subjective Bayesianism (which the author is) is "how would you model uncertainty?" In this book, the author writes about how he has done it using real problems from the past, and offers additional comments about the context in which he was working.
Every loan carries risk for the lender, since it is uncertain whether the borrower will meet their payment obligations. This credit risk is measured using statistical methods, and against the backdrop of Basel II, credit risk measurement has gained in importance. This book closes the gap between introductory statistics texts and mathematically demanding works. It offers an entry point to credit risk measurement and the statistics it requires. Starting from the most important credit risk concepts, it describes their statistical analogues. It covers the relevant statistical distributions and provides an introduction to stochastic processes, portfolio models, and scoring and rating models. Numerous practical examples make it an ideal starting point for practitioners and those entering the field from other disciplines.
How the obsession with quantifying human performance threatens our schools, medical care, businesses, and government Today, organizations of all kinds are ruled by the belief that the path to success is quantifying human performance, publicizing the results, and dividing up the rewards based on the numbers. But in our zeal to instill the evaluation process with scientific rigor, we've gone from measuring performance to fixating on measuring itself. The result is a tyranny of metrics that threatens the quality of our lives and most important institutions. In this timely and powerful book, Jerry Muller uncovers the damage our obsession with metrics is causing-and shows how we can begin to fix the problem. Filled with examples from education, medicine, business and finance, government, the police and military, and philanthropy and foreign aid, this brief and accessible book explains why the seemingly irresistible pressure to quantify performance distorts and distracts, whether by encouraging "gaming the stats" or "teaching to the test." That's because what can and does get measured is not always worth measuring, may not be what we really want to know, and may draw effort away from the things we care about. Along the way, we learn why paying for measured performance doesn't work, why surgical scorecards may increase deaths, and much more. But metrics can be good when used as a complement to-rather than a replacement for-judgment based on personal experience, and Muller also gives examples of when metrics have been beneficial. Complete with a checklist of when and how to use metrics, The Tyranny of Metrics is an essential corrective to a rarely questioned trend that increasingly affects us all.
This volume offers a generally accessible overview of 100 years of the German Statistical Society (Deutsche Statistische Gesellschaft, DStatG). In 17 chapters, recognized experts describe how the DStatG has contributed to the founding and development of German economic and social statistics and to methodological innovations such as newer time series, price index, and sampling methods. Further topics include the DStatG's role in merging East and West German statistics, as well as the preparation and execution of the previous and the current census.
Bootstrapping is a conceptually simple statistical technique to increase the quality of estimates, conduct robustness checks and compute standard errors for virtually any statistic. This book provides an intelligible and compact introduction for students, scientists and practitioners. It not only gives a clear explanation of the underlying concepts but also demonstrates the application of bootstrapping using Python and Stata.
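The core idea described above can be sketched in a few lines of Python. This is a minimal illustration (not taken from the book): the function name `bootstrap_se` and the sample values are hypothetical, and the sketch estimates the standard error of the mean by resampling the data with replacement and taking the standard deviation of the resampled statistics.

```python
import random
import statistics

def bootstrap_se(data, stat=statistics.mean, n_resamples=2000, seed=42):
    """Estimate the standard error of a statistic by bootstrapping:
    resample the data with replacement, recompute the statistic on each
    resample, and return the standard deviation of those replicates."""
    rng = random.Random(seed)
    n = len(data)
    replicates = [
        stat([data[rng.randrange(n)] for _ in range(n)])  # one resample
        for _ in range(n_resamples)
    ]
    return statistics.stdev(replicates)

# Hypothetical sample; the bootstrap SE should be close to the
# analytical standard error of the mean, s / sqrt(n).
sample = [2.1, 2.5, 1.9, 3.2, 2.8, 2.4, 3.0, 2.2]
print(round(bootstrap_se(sample), 3))
```

The same resampling loop works unchanged for statistics with no simple standard-error formula (medians, ratios, regression coefficients), which is what makes the technique so broadly applicable.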
Originally published in 1985. Mathematical methods and models to facilitate the understanding of the processes of economic dynamics and prediction were refined considerably over the period before this book was written. The field had grown; and many of the techniques involved became extremely complicated. Areas of particular interest include optimal control, non-linear models, game-theoretic approaches, demand analysis and time-series forecasting. This book presents a critical appraisal of developments and identifies potentially productive new directions for research. It synthesises work from mathematics, statistics and economics and includes a thorough analysis of the relationship between system understanding and predictability.
This book allows those with a basic knowledge of econometrics to learn the main nonparametric and semiparametric techniques used in econometric modelling, and how to apply them correctly. It looks at kernel density estimation, kernel regression, splines, wavelets, and mixture models, and provides useful empirical examples throughout. Using empirical application, several economic topics are addressed, including income distribution, wage equation, economic convergence, the Phillips curve, interest rate dynamics, returns volatility, and housing prices. A helpful appendix also explains how to implement the methods using R. This useful book will appeal to practitioners and researchers who need an accessible introduction to nonparametric and semiparametric econometrics. The practical approach provides an overview of the main techniques without including too much focus on mathematical formulas. It also serves as an accompanying textbook for a basic course, typically at undergraduate or graduate level.
A unique and comprehensive source of information, this book is the only international publication providing economists, planners, policymakers and business people with worldwide statistics on current performance and trends in the manufacturing sector. The Yearbook is designed to facilitate international comparisons relating to manufacturing activity and industrial development and performance. It provides data which can be used to analyse patterns of growth and related long term trends, structural change and industrial performance in individual industries. Statistics on employment patterns, wages, consumption and gross output and other key indicators are also presented.
Since the first edition of this book was published, Bayesian networks have become even more important for applications in a vast array of fields. This second edition includes new material on influence diagrams, learning from data, value of information, cybersecurity, debunking bad statistics, and much more. Focusing on practical real-world problem-solving and model building, as opposed to algorithms and theory, it explains how to incorporate knowledge with data to develop and use (Bayesian) causal models of risk that provide more powerful insights and better decision making than is possible from purely data-driven solutions. Features Provides all tools necessary to build and run realistic Bayesian network models Supplies extensive example models based on real risk assessment problems in a wide range of application domains, for example finance, safety, systems reliability, law, forensics, cybersecurity and more Introduces all necessary mathematics, probability, and statistics as needed Establishes the basics of probability, risk, and building and using Bayesian network models, before going into the detailed applications A dedicated website contains exercises and worked solutions for all chapters along with numerous other resources. The AgenaRisk software contains a model library with executable versions of all of the models in the book. Lecture slides are freely available to accredited academic teachers adopting the book on their course.
Offers a practical introduction to regression modeling with spatial and spatial-temporal data relevant to research and teaching in the social and economic sciences Focuses on a few key datasets and data analysis using the open source software WinBUGS, R, and GeoDa Provides data and programming codes to allow users to undertake their own analyses Ends each chapter with a set of short exercises and questions for further study
Discover how statistical information impacts decisions in today's business world as Anderson/Sweeney/Williams/Camm/Cochran/Fry/Ohlmann's leading ESSENTIALS OF STATISTICS FOR BUSINESS AND ECONOMICS, 9E connects concepts in each chapter to real-world practice. This edition delivers sound statistical methodology, a proven problem-scenario approach and meaningful applications that reflect the latest developments in business and statistics today. More than 350 new and proven real business examples, a wealth of practical cases and meaningful hands-on exercises highlight statistics in action. You gain practice using leading professional statistical software with exercises and appendices that walk you through using JMP (R) Student Edition 14 and Excel (R) 2016. WebAssign's online course management system is available separately to further strengthen this business statistics approach and help you maximize your course success.
The book is designed as an introductory textbook. Its main audiences are students in business and economics programs and practitioners from industry who want to familiarize themselves with the fundamental areas and methods of quantitative data analysis. Statistics is often not easily accessible to either group. This is where this textbook comes in, deliberately easing the learning process through a number of measures. Selection of material: application areas typical for business, and statistical methods used and proven in practice. Chapter structure: systematic, with introduction, a typical practical example, solution approaches, method description, interpretation of results, appraisal, a compilation of formulas, and exercises. Method description: systematic, with the concept (in words), operationalization and formalization (in symbols), and application to the introductory example (in numbers). Formulas: developed from the method description, with mathematical derivations only where indispensable. Illustrations: tables, diagrams and figures for visualization. Exercises: per chapter, with questions, problems and model solutions. From the contents: 1. Descriptive statistics, covering foundations, preparation, presentation and evaluation (summary measures) of univariate cross-sectional data, concentration analysis, longitudinal data analysis with measures and index numbers, and multidimensional analysis. 2. Analytical statistics, with regression, correlation, contingency, time series analysis and time-series-based forecasting. 3. Probability analysis, with foundations, random variables and probability distributions, and important discrete and continuous distribution models. 4. Inferential statistics, with sampling statistics, estimation and testing for univariate distributions and for relationships.
This book presents strategies for analyzing qualitative and mixed methods data with MAXQDA software, and provides guidance on implementing a variety of research methods and approaches, e.g. grounded theory, discourse analysis and qualitative content analysis, using the software. In addition, it explains specific topics, such as transcription, building a coding frame, visualization, analysis of videos, concept maps, group comparisons and the creation of literature reviews. The book is intended for masters and PhD students as well as researchers and practitioners dealing with qualitative data in various disciplines, including the educational and social sciences, psychology, public health, business or economics.
Introduction to statistics with SPSS does not require any prior knowledge of statistics. The book can be rewardingly used in, after or parallel to a course on statistics. A wide range of terms and techniques is covered, including those involved in simple and multiple regression analyses. After studying this book, the student will be able to enter data from a simple research project into a computer, provide an adequate analysis of these data and present a report on the subject.
Despite the unobserved components model (UCM) having many advantages over more popular forecasting techniques based on regression analysis, exponential smoothing, and ARIMA, the UCM is not well known among practitioners outside the academic community. Time Series Modelling with Unobserved Components rectifies this deficiency by giving a practical overview of the UCM approach, covering some theoretical details, several applications, and the software for implementing UCMs. The book's first part discusses introductory time series and prediction theory. Unlike most other books on time series, this text includes a chapter on prediction at the beginning because the problem of predicting is not limited to the field of time series analysis. The second part introduces the UCM, the state space form, and related algorithms. It also provides practical modeling strategies to build and select the UCM that best fits the needs of time series analysts. The third part presents real-world applications, with a chapter focusing on business cycle analysis and the construction of band-pass filters using UCMs. The book also reviews software packages that offer ready-to-use procedures for UCMs as well as systems popular among statisticians and econometricians that allow general estimation of models in state space form. This book demonstrates the numerous benefits of using UCMs to model time series data. UCMs are simple to specify, their results are easy to visualize and communicate to non-specialists, and their forecasting performance is competitive. Moreover, various types of outliers can easily be identified, missing values are effortlessly managed, and working contemporaneously with time series observed at different frequencies poses no problem.
"Wirtschaftsinformatik" conveys the fundamentals of information as an economic success factor, from the underlying technologies through security aspects to applications in companies. Each chapter is devoted to a fundamental concept of business informatics, describing the basic knowledge, the current state of the art, and the expected future development. Symbols and margin notes ensure a clear layout that allows the reader to orient themselves quickly and reliably. The clear structure, combined with a comprehensive index, makes the book usable both as a textbook and as a reference work.
'Refreshingly clear and engaging' Tim Harford 'Delightful . . . full of unique insights' Prof Sir David Spiegelhalter There's no getting away from statistics. We encounter them every day. We are all users of statistics whether we like it or not. Do missed appointments really cost the NHS GBP1bn per year? What's the difference between the mean gender pay gap and the median gender pay gap? How can we work out if a claim that we use 42 billion single-use plastic straws per year in the UK is accurate? What did the Vote Leave campaign's GBP350m bus really mean? How can we tell if the headline 'Public pensions cost you GBP4,000 a year' is correct? Does snow really cost the UK economy GBP1bn per day? But how do we distinguish statistical fact from fiction? What can we do to decide whether a number, claim or news story is accurate? Without an understanding of data, we cannot truly understand what is going on in the world around us. Written by Anthony Reuben, the BBC's first head of statistics, Statistical is an accessible and empowering guide to challenging the numbers all around us.
Collecting and analyzing data on unemployment, inflation, and inequality helps describe the complex world around us. When published by the government, such data are called official statistics. They are reported by the media, used by politicians to lend weight to their arguments, and by economic commentators to opine about the state of society. Despite such widescale use, explanations of how these measures are constructed are seldom provided for a non-technical reader. Measuring Society is a short, accessible guide to six topics: jobs, house prices, inequality, prices for goods and services, poverty, and deprivation. Each relates to concepts we use on a personal level to form an understanding of the society in which we live: we need a job, a place to live, and food to eat. Using data from the United States, we answer three basic questions: why, how, and for whom these statistics have been constructed. We add some context and flavor by discussing the historical background. This book provides the reader with a good grasp of these measures. Chaitra H. Nagaraja is an Associate Professor of Statistics at the Gabelli School of Business at Fordham University in New York. Her research interests include house price indices and inequality measurement. Prior to Fordham, Dr. Nagaraja was a researcher at the U.S. Census Bureau. While there, she worked on projects relating to the American Community Survey.
Introduction to Statistical Decision Theory: Utility Theory and Causal Analysis provides the theoretical background to approach decision theory from a statistical perspective. It covers both traditional approaches, in terms of value theory and expected utility theory, and recent developments, in terms of causal inference. The book is specifically designed to appeal to students and researchers that intend to acquire a knowledge of statistical science based on decision theory. Features Covers approaches for making decisions under certainty, risk, and uncertainty Illustrates expected utility theory and its extensions Describes approaches to elicit the utility function Reviews classical and Bayesian approaches to statistical inference based on decision theory Discusses the role of causal analysis in statistical decision theory