Writing a quantitative-empirical degree thesis is like cooking in a
university cafeteria: it should be quick, and the food should taste
good, be healthy, and be inexpensive. To achieve this, recipes must
be followed. Without passion, but professionally. This essential
provides you with exactly such recipes, which you only need to
follow. How is such a thesis structured? How do I formulate
hypotheses, and how do I test them correctly? The use of the free
statistics software "R" is explained. You will find the required
syntax in the essential; you merely need to copy it.
Best-worst scaling (BWS) is an extension of the method of paired
comparison to multiple choices that asks participants to choose
both the most and the least attractive options or features from a
set of choices. It is an increasingly popular way for academics and
practitioners in social science, business, and other disciplines to
study and model choice. This book provides an authoritative and
systematic treatment of best-worst scaling, introducing readers to
the theory and methods for three broad classes of applications. It
uses a variety of case studies to illustrate simple but reliable
ways to design, implement, apply, and analyze choice data in
specific contexts, and showcases the wide range of potential
applications across many different disciplines. Best-worst scaling
avoids many rating scale problems and will appeal to those wanting
to measure subjective quantities with known measurement properties
that can be easily interpreted and applied.
Highlights the pitfalls of data analysis and emphasizes the importance of using the appropriate metrics before making key decisions. Big data is often touted as the key to understanding almost every aspect of contemporary life. This critique of "information hubris" shows that even more important than data is finding the right metrics to evaluate it. The author, an expert in environmental design and city planning, examines the many ways in which we measure ourselves and our world. He dissects the metrics we apply to health, worker productivity, our children's education, the quality of our environment, the effectiveness of leaders, the dynamics of the economy, and the overall well-being of the planet. Among the areas where the wrong metrics have led to poor outcomes, he cites the fee-for-service model of health care, corporate cultures that emphasize time spent on the job while overlooking key productivity measures, overreliance on standardized testing in education to the detriment of authentic learning, and a blinkered focus on carbon emissions, which underestimates the impact of industrial damage to our natural world. He also examines various communities and systems that have achieved better outcomes by adjusting the ways in which they measure data. The best results are attained by those that have learned not only what to measure and how to measure it, but what it all means. By highlighting the pitfalls inherent in data analysis, this illuminating book reminds us that not everything that can be counted really counts.
This study takes a multi-paradigm approach to examining accounting
behavior specific to research & development. The starting point is
a norm-descriptive analysis of the discretionary accounting choices
available under international and German commercial-law accounting
rules. A broad descriptive-empirical study then examines the
R&D-specific accounting behavior of German listed companies. It is
striking how many companies fail to fully meet their disclosure
obligations. The subsequent positive-empirical study comprehensively
investigates the determinants of R&D-specific accounting behavior.
The results suggest that accounting behavior is substantially
influenced by managers' risk aversion.
How to Divide When There Isn't Enough develops a rigorous yet
accessible presentation of the state-of-the-art for the
adjudication of conflicting claims and the theory of taxation. It
covers all aspects one may wish to know about claims problems: the
most important rules, the most important axioms, and how these two
sets are related. More generally, it also serves as an introduction
to the modern theory of economic design, which in the last twenty
years has revolutionized many areas of economics, generating a wide
range of applicable allocation rules that have improved people's
lives in many ways. In developing the theory, the book employs a
variety of techniques that will appeal to both experts and
non-experts. Compiling decades of research into a single framework,
William Thomson provides numerous applications that will open a
large number of avenues for future research.
This essential goes beyond the formal method of statistical
decision-making and clarifies the question of how much certainty the
outcome of a test can bring to solving a problem. Anyone who wants
to acquire a basic understanding of inferential statistics in a
short time cannot do without this book. It starts from the origins
of sampling theory, explains the Bayesian evidence measures of
hypothesis testing, and discusses their effectiveness on generally
applicable cases. The material of this essential forms a scientific
path toward artificial intelligence, presenting the reader in an
original way with the object-oriented design of an artificial
decision agent and offering insights into the construction of
software components.
This book reviews the three most popular methods (and their
extensions) in applied economics and other social sciences:
matching, regression discontinuity, and difference in differences.
The book introduces the underlying econometric/statistical ideas,
shows what is identified and how the identified parameters are
estimated, and then illustrates how they are applied with real
empirical examples. The book emphasizes how to implement the three
methods with data: numerous datasets and programs are provided in
the online appendix. All readers, from theoretical econometricians
and statisticians to applied economists, social scientists,
researchers, and students, will find something useful in the book
from their own perspectives.
This book provides an application-oriented introduction to
descriptive and inferential statistics, probability theory, and
stochastic modeling, and is aimed in particular at students of
computer science, engineering and industrial engineering, and
economics. It is an ideal companion to any one-semester introductory
statistics course: the authors present the essential content and
aspects in short, concise form and deliberately dispense with
lengthy motivations. Sample exercises with detailed solutions are
available for checking one's own knowledge. The revised new edition
has been expanded by about 50 multiple-choice questions. These can
be downloaded free of charge with the Springer Nature Flashcards app
and used interactively, and a selection is also included in the
book. In this way readers can test their own understanding of the
material.
The papers in this volume analyze the deployment of Big Data to
solve both existing and novel challenges in economic measurement.
The existing infrastructure for the production of key economic
statistics relies heavily on data collected through sample surveys
and periodic censuses, together with administrative records
generated in connection with tax administration. The increasing
difficulty of obtaining survey and census responses threatens the
viability of existing data collection approaches. The growing
availability of new sources of Big Data, such as scanner data on
purchases, credit card transaction records, payroll information,
and prices of various goods scraped from the websites of online
sellers, has changed the data landscape. These new sources of data
hold the promise of allowing the statistical agencies to produce
more accurate, more disaggregated, and more timely economic data to
meet the needs of policymakers and other data users. This volume
documents progress made toward that goal and the challenges to be
overcome to realize the full potential of Big Data in the
production of economic statistics. It describes the deployment of
Big Data to solve both existing and novel challenges in economic
measurement, and it will be of interest to statistical agency
staff, academic researchers, and serious users of economic
statistics.
This textbook teaches the methods of probability theory and
inferential statistics in an application-oriented way. Using
numerous examples, the statistical methods are not only vividly
presented, but their results are also interpreted in detail. The
book is therefore excellently suited as accompanying reading, for
independently reviewing a lecture, or for looking up specific
questions. It is also recommended for practitioners, for example
from market and opinion research or from controlling, who want to
learn about conducting and interpreting statistical tests and about
computing confidence intervals.
As one of the first texts to take a behavioral approach to
macroeconomic expectations, this book introduces a new way of doing
economics. Roetheli uses cognitive psychology in a bottom-up method
of modeling macroeconomic expectations. His research is based on
laboratory experiments and historical data, which he extends to
real-world situations. Pattern extrapolation is shown to be the key
to understanding expectations of inflation and income. The
quantitative model of expectations is used to analyze the course of
inflation and nominal interest rates in a range of countries and
historical periods. The model of expected income is applied to the
analysis of business cycle phenomena such as the Great Recession in
the United States. Data and spreadsheets are provided for readers
to do their own computations of macroeconomic expectations. This
book offers new perspectives in many areas of macro and financial
economics.
This textbook presents the fundamental concepts and methods of
descriptive and inferential statistics, explains them with examples,
and practices them through exercises with detailed solutions. At the
same time, its verbal exposition, quite extensive for a statistics
book and including management summaries, is intended to help readers
critically reflect on statistical data-collection and analysis
procedures and thus assess the informational content of statistical
results in decision contexts and in hypothesis testing. The sixth
edition has been updated and supplemented with a central block of
exercises covering both descriptive and inferential statistics. The
textbook is designed to integrate, both horizontally and
vertically, into most bachelor's curricula for economics and
business students at German-speaking universities. Additional
learning and presentation aids on the internet facilitate self-study
and the use of the book as supplementary reading for a corresponding
course.
The Oxford Handbook of Panel Data examines new developments in the
theory and applications of panel data. It includes basic topics
like non-stationary panels, co-integration in panels, multifactor
panel models, panel unit roots, measurement error in panels,
incidental parameters and dynamic panels, spatial panels,
nonparametric panel data, random coefficients, treatment effects,
sample selection, count panel data, limited dependent variable
panel models, unbalanced panel models with interactive effects and
influential observations in panel data. Contributors to the
Handbook explore applications of panel data to a wide range of
topics in economics, including health, labor, marketing, trade,
productivity, and macro applications in panels. This Handbook is an
informative and comprehensive guide for both those who are
relatively new to the field and for those wishing to extend their
knowledge to the frontier. It is a trusted and definitive source on
panel data, having been edited by Professor Badi Baltagi, widely
recognized as one of the foremost econometricians in the area of
panel data econometrics. Professor Baltagi has successfully
recruited an all-star cast of experts for each of the well-chosen
topics in the Handbook.
In this best-of-breed study guide, leading expert Michael Gregg
helps you master all the topics you need to know to succeed on your
Certified Ethical Hacker Version 9 exam and advance your career in
IT security. Michael's concise, focused approach explains every
exam objective from a real-world perspective, helping you quickly
identify weaknesses and retain everything you need to know. Every
feature of this book supports both efficient exam preparation and
long-term mastery:
* Opening Topics Lists identify the topics you need to learn in
each chapter and list EC-Council's official exam objectives
* Key Topics figures, tables, and lists call attention to the
information that's most crucial for exam success
* Exam Preparation Tasks enable you to review key topics, complete
memory tables, define key terms, work through scenarios, and answer
review questions...going beyond mere facts to master the concepts
that are crucial to passing the exam and enhancing your career
* Key Terms are listed in each chapter and defined in a complete
glossary, explaining all the field's essential terminology
This study guide helps you master all the topics on the latest CEH
exam, including:
* Ethical hacking basics
* Technical foundations of hacking
* Footprinting and scanning
* Enumeration and system hacking
* Linux distros such as Kali and automated assessment tools
* Trojans and backdoors
* Sniffers, session hijacking, and denial of service
* Web server hacking, web applications, and database attacks
* Wireless technologies, mobile security, and mobile attacks
* IDS, firewalls, and honeypots
* Buffer overflows, viruses, and worms
* Cryptographic attacks and defenses
* Cloud security and social engineering
Most textbooks on regression focus on theory and the simplest of
examples. Real statistical problems, however, are complex and
subtle. This is not a book about the theory of regression. It is
about using regression to solve real problems of comparison,
estimation, prediction, and causal inference. Unlike other books,
it focuses on practical issues such as sample size and missing data
and a wide range of goals and techniques. It jumps right into
methods and computer code you can use immediately. Real examples,
real stories from the authors' experience demonstrate what
regression can do and its limitations, with practical advice for
understanding assumptions and implementing methods for experiments
and observational studies. They make a smooth transition to
logistic regression and GLM. The emphasis is on computation in R
and Stan rather than derivations, with code available online.
Graphics and presentation aid understanding of the models and model
fitting.
A variety of different social, natural, and technological systems
can be described by the same mathematical framework, from the
Internet to food webs and boards of company directors. In all these
situations, a graph of the elements of the system and their
interconnections displays a universal feature: there are only a few
elements with many connections, and many elements with few
connections. This book presents the experimental evidence for these
'scale-free networks' and provides students and researchers with a
corpus of theoretical results and algorithms for analysing and
understanding these features. Its content and exposition make it a
clear textbook for beginners and a reference book for experts.
This book integrates the fundamentals of asymptotic theory of
statistical inference for time series under nonstandard settings,
e.g., infinite variance processes, not only from the point of view
of efficiency but also from that of robustness and optimality by
minimizing prediction error. This is the first book to consider the
generalized empirical likelihood applied to time series models in
frequency domain and also the estimation motivated by minimizing
quantile prediction error without assumption of true model. It
provides the reader with a new horizon for understanding the
prediction problem that occurs in time series modeling and a
contemporary approach of hypothesis testing by the generalized
empirical likelihood method. Nonparametric aspects of the methods
proposed in this book also satisfactorily address economic and
financial problems without imposing the unduly strong restrictions
on the model that have been customary until now. Dealing with
infinite variance processes makes the analysis of economic and
financial data more accurate, building on existing results from
empirical research. The scope of application, however, is expected
to extend to much broader academic fields. The methods are also
flexible in that they represent an advanced and unified treatment
of prediction problems, including multiple-point extrapolation,
interpolation, and other forecasting problems with an incompletely
observed past. Consequently, they lead readers to a good
combination of efficient and robust estimation and testing, and to
discriminating pivotal quantities contained in realistic time
series models.
This textbook provides a vivid introduction to the classical areas
of statistics. Depending on their prior knowledge and interests,
readers will find both easily understandable explanations and
mathematical derivations. Rather than merely conveying statistical
recipes, the author emphasizes assessing assumptions and conditions
as well as interpreting results. Despite its high accessibility,
the selection of topics is by no means limited to simple content
but is oriented toward the requirements of academic education.
Recommended for students of economics and the social sciences as
well as for interested readers from other fields. The selection of
topics pays particular attention to practical relevance for career
and everyday life.