A new and important contribution to the re-emergent field of
comparative anthropology, this book argues that comparative
ethnographic methods are essential for more contextually
sophisticated accounts of a number of pressing human concerns
today. The book includes expert accounts from an international team
of scholars, showing how these methods can be used to illuminate
important theoretical and practical projects. Illustrated with
examples of successful inter-disciplinary projects, it highlights
the challenges, benefits, and innovative strategies involved in
working collaboratively across disciplines. Through its focus on
practical methodological and logistical accounts, it will be of
value to both seasoned researchers who seek practical models for
conducting their own cutting-edge comparative research, and to
teachers and students who are looking for first-person accounts of
comparative ethnographic research.
Even though many data analytics tools have been developed in recent
years, their usage in the field of cyber twin warrants new
approaches that consider various aspects including unified data
representation, zero-day attack detection, data sharing across
threat detection systems, real-time analysis, sampling,
dimensionality reduction, resource-constrained data processing, and
time series analysis for anomaly detection. Further study is
required to fully understand the opportunities, benefits, and
difficulties of data analytics and the internet of things in
today's modern world. New Approaches to Data Analytics and Internet
of Things Through Digital Twin considers how data analytics and the
internet of things can be used successfully within the field of
digital twin as well as the potential future directions of these
technologies. Covering key topics such as edge networks, deep
learning, intelligent data analytics, and knowledge discovery, this
reference work is ideal for computer scientists, industry
professionals, researchers, scholars, practitioners, academicians,
instructors, and students.
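Among the topics this blurb lists is time series analysis for anomaly detection. As a purely illustrative sketch (not taken from the book), one of the simplest such techniques flags points that deviate strongly from the series mean, using a z-score rule; the threshold and readings below are made-up demo values.

```python
# Illustrative z-score anomaly detector (stdlib only); the data,
# threshold, and function name are made up for this sketch.
import statistics

def zscore_anomalies(series, threshold=2.5):
    """Return indices whose value lies more than `threshold`
    population standard deviations from the series mean."""
    mean = statistics.fmean(series)
    stdev = statistics.pstdev(series)
    if stdev == 0:
        return []  # a constant series has no outliers
    return [i for i, x in enumerate(series)
            if abs(x - mean) / stdev > threshold]

readings = [10, 11, 9, 10, 12, 10, 11, 95, 10, 9]
print(zscore_anomalies(readings))  # [7]: the spike at index 7
```

Real digital-twin pipelines would use rolling windows and more robust statistics, but the flagging principle is the same.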
Learn how to make the right decisions for your business with the
help of Python recipes and the expertise of data leaders.
Key Features:
* Learn and practice various clustering techniques to gather market insights
* Explore real-life use cases from the business world to contextualize your learning
* Work your way through practical recipes that will reinforce what you have learned
Book Description:
One of the most valuable contributions of data science
is toward helping businesses make the right decisions.
Understanding this complicated confluence of two disparate worlds,
as well as a fiercely competitive market, calls for all the
guidance you can get. The Art of Data-Driven Business is your
invaluable guide to gaining a business-driven perspective, as well
as leveraging the power of machine learning (ML) to guide
decision-making in your business. This book provides a common
ground of discussion for several profiles within a company. You'll
begin by looking at how to use Python and its many libraries for
machine learning. Experienced data scientists may want to skip this
short introduction, but you'll soon get to the meat of the book and
explore the many and varied ways ML with Python can be applied to
the domain of business decisions through real-world business
problems that you can tackle by yourself. As you advance, you'll
gain practical insights into the value that ML can provide to your
business, as well as the technical ability to apply a wide variety
of tried-and-tested ML methods. By the end of this Python book,
you'll have learned the value of basing your business decisions on
data-driven methodologies and have developed the Python skills
needed to apply what you've learned in the real world.
What you will learn:
* Create effective dashboards with the seaborn library
* Predict whether a customer will cancel their subscription to a service
* Analyze key pricing metrics with pandas
* Recommend the right products to your customers
* Determine the costs and benefits of promotions
* Segment your customers using clustering algorithms
Who this book is for:
This book is for data scientists, machine learning
engineers and developers, data engineers, and business decision
makers who want to apply data science for business process
optimization and develop the skills needed to implement data
science projects in marketing, sales, pricing, customer success, ad
tech, and more from a business perspective. Other professionals
looking to explore how data science can be used to improve business
operations, as well as individuals with technical skills who want
to back their technical proposal with a strong business case will
also find this book useful.
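The blurb's recipes include segmenting customers with clustering algorithms. The book's own recipes are not reproduced here; as a minimal stand-in, the sketch below clusters one-dimensional spend values with a tiny hand-rolled k-means (real projects would typically reach for scikit-learn), with all data invented for the demo.

```python
# Minimal 1-D k-means for customer segmentation; a toy sketch,
# not the book's implementation. Data values are made up.

def kmeans_1d(values, k=2, iters=20):
    """Cluster 1-D values into k groups; returns a label per value."""
    # Naive initialization: spread starting centroids across the sorted data.
    centroids = sorted(values)[::max(1, len(values) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            groups[nearest].append(v)
        # Move each centroid to the mean of its assigned points.
        centroids = [sum(g) / len(g) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return [min(range(k), key=lambda i: abs(v - centroids[i])) for v in values]

spend = [12, 15, 14, 300, 310, 13, 295]
print(kmeans_1d(spend))  # [0, 0, 0, 1, 1, 0, 1]: low vs. high spenders
```

The low spenders and the big spenders land in separate segments, which is the kind of market insight the blurb describes.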
Elementary Statistics: A Guide to Data Analysis Using R provides
students with an introduction to both the field of statistics and
R, one of the most widely used languages for statistical computing,
analysis, and graphing in a variety of fields, including the
sciences, finance, banking, health care, e-commerce, and marketing.
Part I provides an overview of both statistics and R. Part II
focuses on descriptive statistics and probability. In Part III,
students learn about discrete and continuous probability
distributions with chapters addressing probability distributions,
binomial probability distributions, and normal probability
distributions. Part IV speaks to statistical inference with content
covering confidence intervals, hypothesis testing, chi-square tests
and F-distributions. The final part explores additional statistical
inference and assumptions, including correlation, regression, and
nonparametric statistics. Helpful appendices provide students with
an index of terminology, an index of applications, a glossary of
symbols, and a guide to the most common R commands. Elementary
Statistics is an ideal resource for introductory courses in
undergraduate statistics, graduate statistics, and data analysis
across the disciplines.
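The book itself works in R, but the binomial probability distributions its Part III covers are easy to illustrate in a few lines of stdlib Python; the coin-flip example below is a generic textbook illustration, not drawn from the book.

```python
# Binomial pmf from first principles: P(X = k) for X ~ Binomial(n, p).
# A generic illustration; the book's own examples use R.
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Probability of exactly 3 heads in 5 fair coin flips:
print(binom_pmf(3, 5, 0.5))  # 0.3125
```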
This report discusses the role computer-assisted personal
interviewing (CAPI) can play in transforming survey data collection
to allow better monitoring of the Sustainable Development Goals.
The first part of this publication provides rigorous quantitative
evidence on why CAPI is a better alternative to the traditional pen
and paper interviewing method, particularly in the context of
nationally representative surveys. The second part discusses the
benefits of delivering CAPI training to statisticians using the
popular massive online open course format. The final part provides
a summary of existing CAPI platforms and offers some preliminary
advice for NSOs to consider when selecting a CAPI platform for
their institution. This is a Special Supplement to the Key
Indicators for Asia and the Pacific 2019.
The social sciences are becoming datafied. The questions once
considered the domain of sociologists are now answered by data
scientists operating on large datasets and breaking with
methodological tradition, for better or worse. The traditional
social sciences, such as sociology or anthropology, are under the
double threat of becoming marginalized or even irrelevant, both
from new methods of research which require more computational
skills and from increasing competition from the corporate world
which gains an additional advantage based on data access. However,
unlike data scientists, sociologists and anthropologists have a
long history of doing qualitative research. The more quantified
datasets we have, the more difficult it is to interpret them
without adding layers of qualitative interpretation. Big Data
therefore needs Thick Data. This book presents the available
arsenal of new methods and tools for studying society both
quantitatively and qualitatively, opening ground for the social
sciences to take the lead in analysing digital behaviour. It shows
that Big Data can and should be supplemented and interpreted
through thick data as well as cultural analysis. Thick Big Data is
critically important for students and researchers in the social
sciences to understand the possibilities of digital analysis, both
in the quantitative and qualitative area, and to successfully build
mixed-methods approaches.
Many people go through life in a rather hit-or-miss fashion,
casting about for ideas to explain why their projects improve or
decline, why they are successful or why they are not. Guessing and
"hunches," however, are not very reliable. And without the
knowledge of how to actually investigate situations, good or bad,
and get the true facts, a person is set adrift in a sea of
unevaluated data. Accurate investigation is, in fact, a rare
commodity. Man's tendency in matters he doesn't understand is to
accept the first proffered explanation, no matter how faulty. Thus
investigatory technology had not actually been practiced or
refined. However, L. Ron Hubbard made a breakthrough in the subject
of logic and reasoning which led to his development of the first
truly effective way to search for and consistently find the actual
causes for things. Knowing how to investigate gives one the power
to navigate through the random facts and opinions and emerge with
the real reasons behind success or failure in any aspect of life.
By really finding out why things are the way they are, one is
therefore able to remedy and improve a situation, any situation.
This is an invaluable technology for people in all walks of life.
This publication presents a case study in East Java, Indonesia,
about ADB's collaboration with local governments and other
stakeholders in monitoring, implementing, raising awareness, and
advocating for the Sustainable Development Goals (SDGs). The SDGs
set global, big-picture targets that nations have committed to
attaining. However, unless action is taken at the local level,
these targets can never be reached. The case studies on Lumajang and
Pacitan districts demonstrate how ADB has been helping to make data
available and accessible in a visually attractive and
easy-to-understand way for different local stakeholders, thereby
contributing to localizing SDGs.
Explore common and not-so-common data transformation scenarios and
solutions to become well-versed with Tableau Prep and create
efficient and powerful data pipelines.
Key Features:
* Combine, clean, and shape data for analysis using self-service data preparation techniques
* Become proficient with Tableau Prep for building and managing data flows across your organization
* Learn how to combine multiple data transformations in order to build a robust dataset
Book Description:
Tableau Prep is a tool in the Tableau software
suite, created specifically to develop data pipelines. This book
will describe, in detail, a variety of scenarios that you can apply
in your environment for developing, publishing, and maintaining
complex Extract, Transform and Load (ETL) data pipelines. The book
starts by showing you how to set up Tableau Prep Builder. You'll
learn how to obtain data from various data sources, including
files, databases, and Tableau Extracts. Next, the book demonstrates
how to perform data cleaning and data aggregation in Tableau Prep
Builder. You'll also gain an understanding of Tableau Prep Builder
and how you can leverage it to create data pipelines that prepare
your data for downstream analytics processes, including reporting
and dashboard creation in Tableau. As part of a Tableau Prep flow,
you'll also explore how to use R and Python to implement data
science components inside a data pipeline. In the final chapter,
you'll apply the knowledge you've gained to build two use cases
from scratch, including a data flow for a retail store to prepare a
robust dataset using multiple disparate sources and a data flow for
a call center to perform ad hoc data analysis. By the end of this
book, you'll be able to create, run, and publish Tableau Prep flows
and implement solutions to common problems in data pipelines.
What you will learn:
* Perform data cleaning and preparation techniques for advanced data analysis
* Understand how to combine multiple disparate datasets
* Prepare data for different Business Intelligence (BI) tools
* Apply Tableau Prep's calculation language to create powerful calculations
* Use Tableau Prep for ad hoc data analysis and data science flows
* Deploy Tableau Prep flows to Tableau Server and Tableau Online
Who this book is for:
This book is for business
intelligence professionals, data analysts, and Tableau users
looking to learn Tableau Prep essentials and create data pipelines
or ETL processes using it. Beginner-level knowledge of data
management will be beneficial to understand the concepts covered in
this Tableau cookbook more effectively.
New and expanded edition. An International Bestseller - Over One
Million Copies Sold! Shortlisted for the Financial Times/Goldman
Sachs Business Book of the Year Award. Since Aristotle, we have
fought to understand the causes behind everything. But this
ideology is fading. In the age of big data, we can crunch an
incomprehensible amount of information, providing us with
invaluable insights about the what rather than the why. We're just
starting to reap the benefits: tracking vital signs to foresee
deadly infections, predicting building fires, anticipating the best
moment to buy a plane ticket, seeing inflation in real time and
monitoring social media in order to identify trends. But there is a
dark side to big data. Will it be machines, rather than people,
that make the decisions? How do you regulate an algorithm? What
will happen to privacy? Will individuals be punished for acts they
have yet to commit? In this groundbreaking and fascinating book,
two of the world's most-respected data experts reveal the reality
of a big data world and outline clear and actionable steps that
will equip the reader with the tools needed for this next phase of
human evolution.
Social Network Analysis: Methods and Examples prepares social
science students to conduct their own social network analysis (SNA)
by covering basic methodological tools along with illustrative
examples from various fields. This innovative book takes a
conceptual rather than a mathematical approach as it discusses the
connection between what SNA methods have to offer and how those
methods are used in research design, data collection, and analysis.
Four substantive applications chapters provide examples from
politics, work and organizations, mental and physical health, and
crime and terrorism studies.
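One of the basic methodological tools an SNA text like this covers is degree centrality, the share of other actors a node is directly tied to. The toy friendship network below is invented for illustration and is not from the book, which takes a conceptual rather than computational approach.

```python
# Degree centrality for an undirected edge list: degree / (n - 1).
# Toy network and function name are made up for this sketch.

def degree_centrality(edges):
    """Map each node to its normalized degree in an undirected graph."""
    nodes = {v for edge in edges for v in edge}
    degree = {v: 0 for v in nodes}
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    n = len(nodes)
    return {v: d / (n - 1) for v, d in degree.items()}

friendships = [("ann", "bea"), ("ann", "cal"), ("ann", "dev"), ("bea", "cal")]
print(degree_centrality(friendships)["ann"])  # 1.0: tied to everyone else
```

In practice researchers would use a library such as NetworkX, but the measure itself is this simple.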
Reinforce your understanding of data science and data analysis from
a statistical perspective to extract meaningful insights from your
data using Python programming.
Key Features:
* Work your way through the entire data analysis pipeline with statistics concerns in mind to make reasonable decisions
* Understand how various data science algorithms function
* Build a solid foundation in statistics for data science and machine learning using Python-based examples
Book Description:
Statistics remains the backbone of modern analysis tasks,
helping you to interpret the results produced by data science
pipelines. This book is a detailed guide covering the math and
various statistical methods required for undertaking data science
tasks. The book starts by showing you how to preprocess data and
inspect distributions and correlations from a statistical
perspective. You'll then get to grips with the fundamentals of
statistical analysis and apply its concepts to real-world datasets.
As you advance, you'll find out how statistical concepts emerge
from different stages of data science pipelines, understand the
summary of datasets in the language of statistics, and use it to
build a solid foundation for robust data products such as
explanatory models and predictive models. Once you've uncovered the
working mechanism of data science algorithms, you'll cover
essential concepts for efficient data collection, cleaning, mining,
visualization, and analysis. Finally, you'll implement statistical
methods in key machine learning tasks such as classification,
regression, tree-based methods, and ensemble learning. By the end
of this Essential Statistics for Non-STEM Data Analysts book,
you'll have learned how to build and present a self-contained,
statistics-backed data product to meet your business goals.
What you will learn:
* Find out how to grab and load data into an analysis environment
* Perform descriptive analysis to extract meaningful summaries from data
* Discover probability, parameter estimation, hypothesis tests, and experiment design best practices
* Get to grips with resampling and bootstrapping in Python
* Delve into statistical tests with variance analysis, time series analysis, and A/B test examples
* Understand the statistics behind popular machine learning algorithms
* Answer questions on statistics for data scientist interviews
Who this book is for:
This book is an entry-level guide
for data science enthusiasts, data analysts, and anyone starting
out in the field of data science and looking to learn the essential
statistical concepts with the help of simple explanations and
examples. If you're a developer or student with a non-mathematical
background, you'll find this book useful. Working knowledge of the
Python programming language is required.
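The blurb promises coverage of resampling and bootstrapping in Python. As a minimal sketch of the idea (not the book's code; the dataset, iteration count, and seed are invented), a percentile bootstrap estimates a confidence interval for the mean by resampling the data with replacement:

```python
# Percentile bootstrap for a mean's confidence interval.
# Sample data, n_boot, and seed are arbitrary demo choices.
import random

def bootstrap_mean_ci(data, n_boot=2000, alpha=0.05, seed=42):
    """Return an approximate (1 - alpha) CI for the mean of `data`."""
    random.seed(seed)
    means = sorted(
        sum(random.choices(data, k=len(data))) / len(data)
        for _ in range(n_boot)
    )
    lo = means[int(n_boot * alpha / 2)]
    hi = means[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

sample = [3.1, 2.9, 3.4, 3.0, 2.8, 3.3, 3.2, 2.7]
lo, hi = bootstrap_mean_ci(sample)
print(f"95% CI for the mean: ({lo:.2f}, {hi:.2f})")
```

The appeal of the bootstrap, which the book's resampling chapter presumably develops further, is that it needs no distributional assumptions beyond the sample itself.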
Spatial Regression Analysis Using Eigenvector Spatial Filtering
provides theoretical foundations and guides practical
implementation of the Moran eigenvector spatial filtering (MESF)
technique. MESF is a novel and powerful spatial statistical
methodology that allows spatial scientists to account for spatial
autocorrelation in their georeferenced data analyses. Its appeal lies
in its simplicity, yet implementing it involves serious complexities
associated with constructing an eigenvector spatial
filter. This book discusses MESF specifications for various
intermediate-level topics, including spatially varying coefficients
models, (non) linear mixed models, local spatial autocorrelation,
space-time models, and spatial interaction models. Spatial
Regression Analysis Using Eigenvector Spatial Filtering is
accompanied by sample R codes and a Windows application with
illustrative datasets so that readers can replicate the examples in
the book and apply the methodology to their own application
projects. It also includes a Foreword by Pierre Legendre.
Models and methods for operational risk assessment and mitigation
are gaining importance in financial institutions, healthcare
organizations, industry, and businesses in general.
This book introduces modern Operational Risk Management and
describes how various data sources of different types, both numeric
and semantic (such as text), can be integrated and analyzed.
The book also demonstrates how Operational Risk Management is
synergistic with other risk management activities such as Financial
Risk Management and Safety Management. Operational Risk Management:
a practical approach to intelligent data analysis provides
practical and tested methodologies for combining structured and
unstructured, semantic-based data, and numeric data, in Operational
Risk Management (OpR) data analysis.
Key Features:
* The book is presented in four parts: 1) Introduction to OpR Management, 2) Data for OpR Management, 3) OpR Analytics, and 4) OpR Applications and its Integration with other Disciplines.
* Explores integration of semantic, unstructured textual data in Operational Risk Management.
* Provides novel techniques for combining qualitative and quantitative information to assess risks and design mitigation strategies.
* Presents a comprehensive treatment of "near-misses" data and incidents in Operational Risk Management.
* Looks at case studies in the financial and industrial sector.
* Discusses application of ontology engineering to model knowledge used in Operational Risk Management.
Many real-life examples are presented,
mostly based on the MUSING project co-funded by the EU FP6
Information Society Technology Programme. It provides a unique
multidisciplinary perspective on the important and evolving topic
of Operational Risk Management. The book will be useful to
operational risk practitioners, risk managers in banks, hospitals
and industry looking for modern approaches to risk management that
combine an analysis of structured and unstructured data. The book
will also benefit academics interested in research in this field,
looking for techniques developed in response to real world
problems.