The past few years have seen a major change in computing systems, as growing data volumes and stalling processor speeds require more and more applications to scale out to clusters. Today, myriad data sources, from the Internet to business operations to scientific instruments, produce large and valuable data streams. However, the processing capabilities of single machines have not kept up with the size of data. As a result, organizations increasingly need to scale out their computations over clusters. At the same time, the speed and sophistication required of data processing have grown. In addition to simple queries, complex algorithms like machine learning and graph analysis are becoming common. And in addition to batch processing, streaming analysis of real-time data is required to let organizations take timely action. Future computing platforms will need not only to scale out traditional workloads but also to support these new applications.

This book, a revised version of the 2014 ACM Doctoral Dissertation Award-winning dissertation, proposes an architecture for cluster computing systems that can tackle emerging data processing workloads at scale. Whereas early cluster computing systems, like MapReduce, handled batch processing, our architecture also enables streaming and interactive queries, while keeping MapReduce's scalability and fault tolerance. And whereas most deployed systems support only simple one-pass computations (e.g., SQL queries), ours also extends to the multi-pass algorithms required for complex analytics like machine learning. Finally, unlike the specialized systems proposed for some of these workloads, our architecture allows these computations to be combined, enabling rich new applications that intermix, for example, streaming and batch processing.

We achieve these results through a simple extension to MapReduce that adds primitives for data sharing, called Resilient Distributed Datasets (RDDs). We show that this is enough to capture a wide range of workloads. We implement RDDs in the open source Spark system, which we evaluate using synthetic and real workloads. Spark matches or exceeds the performance of specialized systems in many domains, while offering stronger fault tolerance properties and allowing these workloads to be combined. Finally, we examine the generality of RDDs from both a theoretical modeling perspective and a systems perspective.

This version of the dissertation makes corrections throughout the text and adds a new section on the evolution of Apache Spark in industry since 2014. In addition, editing and formatting improvements have been made, and links have been added for the references.
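As a brief, hedged illustration of the data-sharing primitive described above (a sketch, not text or code from the dissertation; the file name and filter terms are hypothetical), an RDD in Spark can be cached in memory and reused across several computations:

    # Minimal PySpark sketch of RDD-based data sharing (illustrative only;
    # "logs.txt" and the filter terms are hypothetical).
    from pyspark import SparkContext

    sc = SparkContext("local[*]", "rdd-sharing-sketch")

    # Build an RDD and keep it in memory so that several computations can
    # reuse it without re-reading and re-parsing the input.
    lines = sc.textFile("logs.txt")
    errors = lines.filter(lambda line: "ERROR" in line).cache()

    # Two different computations share the same cached dataset.
    total_errors = errors.count()
    timeout_errors = errors.filter(lambda line: "timeout" in line).count()

    print(total_errors, timeout_errors)
    sc.stop()

In-memory sharing of this kind is what lets multi-pass workloads, such as iterative machine learning algorithms, avoid rereading their input on every pass.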
Learn how to use, deploy, and maintain Apache Spark with this
comprehensive guide, written by the creators of the open-source
cluster-computing framework. With an emphasis on improvements and
new features in Spark 2.0, authors Bill Chambers and Matei Zaharia
break down Spark topics into distinct sections, each with unique
goals. You'll explore the basic operations and common functions of
Spark's structured APIs, as well as Structured Streaming, a new
high-level API for building end-to-end streaming applications.
Developers and system administrators will learn the fundamentals of
monitoring, tuning, and debugging Spark, and explore machine
learning techniques and scenarios for employing MLlib, Spark's
scalable machine-learning library.

- Get a gentle overview of big data and Spark
- Learn about DataFrames, SQL, and Datasets, Spark's core APIs, through worked examples
- Dive into Spark's low-level APIs, RDDs, and execution of SQL and DataFrames
- Understand how Spark runs on a cluster
- Debug, monitor, and tune Spark clusters and applications
- Learn the power of Structured Streaming, Spark's stream-processing engine
- Learn how you can apply MLlib to a variety of problems, including classification or recommendation
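As a brief, hedged sketch of the structured APIs mentioned above (not an excerpt from the book; the file name and column names are hypothetical), the same aggregation can be expressed through the DataFrame API or as SQL:

    # Minimal PySpark structured-API sketch (illustrative only;
    # "sales.csv" and its columns are hypothetical).
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("structured-api-sketch").getOrCreate()

    # Read a CSV file into a DataFrame and run a simple aggregation.
    df = spark.read.option("header", "true").csv("sales.csv")
    summary = (
        df.groupBy("region")
          .agg(F.count("*").alias("orders"))
          .orderBy(F.col("orders").desc())
    )
    summary.show()

    # The same query expressed as SQL against a temporary view.
    df.createOrReplaceTempView("sales")
    spark.sql(
        "SELECT region, COUNT(*) AS orders FROM sales "
        "GROUP BY region ORDER BY orders DESC"
    ).show()

    spark.stop()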
Train, test, run, track, store, tune, deploy, and explain provenance-aware deep learning models and pipelines at scale with reproducibility using MLflow.

Key Features:
- Focus on deep learning models and MLflow to develop practical business AI solutions at scale
- Ship deep learning pipelines from experimentation to production with provenance tracking
- Learn to train, run, tune, and deploy deep learning pipelines with explainability and reproducibility

Book Description: The book starts with an overview of the deep learning (DL) life cycle and the emerging Machine Learning Ops (MLOps) field, providing a clear picture of the four pillars of deep learning (data, model, code, and explainability) and the role of MLflow in these areas. From there, it guides you step by step through MLflow experiments and usage patterns, using MLflow as a unified framework to track DL data, code and pipelines, models, parameters, and metrics at scale. You'll also tackle running DL pipelines in a distributed execution environment with reproducibility and provenance tracking, and tuning DL models through hyperparameter optimization (HPO) with Ray Tune, Optuna, and HyperBand. As you progress, you'll learn how to build a multi-step DL inference pipeline with preprocessing and postprocessing steps, deploy a DL inference pipeline for production using Ray Serve and AWS SageMaker, and finally create a DL explanation as a service (EaaS) using the popular Shapley Additive Explanations (SHAP) toolbox. By the end of this book, you'll have built the foundation and gained the hands-on experience you need to develop a DL pipeline solution from initial offline experimentation to final deployment and production, all within a reproducible and open source framework.

What you will learn:
- Understand MLOps and deep learning life cycle development
- Track deep learning models, code, data, parameters, and metrics
- Build, deploy, and run deep learning model pipelines anywhere
- Run hyperparameter optimization at scale to tune deep learning models
- Build production-grade multi-step deep learning inference pipelines
- Implement scalable deep learning explainability as a service
- Deploy deep learning batch and streaming inference services
- Ship practical NLP solutions from experimentation to production

Who this book is for: This book is for machine learning practitioners, including data scientists, data engineers, ML engineers, and scientists, who want to build scalable full life cycle deep learning pipelines with reproducibility and provenance tracking using MLflow. A basic understanding of data science and machine learning is necessary to grasp the concepts presented in this book.
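As a brief, hedged sketch of the experiment tracking described above (not code from the book; the experiment name, parameters, and metric value are hypothetical), MLflow's Python API can record the parameters and metrics of a run:

    # Minimal MLflow tracking sketch (illustrative only; the experiment
    # name, parameters, and metric value are hypothetical).
    import mlflow

    mlflow.set_experiment("dl-pipeline-sketch")

    with mlflow.start_run():
        # Log hyperparameters and a resulting metric so the run can be
        # compared and reproduced later from the MLflow tracking UI.
        mlflow.log_param("learning_rate", 1e-3)
        mlflow.log_param("batch_size", 32)
        mlflow.log_metric("val_accuracy", 0.91)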