An Architecture for Fast and General Data Processing on Large Clusters (Paperback)
Series: ACM Books

The past few years have seen a major change in computing systems, as growing data volumes and stalling processor speeds require more and more applications to scale out to clusters. Today, a myriad of data sources, from the Internet to business operations to scientific instruments, produce large and valuable data streams. However, the processing capabilities of single machines have not kept up with the growth of data. As a result, organizations increasingly need to scale out their computations over clusters. At the same time, the speed and sophistication required of data processing have grown. In addition to simple queries, complex algorithms like machine learning and graph analysis are becoming common. And in addition to batch processing, streaming analysis of real-time data is required to let organizations take timely action. Future computing platforms will need to not only scale out traditional workloads but also support these new applications.

This book, a revised version of the 2014 ACM Doctoral Dissertation Award-winning dissertation, proposes an architecture for cluster computing systems that can tackle emerging data processing workloads at scale. Whereas early cluster computing systems, like MapReduce, handled batch processing, our architecture also enables streaming and interactive queries, while keeping MapReduce's scalability and fault tolerance. And whereas most deployed systems support only simple one-pass computations (e.g., SQL queries), ours also extends to the multi-pass algorithms required for complex analytics like machine learning. Finally, unlike the specialized systems proposed for some of these workloads, our architecture allows these computations to be combined, enabling rich new applications that intermix, for example, streaming and batch processing.
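
To make the intermixing concrete, here is a minimal sketch in Scala using Spark's DStream API (Spark is the system introduced below). The socket source, the hdfs://example/profiles.txt path, and the comma-separated record format are all hypothetical; the point is only that each streaming micro-batch can be joined against a batch-loaded RDD using the same operators as a batch job.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StreamBatchMix {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("StreamBatchMix").setMaster("local[2]")
    // Process the stream in one-second micro-batches.
    val ssc = new StreamingContext(conf, Seconds(1))

    // Batch side: a static RDD of (userId, profile) pairs.
    // The path and "userId,profile" format are illustrative only.
    val profiles = ssc.sparkContext
      .textFile("hdfs://example/profiles.txt")
      .map { line => val Array(user, profile) = line.split(","); (user, profile) }

    // Streaming side: live "userId,event" records from a local socket.
    val events = ssc.socketTextStream("localhost", 9999)
      .map { line => val Array(user, event) = line.split(","); (user, event) }

    // Intermix the two: join every micro-batch against the static RDD.
    val enriched = events.transform(batch => batch.join(profiles))
    enriched.print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```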

We achieve these results through a simple extension to MapReduce that adds primitives for data sharing, called Resilient Distributed Datasets (RDDs). We show that this is enough to capture a wide range of workloads.
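
The data-sharing primitive is easy to see in code. The following is a minimal sketch, assuming Spark's Scala RDD API: a dataset is loaded once, cached in cluster memory, and then reused across several passes, the pattern that multi-pass analytics depend on. The log-file path and the error-counting queries are hypothetical.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object RddSharing {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("RddSharing").setMaster("local[*]"))

    // Load a (hypothetical) log file; cache() marks the filtered RDD
    // for in-memory reuse. RDDs are lazy, so nothing runs yet.
    val errors = sc.textFile("hdfs://example/app.log")
      .filter(_.contains("ERROR"))
      .cache()

    // Pass 1 materializes the RDD and keeps its partitions in memory.
    val total = errors.count()

    // Passes 2 and 3 reuse the cached partitions instead of re-reading
    // the file -- the data sharing that plain MapReduce lacks.
    val timeouts = errors.filter(_.contains("timeout")).count()
    val sample = errors.take(5)

    println(s"total=$total, timeouts=$timeouts, sample=${sample.mkString("; ")}")
    sc.stop()
  }
}
```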

We implement RDDs in the open source Spark system, which we evaluate using synthetic and real workloads. Spark matches or exceeds the performance of specialized systems in many domains, while offering stronger fault tolerance properties and allowing these workloads to be combined. Finally, we examine the generality of RDDs from both a theoretical modeling perspective and a systems perspective.

This version of the dissertation makes corrections throughout the text and adds a new section on the evolution of Apache Spark in industry since 2014. In addition, the references have been edited and formatted, and links have been added.