Modern Data Engineering with Apache Spark - A Hands-On Guide for Building Mission-Critical Streaming Applications (Paperback, 1st ed.)
Leverage Apache Spark within a modern data engineering ecosystem. This hands-on guide teaches you how to write fully functional applications, follow industry best practices, and understand the rationale behind those decisions. With Apache Spark as the foundation, you will follow a step-by-step journey, beginning with the basics of data ingestion, processing, and transformation, and ending with an entire local data platform running Apache Spark, Apache Zeppelin, Apache Kafka, Redis, MySQL, MinIO (S3), and Apache Airflow.

Apache Spark applications solve a wide range of data problems, from traditional data loading and processing to rich SQL-based analysis, complex machine learning workloads, and even near-real-time processing of streaming data. Spark fits well as a central foundation for any data engineering workload.

This book will teach you to write interactive Spark applications using Apache Zeppelin notebooks, to write and compile reusable applications and modules, and to fully test both batch and streaming applications. You will also learn to containerize your applications using Docker and to run and deploy your Spark applications with a variety of tools, such as Apache Airflow, Docker, and Kubernetes. Reading this book will empower you to take advantage of Apache Spark to optimize your data pipelines and to craft modular, testable Spark applications. You will create and deploy mission-critical streaming Spark applications in a low-stress environment that paves the way for your own path to production.

What You Will Learn
- Simplify data transformation with Spark Pipelines and Spark SQL
- Bridge data engineering with machine learning
- Architect modular data pipeline applications
- Build reusable application components and libraries
- Containerize your Spark applications for consistency and reliability
- Use Docker and Kubernetes to deploy your Spark applications
- Speed up application experimentation using Apache Zeppelin and Docker
- Understand serializable structured data and data contracts
- Harness effective strategies for optimizing data in your data lakes
- Build end-to-end Spark structured streaming applications using Redis and Apache Kafka
- Embrace testing for your batch and streaming applications
- Deploy and monitor your Spark applications

Who This Book Is For
Professional software engineers who want to take their current skills and apply them to new and exciting opportunities within the data ecosystem; practicing data engineers looking for a guiding light while traversing the many challenges of moving from batch to streaming modes; data architects who wish to provide clear and concise direction for how best to harness Apache Spark within their organization; and anyone interested in the ins and outs of becoming a modern data engineer in today's fast-paced and data-hungry world.