Data Engineering with Apache Spark, Delta Lake, and Lakehouse - Create scalable pipelines that ingest, curate, and aggregate complex data in a timely and secure way (Paperback)
Understand the complexities of modern-day data engineering platforms and explore strategies to deal with them through use case scenarios led by an industry expert in big data.

Key Features
- Become well-versed with the core concepts of Apache Spark and Delta Lake for building data platforms (a minimal sketch follows this list)
- Learn how to ingest, process, and analyze data that can later be used for training machine learning models
- Understand how to operationalize data models in production using curated data
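As a quick orientation for the first feature above, here is a minimal sketch of writing and reading a Delta table with PySpark. It is not code from the book: the session configuration follows the standard delta-spark setup, and the path and schema are illustrative assumptions.

```python
from pyspark.sql import SparkSession

# Configure a session with the Delta Lake extensions (requires the
# delta-spark package to be installed).
spark = (
    SparkSession.builder.appName("delta-quickstart")
    .config("spark.sql.extensions",
            "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Write a small DataFrame in the Delta format: a versioned, ACID table.
df = spark.createDataFrame([(1, "alice"), (2, "bob")], ["id", "name"])
df.write.format("delta").mode("overwrite").save("/tmp/demo/customers")

# Read it back like any other Spark data source.
spark.read.format("delta").load("/tmp/demo/customers").show()
```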
Book Description
In the world of ever-changing data and schemas, it is important to build data pipelines that can auto-adjust to changes. This book will help you build scalable data platforms that managers, data scientists, and data analysts can rely on. Starting with an introduction to data engineering, along with its key concepts and architectures, this book will show you how to use Microsoft Azure cloud services effectively for data engineering. You'll cover data lake design patterns and the different stages through which the data needs to flow in a typical data lake. Once you've explored the main features of Delta Lake to build data lakes with fast performance and governance in mind, you'll advance to implementing the lambda architecture using Delta Lake (sketched below). Packed with practical examples and code snippets, this book takes you through real-world examples based on production scenarios faced by the author in his 10 years of experience working with big data. Finally, you'll cover data lake deployment strategies that play an important role in provisioning cloud resources and deploying data pipelines in a repeatable and continuous way. By the end of this data engineering book, you'll know how to deal effectively with ever-changing data and create scalable data pipelines to streamline data science, machine learning (ML), and artificial intelligence (AI) tasks.
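As a rough illustration of the lambda architecture mentioned above (not the book's own code), the sketch below pairs a batch recompute with a Structured Streaming job, both landing results in Delta tables. It reuses the spark session from the previous sketch; all paths and column names are hypothetical.

```python
from pyspark.sql import functions as F

# Batch layer: periodic full recompute over the historical raw data.
batch = spark.read.format("delta").load("/lake/raw/events")
(batch.groupBy("user_id")
      .agg(F.count("*").alias("event_count"))
      .write.format("delta").mode("overwrite")
      .save("/lake/serving/batch_counts"))

# Speed layer: incremental counts streamed from the same raw Delta table.
# A Delta table can act as a streaming source; "complete" output mode
# rewrites the small serving table with each micro-batch.
speed = (spark.readStream.format("delta").load("/lake/raw/events")
              .groupBy("user_id").count())
(speed.writeStream.format("delta")
      .outputMode("complete")
      .option("checkpointLocation", "/lake/chk/speed_counts")
      .start("/lake/serving/speed_counts"))
```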
What you will learn
- Discover the challenges you may face in the data engineering world
- Add ACID transactions to Apache Spark using Delta Lake (see the upsert sketch after this list)
- Understand effective design strategies to build enterprise-grade data lakes
- Explore architectural and design patterns for building efficient data ingestion pipelines
- Orchestrate a data pipeline for preprocessing data using Apache Spark and Delta Lake APIs
- Automate deployment and monitoring of data pipelines in production
- Get to grips with securing, monitoring, and managing data pipelines and models efficiently
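To make the ACID-transactions bullet concrete, here is a minimal upsert sketch using the Delta Lake Python API's MERGE. It assumes the /tmp/demo/customers table from the first sketch; the match condition and change data are illustrative assumptions.

```python
from delta.tables import DeltaTable

# Load the existing Delta table and a DataFrame of incoming changes.
target = DeltaTable.forPath(spark, "/tmp/demo/customers")
updates = spark.createDataFrame([(2, "bobby"), (3, "carol")], ["id", "name"])

# MERGE executes as a single ACID transaction: matched rows are updated,
# new rows are inserted, and concurrent readers never see partial results.
(target.alias("t")
       .merge(updates.alias("s"), "t.id = s.id")
       .whenMatchedUpdateAll()
       .whenNotMatchedInsertAll()
       .execute())
```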
Who this book is for
This book is for aspiring data engineers and data analysts who are new to the world of data engineering and are looking for a practical guide to building scalable data platforms. If you already work with PySpark and want to use Delta Lake for data engineering, you'll find this book useful. Basic knowledge of Python, Spark, and SQL is expected.