Solve real-world data problems and create data-driven workflows for easy data movement and processing at scale with Azure Data Factory.

Key Features
- Learn how to load and transform data from various sources, both on-premises and in the cloud
- Use Azure Data Factory's visual environment to build and manage hybrid ETL pipelines
- Discover how to prepare, transform, process, and enrich data to generate key insights

Book Description
Azure Data Factory (ADF) is a modern data integration tool available on Microsoft Azure. This Azure Data Factory Cookbook helps you get up and running by showing you how to create and execute your first job in ADF. You'll learn how to branch and chain activities, create custom activities, and schedule pipelines. This book will help you to discover the benefits of cloud data warehousing, Azure Synapse Analytics, and Azure Data Lake Gen2 Storage, which are frequently used for big data analytics. With practical recipes, you'll learn how to actively engage with analytical tools from Azure Data Services and leverage your on-premises infrastructure with cloud-native tools to get relevant business insights. As you advance, you'll be able to integrate the most commonly used Azure services into ADF and understand how Azure services can be useful in designing ETL pipelines. The book will take you through the common errors that you may encounter while working with ADF and show you how to use the Azure portal to monitor pipelines. You'll also understand error messages and resolve problems in connectors and data flows with the debugging capabilities of ADF. By the end of this book, you'll be able to use ADF as the main ETL and orchestration tool for your data warehouse or data platform projects.

What you will learn
- Create an orchestration and transformation job in ADF
- Develop, execute, and monitor data flows using Azure Synapse
- Create big data pipelines using Azure Data Lake and ADF
- Build a machine learning app with Apache Spark and ADF
- Migrate on-premises SSIS jobs to ADF
- Integrate ADF with commonly used Azure services such as Azure ML, Azure Logic Apps, and Azure Functions
- Run big data compute jobs within HDInsight and Azure Databricks
- Copy data from AWS S3 and Google Cloud Storage to Azure Storage using ADF's built-in connectors

Who this book is for
This book is for ETL developers, data warehouse and ETL architects, software professionals, and anyone who wants to learn about the common and not-so-common challenges faced while developing traditional and hybrid ETL solutions using Microsoft's Azure Data Factory. You'll also find this book useful if you are looking for recipes to improve or enhance your existing ETL pipelines. Basic knowledge of data warehousing is expected.
Plan, design, develop, and manage robust Power BI solutions to generate meaningful insights and make data-driven decisions. Purchase of the print or Kindle book includes a free eBook in PDF format.

Key Features
- Master the latest dashboarding and reporting features of Microsoft Power BI
- Combine data from multiple sources, create stunning visualizations, and publish Power BI apps to thousands of users
- Get the most out of Microsoft Power BI with real-world use cases and examples

Book Description
Mastering Microsoft Power BI, Second Edition, provides an advanced understanding of Power BI to get the most out of your data and maximize business intelligence. This updated edition walks through each essential phase and component of Power BI, and explores the latest, most impactful Power BI features. Using best practices and working code examples, you will connect to data sources, shape and enhance source data, and develop analytical data models. You will also learn how to apply custom visuals, implement new DAX commands and paginated SSRS-style reports, manage application workspaces and metadata, and understand how content can be staged and securely distributed via Power BI apps. Furthermore, you will explore top report and interactive dashboard design practices using features such as bookmarks and the Power KPI visual, alongside the latest capabilities of Power BI mobile applications and self-service BI techniques. Additionally, important management and administration topics are covered, including application lifecycle management via Power BI pipelines, the on-premises data gateway, and Power BI Premium capacity. By the end of this Power BI book, you will be confident in creating sustainable and impactful charts, tables, reports, and dashboards with any kind of data using Microsoft Power BI.

What you will learn
- Build efficient data retrieval and transformation processes with the Power Query M language and dataflows
- Design scalable, user-friendly DirectQuery, import, and composite data models
- Create basic and advanced DAX measures
- Add ArcGIS Maps to create interesting data stories
- Build pixel-perfect paginated reports
- Discover the capabilities of Power BI mobile applications
- Manage and monitor a Power BI environment as a Power BI administrator
- Scale up a Power BI solution for an enterprise via Power BI Premium capacity

Who this book is for
Business Intelligence professionals and intermediate Power BI users looking to master Power BI for all their data visualization and dashboarding needs will find this book useful. An understanding of basic BI concepts is required, and some familiarity with Microsoft Power BI will be helpful to make the most out of this book.
Solve real-world business problems by learning how to create common industry key performance indicators and other calculations using DAX within Microsoft products such as Power BI, SQL Server, and Excel.

Key Features
- Learn to write sophisticated DAX queries to solve business intelligence and data analytics challenges
- Handle performance issues and optimization within the data model, DAX calculations, and more
- Solve business issues with Microsoft Excel, Power BI, and SQL Server using DAX queries

Book Description
DAX provides an extra edge by extracting key information from the data that is already present in your model. Filled with examples of practical, real-world calculations geared toward business metrics and key performance indicators, this cookbook features solutions that you can apply to your own business analysis needs. You'll learn to write various DAX expressions and functions to understand how DAX queries work. The book also covers sections on dates, time, and duration to help you deal with working days, time zones, and shifts. You'll then discover how to manipulate text and numbers to create dynamic titles and ranks, and deal with measure totals. Later, you'll explore common business metrics for finance, customers, employees, and projects. The book will also show you how to implement common industry metrics such as days of supply, mean time between failure, order cycle time, and overall equipment effectiveness. In the concluding chapters, you'll learn to apply statistical formulas for covariance, kurtosis, and skewness. Finally, you'll explore advanced DAX patterns for interpolation, inverse aggregators, inverse slicers, and even forecasting with a deseasonalized correlation coefficient. By the end of this book, you'll have the skills you need to use DAX's functionality and flexibility in business intelligence and data analytics.

What you will learn
- Understand how to create common calculations for dates, time, and duration
- Create key performance indicators (KPIs) and other business calculations
- Develop general DAX calculations that deal with text and numbers
- Discover new ideas and time-saving techniques for better calculations and models
- Perform advanced DAX calculations for solving statistical measures and other mathematical formulas
- Handle errors in DAX and learn how to debug DAX calculations
- Understand how to optimize your data models

Who this book is for
Business users, BI developers, data analysts, and SQL users who are looking for solutions to the challenges faced while solving analytical operations using DAX techniques and patterns will find this book useful. Basic knowledge of the DAX language and Microsoft services is mandatory.
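To make one of the industry metrics above concrete: "days of supply" is simply on-hand inventory divided by average daily demand. In the book this would be a DAX measure inside Power BI or Excel; the sketch below shows the same arithmetic in plain Python, with made-up numbers for illustration.

```python
# Hypothetical illustration of the "days of supply" metric: how many days
# current inventory will last at the recent average rate of demand.
# In the cookbook this is a DAX measure; the arithmetic is identical.

def days_of_supply(on_hand_units, daily_demand_history):
    """On-hand inventory divided by average daily demand."""
    avg_daily_demand = sum(daily_demand_history) / len(daily_demand_history)
    return on_hand_units / avg_daily_demand

# 900 units on hand, recent daily demand of 40-50 units per day
print(days_of_supply(900, [40, 45, 50, 45]))  # -> 20.0
```

The same division generalizes to the other metrics listed (mean time between failure, order cycle time): each is a total quantity divided by a count or rate over a chosen period.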
Learn how to bring your data to life with this hands-on guide to visual analytics with Tableau.

Key Features
- Master the fundamentals of Tableau Desktop and Tableau Prep
- Learn how to explore, analyze, and present data to provide business insights
- Build your experience and confidence with hands-on exercises and activities

Book Description
Learning Tableau has never been easier, thanks to this practical introduction to storytelling with data. The Tableau Workshop breaks down the analytical process into five steps: data preparation, data exploration, data analysis, interactivity, and distribution of dashboards. Each stage is addressed with a clear walkthrough of the key tools and techniques you'll need, as well as engaging real-world examples, meaningful data, and practical exercises to give you valuable hands-on experience. As you work through the book, you'll learn Tableau step by step, studying how to clean, shape, and combine data, as well as how to choose the most suitable charts for any given scenario. You'll load data from various sources and formats, perform data engineering to create new data that delivers deeper insights, and create interactive dashboards that engage end users. All concepts are introduced with clear, simple explanations and demonstrated through realistic example scenarios. You'll simulate real-world data science projects with use cases such as traffic violations, urban populations, coffee store sales, and air travel delays. By the end of this Tableau book, you'll have the skills and knowledge to confidently present analytical results and make data-driven decisions.

What you will learn
- Become an effective user of Tableau Prep and Tableau Desktop
- Load, combine, and process data for analysis and visualization
- Understand different types of charts and when to use them
- Perform calculations to engineer new data and unlock hidden insights
- Add interactivity to your visualizations to make them more engaging
- Create holistic dashboards that are detailed and user-friendly

Who this book is for
This book is for anyone who wants to get started with visual analytics in Tableau. If you're new to Tableau, this Workshop will get you up and running. If you already have some experience with Tableau, this book will help fill in any gaps, consolidate your understanding, and give you extra practice with key tools.
Load-bearing structures that are exposed to particular loads or exceptional forces, that stand out through the boldness of their construction or an unusual purpose, and finally those built from novel, not yet sufficiently researched materials, are subjected to an official load test immediately before being handed over for service, a test that is repeated at fixed intervals (periodic testing). Together with the results of a systematic inspection, this provides sufficient evidence for a sound judgement of the quality of the structure in question. Periodic deformation measurements on structures already in service, in turn, reveal defects in vital load-bearing members that may have arisen in the meantime and are difficult to detect visually; such defects must always be suspected when impermissibly large deformations occur in comparison with the results of previous tests. The importance and necessity of such diagnostically indispensable measurements was recognized early on. Misdiagnoses are not ruled out, however, especially when the various factors that decisively influence the measurement result are not correctly interpreted and weighed against one another. This is where the difficult and responsible work of the engineer entrusted with inspection and testing begins; he must therefore possess not only thorough knowledge and rich experience, but also a certain aptitude for technical diagnosis. Otherwise, as experience teaches, there is a well-founded fear that, for example, reconstructions will be ordered which, quite apart from their cost, are at best misguided, if not contrary to the structural system.
A practical guide to implementing a scalable and fast state-of-the-art analytical data estate.

Key Features
- Store and analyze data with enterprise-grade security and auditing
- Perform batch, streaming, and interactive analytics to optimize your big data solutions with ease
- Develop and run parallel data processing programs using real-world enterprise scenarios

Book Description
Azure Data Lake, the modern data warehouse architecture, and related data services on Azure enable organizations to build their own customized analytical platform to fit any analytical requirements in terms of volume, speed, and quality. This book is your guide to learning all the features and capabilities of Azure data services for storing, processing, and analyzing data (structured, unstructured, and semi-structured) of any size. You will explore key techniques for ingesting and storing data and perform batch, streaming, and interactive analytics. The book also shows you how to overcome various challenges and complexities relating to productivity and scaling. Next, you will be able to develop and run massive data workloads to perform different actions. Using a cloud-based setup that combines big data, a modern data warehouse, and analytics, you will also be able to build secure, scalable data estates for enterprises. Finally, you will not only learn how to develop a data warehouse but also understand how to build big data programs with enterprise-grade security and auditing. By the end of this Azure book, you will have learned how to develop a powerful and efficient analytical platform to meet enterprise needs.

What you will learn
- Implement data governance with Azure services
- Use integrated monitoring in the Azure portal and integrate Azure Data Lake Storage into Azure Monitor
- Explore the serverless feature for ad hoc data discovery, logical data warehousing, and data wrangling
- Implement networking with Synapse Analytics and Spark pools
- Create and run Spark jobs with Databricks clusters
- Implement streaming using Azure Functions, a serverless runtime environment on Azure
- Explore the predefined ML services in Azure and use them in your app

Who this book is for
This book is for data architects, ETL developers, and anyone who wants to get well-versed with Azure data services to implement an analytical data estate for their enterprise. The book will also appeal to data scientists and data analysts who want to explore all the capabilities of Azure data services, which can be used to store, process, and analyze any kind of data. A beginner-level understanding of data analysis and streaming is required.
Discover the true power of DAX and build advanced DAX solutions for practical business scenarios.

Key Features
- Solve complex business problems within Microsoft BI tools, including Power BI, SQL Server, and Excel
- Develop a conceptual understanding of critical business data modeling principles
- Learn the subtleties of Power BI data visualizations, evaluation context, context transition, and filtering

Book Description
This book helps business analysts generate powerful and sophisticated analyses from their data using DAX and get the most out of Microsoft Business Intelligence tools. Extreme DAX will first teach you the principles of business intelligence, good model design, and how DAX fits into it all. Then, you'll launch into detailed examples of DAX in real-world business scenarios such as inventory calculations, forecasting, intercompany business, and data security. At each step, senior DAX experts will walk you through the subtleties involved in working with Power BI models and common mistakes to look out for as you build advanced data aggregations. You'll deepen your understanding of DAX functions, filters, and measures, and how and when they can be used to derive effective insights. You'll also be provided with PBIX files for each chapter, so that you can follow along and explore in your own time.

What you will learn
- Understand data modeling concepts and structures before you start working with DAX
- Grasp how relationships in Power BI models are different from those in RDBMSes
- Secure aggregation levels, attributes, and hierarchies using PATH functions and row-level security
- Get to grips with the crucial concept of context
- Apply advanced context and filtering functions, including TREATAS, GENERATE, and SUMMARIZE
- Explore dynamically changing visualizations with helper tables and dynamic labels and axes
- Work with week-based calendars and understand standard time intelligence
- Evaluate investments intelligently with the XNPV and XIRR financial DAX functions

Who this book is for
Extreme DAX is written for analysts with a working knowledge of DAX in Power BI or other Microsoft analytics tools. It will help you upgrade your knowledge and work with analytical models more effectively, so you'll need practical experience with DAX before you can get started.
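Of the financial functions listed above, XNPV has a compact definition worth seeing outside of DAX: it discounts each dated cash flow by (1 + rate) raised to the number of days since the first date, divided by 365. The sketch below reproduces that formula in plain Python; the dates and amounts are invented for illustration and are not from the book.

```python
# A plain-Python sketch of the calculation behind DAX's XNPV: net present
# value of irregularly dated cash flows, each discounted by
# (1 + rate) ** (days since the valuation date / 365).
from datetime import date

def xnpv(rate, cashflows):
    """cashflows: list of (date, amount); the first date is the valuation date."""
    d0 = cashflows[0][0]
    return sum(a / (1 + rate) ** ((d - d0).days / 365) for d, a in cashflows)

flows = [(date(2023, 1, 1), -1000.0),   # initial investment
         (date(2023, 7, 1), 600.0),     # first return, half a year later
         (date(2024, 1, 1), 600.0)]     # second return, one year later

print(round(xnpv(0.10, flows), 2))      # NPV at a 10% annual discount rate
```

XIRR is then just the rate at which this function returns zero, found numerically, which is why the two functions are usually discussed together.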
Implement business intelligence (BI), data modeling, and data analytics within Microsoft products such as Power BI, SQL Server, and Excel.

Key Features
- Understand the ins and outs of DAX expressions and querying functions with the help of easy-to-follow examples
- Manipulate data of varying complexity and optimize BI workflows to extract key insights
- Create, monitor, and improve the performance of models by writing clean and robust DAX queries

Book Description
Data Analysis Expressions (DAX) is known for its ability to increase efficiency by extracting new information from data that is already present in your model. With this book, you'll learn to use DAX's functionality and flexibility in the BI and data analytics domains. You'll start by learning the basics of DAX, along with understanding the importance of good data models and how to write efficient DAX formulas by using variables and formatting styles. You'll then explore how DAX queries work with the help of examples. The book will guide you through optimizing the BI workflow by writing powerful DAX queries. Next, you'll learn to manipulate and load data of varying complexity within Microsoft products such as Power BI, SQL Server, and Excel Power Pivot. You'll then discover how to build and extend your data models to gain additional insights, before covering progressive DAX syntax and functions to understand complex relationships in DAX. Later, you'll focus on important DAX functions, specifically those related to tables, date and time, filtering, and statistics. Finally, you'll delve into advanced topics such as how the formula and storage engines work to optimize queries. By the end of this book, you'll have gained hands-on experience in employing DAX to enhance your data models by extracting new information and gaining deeper insights.

What you will learn
- Understand DAX, from the basics through to advanced topics, and learn to build effective data models
- Write and use DAX functions and expressions with the help of hands-on examples
- Discover how to handle errors in your DAX code and avoid unwanted results
- Load data into a data model using Power BI, Excel Power Pivot, and SSAS Tabular
- Cover DAX functions such as date, time, and time intelligence using code examples
- Gain insights into data by using DAX to create new information
- Understand the DAX VertiPaq engine and how it can help you optimize data models

Who this book is for
This book is for data analysts, business analysts, BI developers, and SQL users who want to make the best use of DAX in the BI and data analytics domain with the help of examples. Some understanding of BI concepts is mandatory to fully understand the concepts covered in the book.
Discover techniques to summarize the characteristics of your data using PyPlot, NumPy, SciPy, and pandas.

Key Features
- Understand the fundamental concepts of exploratory data analysis using Python
- Find missing values in your data and identify the correlation between different variables
- Practice graphical exploratory analysis techniques using Matplotlib and the Seaborn Python package

Book Description
Exploratory Data Analysis (EDA) is an approach to data analysis that involves the application of diverse techniques to gain insights into a dataset. This book will help you gain practical knowledge of the main pillars of EDA: data cleaning, data preparation, data exploration, and data visualization. You'll start by performing EDA on open source datasets, moving from simple to advanced analyses to turn data into meaningful insights. You'll then learn various descriptive statistical techniques to describe the basic characteristics of data and progress to performing EDA on time-series data. As you advance, you'll learn how to implement EDA techniques for model development and evaluation and build predictive models to visualize results. Using Python for data analysis, you'll work with real-world datasets, understand data, summarize its characteristics, and visualize it for business intelligence. By the end of this EDA book, you'll have developed the skills required to carry out a preliminary investigation on any dataset, yield insights into data, present your results with visual aids, and build a model that correctly predicts future outcomes.

What you will learn
- Import, clean, and explore data to perform preliminary analysis using powerful Python packages
- Identify and transform erroneous data using different data wrangling techniques
- Explore the use of multiple regression to describe non-linear relationships
- Discover hypothesis testing and explore techniques of time-series analysis
- Understand and interpret results obtained from graphical analysis
- Build, train, and optimize predictive models to estimate results
- Perform complex EDA techniques on open source datasets

Who this book is for
This EDA book is for anyone interested in data analysis, especially students, statisticians, data analysts, and data scientists. The practical concepts presented in this book can be applied in various disciplines to enhance decision-making processes with data analysis and synthesis. Fundamental knowledge of Python programming and statistical concepts is all you need to get started with this book.
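Two of the EDA steps named above, finding missing values and checking correlation between variables, can be sketched in a few lines. The book works with pandas, NumPy, SciPy, and Matplotlib; this minimal stand-in uses only the standard library, and the toy records are made up for illustration.

```python
# A minimal sketch of two EDA steps: counting missing values per column,
# then computing the Pearson correlation between two variables on the
# complete rows only. (The book does this with pandas/SciPy.)
from statistics import mean

records = [
    {"price": 10.0, "units": 200},
    {"price": 12.0, "units": 180},
    {"price": 15.0, "units": None},   # a missing value
    {"price": 20.0, "units": 120},
]

# Step 1: missing-value count per column
missing = {col: sum(r[col] is None for r in records) for col in records[0]}
print(missing)  # {'price': 0, 'units': 1}

# Step 2: Pearson correlation, dropping the incomplete row
pairs = [(r["price"], r["units"]) for r in records if r["units"] is not None]
xs, ys = zip(*pairs)
mx, my = mean(xs), mean(ys)
cov = sum((x - mx) * (y - my) for x, y in pairs)
sx = sum((x - mx) ** 2 for x in xs) ** 0.5
sy = sum((y - my) ** 2 for y in ys) ** 0.5
corr = cov / (sx * sy)
print(round(corr, 3))  # strongly negative: price up, units down
```

The strongly negative coefficient is exactly the kind of preliminary signal EDA is meant to surface before any modeling begins.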
Every day, more and more kinds of historical data become available,
opening exciting new avenues of inquiry but also new challenges.
This updated and expanded book describes and demonstrates the ways
these data can be explored to construct cultural heritage
knowledge, for research and in teaching and learning. It helps
humanities scholars to grasp Big Data in order to do their work,
whether that means understanding the underlying algorithms at work
in search engines or designing and using their own tools to process
large amounts of information. Demonstrating what digital tools have
to offer and also what 'digital' does to how we understand the
past, the authors introduce the many different tools and developing
approaches in Big Data for historical and humanistic scholarship,
show how to use them, what to be wary of, and discuss the kinds of
questions and new perspectives this new macroscopic perspective
opens up. Originally authored 'live' online with ongoing feedback
from the wider digital history community, Exploring Big Historical
Data breaks new ground and sets the direction for the conversation
into the future. Exploring Big Historical Data should be the go-to
resource for undergraduate and graduate students confronted by a
vast corpus of data, and researchers encountering these methods for
the first time. It will also offer a helping hand to the interested
individual seeking to make sense of genealogical data or digitized
newspapers, and even the local historical society that is trying
see the value in digitizing their holdings.
Gain expert guidance on how to successfully develop machine learning models in Python and build your own unique data platforms.

Key Features
- Gain a full understanding of the model production and deployment process
- Build your first machine learning model in just five minutes and get hands-on machine learning experience
- Understand how to deal with common challenges in data science projects

Book Description
Where there's data, there's insight. With so much data being generated, there is immense scope to extract meaningful information that'll boost business productivity and profitability. By learning to convert raw data into game-changing insights, you'll open new career paths and opportunities. The Data Science Workshop begins by introducing different types of projects and showing you how to incorporate machine learning algorithms in them. You'll learn to select a relevant metric and even assess the performance of your model. To tune the hyperparameters of an algorithm and improve its accuracy, you'll get hands-on with approaches such as grid search and random search. Next, you'll learn dimensionality reduction techniques to easily handle many variables at once, before exploring how to use model ensembling techniques and create new features to enhance model performance. To help you automatically create new features that improve your model, the book demonstrates how to use an automated feature engineering tool. You'll also understand how to use orchestration and scheduling workflows to deploy machine learning models in batch. By the end of this book, you'll have the skills to start working on data science projects confidently.

What you will learn
- Explore the key differences between supervised learning and unsupervised learning
- Manipulate and analyze data using the scikit-learn and pandas libraries
- Understand key concepts such as regression, classification, and clustering
- Discover advanced techniques to improve the accuracy of your model
- Understand how to speed up the process of adding new features
- Simplify your machine learning workflow for production

Who this book is for
This is one of the most useful data science books for aspiring data analysts, data scientists, database engineers, and business analysts. It is aimed at those who want to kick-start their careers in data science by quickly learning data science techniques without going through all the mathematics behind machine learning algorithms. Basic knowledge of the Python programming language will help you easily grasp the concepts explained in this book.
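The grid search mentioned in the description has a simple core: try every combination of candidate hyperparameter values and keep the best-scoring one. The book uses scikit-learn's GridSearchCV; this standard-library sketch substitutes a made-up scoring function for the "train and validate a model" step, purely to show the mechanics.

```python
# Grid search sketched with the standard library: exhaustively evaluate
# every combination of two hypothetical hyperparameters and keep the best.
from itertools import product

def toy_score(lr, reg):
    # Stand-in for "train a model, measure validation accuracy".
    # This invented objective peaks at lr=0.1, reg=0.01.
    return 1.0 - abs(lr - 0.1) - abs(reg - 0.01)

grid = {"lr": [0.01, 0.1, 1.0], "reg": [0.001, 0.01, 0.1]}

best_params, best_score = None, float("-inf")
for lr, reg in product(grid["lr"], grid["reg"]):
    score = toy_score(lr, reg)
    if score > best_score:
        best_params, best_score = {"lr": lr, "reg": reg}, score

print(best_params)  # {'lr': 0.1, 'reg': 0.01}
```

Random search, the other approach the book covers, replaces the exhaustive `product` loop with a fixed number of randomly sampled combinations, which scales better when the grid is large.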
Discover how to describe your data in detail, identify data issues, and find out how to solve them using commonly used techniques and tips and tricks.

Key Features
- Get well-versed with various data cleaning techniques to reveal key insights
- Manipulate data of different complexities to shape it into the right form as per your business needs
- Clean, monitor, and validate large data volumes to diagnose problems before moving on to data analysis

Book Description
Getting clean data to reveal insights is essential, as directly jumping into data analysis without proper data cleaning may lead to incorrect results. This book shows you tools and techniques that you can apply to clean and handle data with Python. You'll begin by getting familiar with the shape of data by using practices that can be deployed routinely with most data sources. Then, the book teaches you how to manipulate data to get it into a useful form. You'll also learn how to filter and summarize data to gain insights and better understand what makes sense and what does not, along with discovering how to operate on data to address the issues you've identified. Moving on, you'll perform key tasks such as handling missing values, validating errors, removing duplicate data, monitoring high volumes of data, and handling outliers and invalid dates. Next, you'll cover recipes on using supervised learning and Naive Bayes analysis to identify unexpected values and classification errors, and generate visualizations for exploratory data analysis (EDA) to visualize unexpected values. Finally, you'll build functions and classes that you can reuse without modification when you have new data. By the end of this Python book, you'll be equipped with all the key skills that you need to clean data and diagnose problems within it.

What you will learn
- Find out how to read and analyze data from a variety of sources
- Produce summaries of the attributes of data frames, columns, and rows
- Filter data and select columns of interest that satisfy given criteria
- Address messy data issues, including working with dates and missing values
- Improve your productivity in Python pandas by using method chaining
- Use visualizations to gain additional insights and identify potential data issues
- Enhance your ability to learn what is going on in your data
- Build user-defined functions and classes to automate data cleaning
 
Who this book is for
This book is for anyone looking for ways to handle messy, duplicate, and poor data using different Python tools and techniques. The book takes a recipe-based approach to help you learn how to clean and manage data. Working knowledge of Python programming is all you need to get the most out of the book.
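Three of the cleaning tasks listed above, removing duplicates, catching invalid dates, and filling missing values, can be compressed into a short sketch. The book works in pandas; this stand-in uses only the standard library, and the rows are invented for illustration.

```python
# A compressed sketch of three cleaning recipes: drop exact duplicates,
# null out invalid dates, and fill missing values with the column mean.
from datetime import datetime

rows = [
    {"id": 1, "visit": "2021-03-05", "score": 7},
    {"id": 1, "visit": "2021-03-05", "score": 7},    # exact duplicate
    {"id": 2, "visit": "2021-13-40", "score": None}, # bad date, missing score
    {"id": 3, "visit": "2021-04-01", "score": 9},
]

def valid_date(s):
    try:
        datetime.strptime(s, "%Y-%m-%d")
        return True
    except ValueError:
        return False

# 1. Drop exact duplicates, preserving row order
seen, deduped = set(), []
for r in rows:
    key = tuple(sorted(r.items()))
    if key not in seen:
        seen.add(key)
        deduped.append(r)

# 2. Null out invalid dates so they must be handled explicitly later
for r in deduped:
    if not valid_date(r["visit"]):
        r["visit"] = None

# 3. Fill missing scores with the mean of the observed scores
observed = [r["score"] for r in deduped if r["score"] is not None]
fill = sum(observed) / len(observed)
for r in deduped:
    if r["score"] is None:
        r["score"] = fill

print(len(deduped), deduped[1])  # three rows remain; row 2 is cleaned
```

Nulling invalid values before imputing, rather than silently coercing them, mirrors the book's emphasis on diagnosing problems before fixing them.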
Understand the complexities of modern-day data engineering platforms and explore strategies to deal with them, with the help of use case scenarios led by an industry expert in big data.

Key Features
- Become well-versed with the core concepts of Apache Spark and Delta Lake for building data platforms
- Learn how to ingest, process, and analyze data that can later be used for training machine learning models
- Understand how to operationalize data models in production using curated data

Book Description
In the world of ever-changing data and schemas, it is important to build data pipelines that can auto-adjust to changes. This book will help you build scalable data platforms that managers, data scientists, and data analysts can rely on. Starting with an introduction to data engineering, along with its key concepts and architectures, this book will show you how to use Microsoft Azure cloud services effectively for data engineering. You'll cover data lake design patterns and the different stages through which data needs to flow in a typical data lake. Once you've explored the main features of Delta Lake to build data lakes with fast performance and governance in mind, you'll advance to implementing the lambda architecture using Delta Lake. Packed with practical examples and code snippets, this book takes you through real-world examples based on production scenarios faced by the author in his 10 years of experience working with big data. Finally, you'll cover data lake deployment strategies that play an important role in provisioning cloud resources and deploying data pipelines in a repeatable and continuous way. By the end of this data engineering book, you'll know how to effectively deal with ever-changing data and create scalable data pipelines to streamline data science, ML, and artificial intelligence (AI) tasks.

What you will learn
- Discover the challenges you may face in the data engineering world
- Add ACID transactions to Apache Spark using Delta Lake
- Understand effective design strategies to build enterprise-grade data lakes
- Explore architectural and design patterns for building efficient data ingestion pipelines
- Orchestrate a data pipeline for preprocessing data using Apache Spark and Delta Lake APIs
- Automate deployment and monitoring of data pipelines in production
- Get to grips with securing, monitoring, and managing data pipelines and models efficiently

Who this book is for
This book is for aspiring data engineers and data analysts who are new to the world of data engineering and are looking for a practical guide to building scalable data platforms. If you already work with PySpark and want to use Delta Lake for data engineering, you'll find this book useful. Basic knowledge of Python, Spark, and SQL is expected.
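The "auto-adjust to changes" idea in the description is worth making concrete: when a new batch arrives with extra or missing columns, a resilient pipeline widens its schema instead of failing. Delta Lake offers this as schema evolution; everything below is a deliberately simplified plain-Python stand-in, not Delta Lake code, with made-up batches.

```python
# A toy model of schema-drift-tolerant ingestion: merge batches with
# drifting schemas into one table, widening the schema as new columns
# appear and backfilling earlier rows with None.

def ingest(batches):
    """Merge batches (lists of row dicts) into one table with a union schema."""
    schema, table = [], []
    for batch in batches:
        for row in batch:                   # widen schema with unseen columns
            for col in row:
                if col not in schema:
                    schema.append(col)
        for row in batch:
            table.append({col: row.get(col) for col in schema})
    # backfill rows ingested before a column existed
    table = [{col: row.get(col) for col in schema} for row in table]
    return schema, table

day1 = [{"id": 1, "amount": 10.0}]
day2 = [{"id": 2, "amount": 12.5, "currency": "EUR"}]  # a new column appears
schema, table = ingest([day1, day2])
print(schema)    # ['id', 'amount', 'currency']
print(table[0])  # {'id': 1, 'amount': 10.0, 'currency': None}
```

Real platforms add the pieces this sketch omits, transactional writes, type checking, and rejection of incompatible changes, which is precisely what Delta Lake's ACID layer on top of Spark provides.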
Comprehensive recipes to give you valuable insights on Transformers, Reinforcement Learning, and more.

Key Features
- Deep Learning solutions from Kaggle Masters and Google Developer Experts
- Get to grips with the fundamentals, including variables, matrices, and data sources
- Learn advanced techniques to make your algorithms faster and more accurate

Book Description
The independent recipes in Machine Learning Using TensorFlow Cookbook will teach you how to perform complex data computations and gain valuable insights into your data. Dive into recipes on training models, model evaluation, sentiment analysis, regression analysis, artificial neural networks, and deep learning, each using Google's machine learning library, TensorFlow. This cookbook covers the fundamentals of the TensorFlow library, including variables, matrices, and various data sources. You'll discover real-world implementations of Keras and TensorFlow and learn how to use estimators to train linear models and boosted trees, both for classification and regression. Explore the practical applications of a variety of deep learning architectures, such as recurrent neural networks and Transformers, and see how they can be used to solve computer vision and natural language processing (NLP) problems. With the help of this book, you will be proficient in using TensorFlow, understand deep learning from the basics, and be able to implement machine learning algorithms in real-world scenarios.

What you will learn
- Take TensorFlow into production
- Implement and fine-tune Transformer models for various NLP tasks
- Apply reinforcement learning algorithms using the TF-Agents framework
- Understand linear regression techniques and use Estimators to train linear models
- Execute neural networks and improve predictions on tabular data
- Master convolutional neural networks and recurrent neural networks through practical recipes

Who this book is for
If you are a data scientist or a machine learning engineer, and you want to skip detailed theoretical explanations in favor of building production-ready machine learning models using TensorFlow, this book is for you. Basic familiarity with Python, linear algebra, statistics, and machine learning is necessary to make the most out of this book.
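The linear-regression recipes mentioned above reduce to one core idea: fit y = w*x + b by repeatedly nudging w and b down the gradient of the mean squared error. The book does this with TensorFlow and Estimators; the sketch below shows the same loop in plain Python on synthetic, noise-free data, purely to expose the mechanics.

```python
# Fitting y = w*x + b by gradient descent on mean squared error.
# Synthetic data generated from y = 2x + 1; a stand-in for the
# TensorFlow-based recipes, not code from the book.

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]   # exactly y = 2x + 1

w, b, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    # gradients of mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # converges close to 2.0 and 1.0
```

A TensorFlow version swaps the hand-written gradients for automatic differentiation and an optimizer object, but the update rule it applies each step is the same.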