Filled with practical, step-by-step instructions and clear
explanations for the most important and useful tasks, this is a
practical guide with easy-to-follow recipes that helps developers
quickly and effectively collect data from disparate sources such as
databases, files, and applications, and turn that data into a
unified format that is accessible and relevant to end users. It is
suitable for any IT professional working on PDI, and serves as
valid support either for learning how to use the command line tools
efficiently or for going deeper into some aspects of those tools to
help you work better.
Until recently, many people thought big data was a passing fad.
"Data science" was an enigmatic term. Today, big data is taken
seriously, and data science is considered downright sexy. With this
anthology of reports from award-winning journalist Mike Barlow,
you'll appreciate how data science is fundamentally altering our
world, for better and for worse. Barlow paints a picture of the
emerging data space in broad strokes. From new techniques and tools
to the use of data for social good, you'll find out how far data
science reaches. With this anthology, you'll learn how: * Analysts
can now get results from their data queries in near real time *
Indie manufacturers are blurring the lines between hardware and
software * Companies try to balance their desire for rapid
innovation with the need to tighten data security * Advanced
analytics and low-cost sensors are transforming equipment
maintenance from a cost center to a profit center * CIOs have
gradually evolved from order takers to business innovators * New
analytics tools let businesses go beyond data analysis and straight
to decision-making. Mike Barlow is an
award-winning journalist, author, and communications strategy
consultant. Since launching his own firm, Cumulus Partners, he has
represented major organizations in a number of industries.
Big Data Imperatives focuses on resolving the key questions on
everyone's mind: Which data matters? Do you have enough data volume
to justify the usage? How do you want to process this amount of
data? How long do you really need to keep it active for your
analysis, marketing, and BI applications? Big data is emerging from
the realm of one-off projects to mainstream business adoption;
however, the real value of big data is not in its overwhelming
size, but in its effective use. Your goal may be to obtain insight
from voluminous data, with billions of loosely structured bytes of
data coming from different channels spread across different
locations, which needs to be processed until the needle in the
haystack is found. This book addresses the following big data
characteristics: * Very large, distributed aggregations of loosely
structured data -- often incomplete and inaccessible *
Petabytes/exabytes of data * Millions/billions of people
providing/contributing to the context behind the data * Flat
schemas with few complex interrelationships * Involves
time-stamped events * Made up of incomplete data * Includes
connections between data elements that must be probabilistically
inferred Big Data Imperatives explains what big data can do: it can
batch-process millions and billions of records, both unstructured
and structured, much faster and more cheaply. Big data analytics
provide a platform to merge all analyses, which enables data
analysis to be more accurate, well-rounded, reliable, and focused
on a specific business capability. Big Data Imperatives describes
the complementary nature of traditional data warehouses and
big-data analytics platforms and how they feed each other. This
book aims to bring the big data and analytics realms together with
a greater focus on architectures that leverage the scale and power
of big data and the ability to integrate and apply analytics
principles to data that was previously inaccessible. This book can
also be used as a handbook for practitioners, helping them with
methodology, technical architecture, analytics techniques, and best
practices. At the same time, this book intends to hold the interest
of those new to big data and analytics by giving them a deep
insight into the realm of big data. What you'll learn * Understand
the technology and implementation of big data platforms and their
usage for analytics * Big data architectures * Big data design
patterns * Implementation best practices Who this book is for This
book is designed for IT professionals, data warehousing and
business intelligence professionals, data analysis professionals,
architects, developers, and business users.
The Semantic Web - ISWC 2010
- 9th International Semantic Web Conference, ISWC 2010, Shanghai, China, November 7-11, 2010, Revised Selected Papers, Part I
(Paperback, 2011 ed.)
Peter F. Patel-Schneider, Yue Pan, Pascal Hitzler, Peter Mika, Lei Zhang, …
The two-volume set LNCS 6496 and 6497 constitutes the refereed
proceedings of the 9th International Semantic Web Conference, ISWC
2010, held in Shanghai, China, during November 7-11, 2010. Part I
contains 51 papers out of 578 submissions to the research track.
Part II contains 18 papers out of 66 submissions to the semantic
Web in-use track, 6 papers out of 26 submissions to the doctoral
consortium track, as well as 4 invited talks. Each submitted paper
was carefully reviewed. The International Semantic Web Conferences
(ISWC) constitute the major international venue where the latest
research results and technical innovations on all aspects of the
Semantic Web are presented. ISWC brings together researchers,
practitioners, and users from the areas of artificial intelligence,
databases, social networks, distributed computing, Web engineering,
information systems, natural language processing, soft computing,
and human computer interaction to discuss the major challenges and
proposed solutions, the success stories and failures, as well as the
visions that can advance research and drive innovation in the
Semantic Web.
Data warehousing and knowledge discovery are increasingly becoming
mission-critical technologies for most organizations, both
commercial and public, as it becomes increasingly important to
derive important knowledge from both internal and external data
sources. With the ever growing amount and complexity of the data
and information available for decision making, the process of data
integration, analysis, and knowledge discovery continues to meet
new challenges, leading to a wealth of new and exciting research
challenges within the area. Over the last decade, the International
Conference on Data Warehousing and Knowledge Discovery (DaWaK) has
established itself as one of the most important international
scientific events within data warehousing and knowledge discovery.
DaWaK brings together a wide range of researchers and practitioners
working on these topics. The DaWaK conference series thus serves as
a leading forum for discussing novel research results and
experiences within data warehousing and knowledge discovery. This
year's conference, the 11th International Conference on Data
Warehousing and Knowledge Discovery (DaWaK 2009), continued the
tradition by disseminating and discussing innovative models,
methods, algorithms, and solutions to the challenges faced by data
warehousing and knowledge discovery technologies.
The final edition of the incomparable data warehousing and business
intelligence reference, updated and expanded The Kimball Group
Reader, Remastered Collection is the essential reference for data
warehouse and business intelligence design, packed with best
practices, design tips, and valuable insight from industry pioneer
Ralph Kimball and the Kimball Group. This Remastered Collection
represents decades of expert advice and mentoring in data
warehousing and business intelligence, and is the final work to be
published by the Kimball Group. Organized for quick navigation and
easy reference, this book contains nearly 20 years of experience on
more than 300 topics, all fully up-to-date and expanded with 65 new
articles. The discussion covers the complete data
warehouse/business intelligence lifecycle, including project
planning, requirements gathering, system architecture, dimensional
modeling, ETL, and business intelligence analytics, with each group
of articles prefaced by original commentaries explaining their role
in the overall Kimball Group methodology. The data warehousing/business
intelligence industry's current multi-billion-dollar value is due
in no small part to the contributions of Ralph Kimball and the
Kimball Group. Their publications are the standards on which the
industry is built, and nearly all data warehouse hardware and
software vendors have adopted their methods in one form or another.
This book is a compendium of Kimball Group expertise, and an
essential reference for anyone in the field. * Learn data
warehousing and business intelligence from the field's pioneers *
Get up to date on best practices and essential design tips * Gain
valuable knowledge on every stage of the project lifecycle * Dig
into the Kimball Group methodology with hands-on guidance Ralph
Kimball and the Kimball Group have continued to refine their
methods and techniques based on thousands of hours of consulting
and training. This Remastered Collection of The Kimball Group
Reader represents their final body of knowledge, and is nothing
less than a vital reference for anyone involved in the field.
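To make the dimensional-modeling vocabulary above concrete, here is a
minimal, illustrative sketch in Python of splitting operational
records into a dimension table with surrogate keys and a fact table
that references it; the table and column names are invented for the
example, not taken from the book.

```python
# A minimal sketch of Kimball-style dimensional modeling with pandas.
# Table and column names are illustrative, not from the book.
import pandas as pd

# Source (operational) records: one row per sale event.
sales = pd.DataFrame({
    "order_date": ["2024-01-05", "2024-01-05", "2024-01-06"],
    "product":    ["Widget", "Gadget", "Widget"],
    "amount":     [19.99, 34.50, 19.99],
})

# Dimension table: one row per distinct product, with a surrogate key.
dim_product = (sales[["product"]].drop_duplicates()
               .reset_index(drop=True))
dim_product["product_key"] = dim_product.index + 1

# Fact table: measures plus a foreign key into the dimension.
fact_sales = sales.merge(dim_product, on="product")[
    ["order_date", "product_key", "amount"]]

print(dim_product)
print(fact_sales)
```

Surrogate keys decouple the fact table from volatile natural keys,
which is the central design choice dimensional modeling builds on.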
Business intelligence (BI) used to be so simple -- in theory
anyway. Integrate and copy data from your transactional systems
into a specialised relational database, apply BI reporting and
query tools and add business users. Job done. No longer. Analytics,
big data and an array of diverse technologies have changed
everything. More importantly, business is insisting on ever more,
ever faster from information and from IT in general. An emerging
biz-tech ecosystem demands that business and IT work together. This
book reflects the new reality that in today's socially complex and
rapidly changing world, business decisions must be based on a
combination of rational and intuitive thinking. Integrating cues
from diverse information sources and tacit knowledge, decision
makers create unique meaning to innovate heuristically at the speed
of thought. This book provides a wealth of new models that business
and IT can use together to design support systems for tomorrow's
successful organisations. Dr Barry Devlin, one of the earliest
proponents of data warehousing, goes back to basics to explore how
the modern trinity of information, process and people must be
reinvented and restructured to deliver the value, insight and
innovation required by modern businesses. From here, he develops a
series of novel architectural models that provide a new foundation
for holistic information use across the entire business. From
discovery to analysis and from decision making to action taking, he
defines a fully integrated, closed-loop business environment.
Covering every aspect of business analytics, big data,
collaborative working and more, this book takes over where BI ends
to deliver the definitive framework for information use in the
coming years.
Learn data architecture essentials and prepare for the Salesforce
Certified Data Architect exam with the help of tips and mock test
questions Key Features * Leverage data modelling, Salesforce
database design, and techniques for effective data design * Learn
about master data management, Salesforce data management, and the
considerations involved * Get to grips with large data volumes,
performance tuning, and poor-performance mitigation techniques Book
Description The Salesforce Data Architect exam is a prerequisite for
the Application Architect half of the Salesforce Certified
Technical Architect credential. This book offers complete,
up-to-date coverage of the Salesforce Data Architect exam so you
can take it with confidence. The book is written in a clear,
succinct way with self-assessment and practice exam questions,
covering all topics necessary to help you pass the exam with ease.
You'll understand the theory around Salesforce data modeling,
database design, master data management (MDM), Salesforce data
management (SDM), and data governance. Additionally, performance
considerations associated with large data volumes will be covered.
You'll also get to grips with data migration and understand the
supporting theory needed to achieve Salesforce Data Architect
certification. By the end of this Salesforce book, you'll have
covered everything you need to pass the Salesforce Data Architect
certification exam and have a handy, on-the-job desktop reference
guide to revisit the concepts. What you will learn * Understand
the topics relevant to passing the Data Architect exam * Explore
specialist areas such as large data volumes * Test your knowledge
with the help of exam-like questions * Pick up useful tips and
tricks that can be referred to time and again * Understand the
reasons underlying the way Salesforce data management works *
Discover the techniques that are available for loading massive
amounts of data Who This Book Is For This book is for both aspiring
Salesforce data architects and those already familiar with
Salesforce data architecture who want to pass the exam and have a
reference guide to revisit the material as part of their day-to-day
job. Working knowledge of the Salesforce platform is assumed,
alongside a clear understanding of Salesforce architectural
concepts.
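As a hedged illustration of the kind of large-data-volume loading the
exam covers, the sketch below uses the third-party simple_salesforce
library's Bulk API wrapper; the credentials, object, fields, and
batch size are placeholders, and the book itself may use different
tooling.

```python
# A hedged sketch of loading a large data volume via the Bulk API,
# using the third-party simple_salesforce library (not from the book);
# credentials and object/field names are placeholders.
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com",
                password="password",
                security_token="token")

# Batch records so the Bulk API can process them asynchronously.
records = [{"LastName": f"Test{i}", "Email": f"t{i}@example.com"}
           for i in range(50_000)]
results = sf.bulk.Contact.insert(records, batch_size=10_000)
print(sum(r["success"] for r in results), "rows inserted")
```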
In the future, competitive advantage will be gained only by those
companies that succeed in turning information into knowledge.
Against this background, the two worlds of business intelligence
and knowledge management are converging. The editor, head of the
Institute for Management Information Systems and the Institute for
Knowledge Management, demonstrates in this book the growing
integration of the two fields. The book thus brings transparency to
one of the largest IT growth markets. Several studies, for example
by the Fraunhofer Institute, examine the relevant market and
provide important guidance. A wealth of examples shows the benefits
that the use of highly developed analysis tools and the development
of knowledge management solutions already deliver today. The
extensive list of vendors is also very helpful for practitioners,
and the integrated glossary offers a quick overview of the most
important KM and BI terms.
Data warehousing has been a central topic in many industries for
several years. The initial euphoria, however, obscured the fact
that proven methods and process models for practical implementation
were lacking. This book contributes to closing this gap between
aspiration and reality. In its first part, it provides an overview
of current results in the field of data warehousing, with a focus
on methodological and business aspects. It includes contributions
on cost-effectiveness analysis, the organizational embedding of
data warehousing, data quality management, integrated metadata
management, and data protection law, as well as a contribution on
possible future directions of data warehousing. In the second part,
project leaders of large-scale data warehousing projects report on
their experiences and best practices.
Solutions for top management: this book presents the possible uses
of data warehouse concepts specifically for decision makers. In
addition to the fundamentals, it describes above all the areas of
application, available solutions, and practical experience.
Management, particularly in the consumer goods industry and retail,
is thus enabled to make the optimal decision for its own company.
Develop the must-have skills required for any data scientist to get
the best results from Azure Databricks. Key Features * Learn to
develop and productionize ML pipelines using the Databricks Unified
Analytics platform * See how to use AutoML, Feature Stores, and
MLOps with Databricks * Get a complete understanding of data
governance and model deployment Book Description In this book,
you'll get to grips with Databricks, enabling you to power up your
organization's data science applications. We'll walk through
applying the Databricks AI and ML stack to real-world use cases for
natural language processing, computer vision, time series data, and
more. We'll dive deep into the complete model development life
cycle for data ingestion and analysis, and get familiar with the
latest offerings of AutoML, Feature Store, and MLStudio, on the
Databricks platform. You'll get hands-on experience implementing
repeatable ML operations (MLOps) pipelines using MLflow, track model
training and key metrics, and explore real-time ML, anomaly
detection, and streaming analytics with Delta Lake and Spark
Structured Streaming. Starting with an overview of Data Science use
cases across different organizations and industries, you will then
be introduced to feature stores, feature tables, and how to access
them. You will see why AutoML is important and how to create a
baseline model with AutoML within Databricks. Utilizing the MLflow
model registry to manage model versioning and transition to
production will be covered, along with detecting and protecting
against model drift in production environments. By the end of the
book, you will know how to set up your Databricks ML development
and deployment as a CI/CD pipeline. What you will learn * Perform
natural language processing, computer vision, and more * Explore
AutoML, Feature Store, and MLStudio on Databricks * Dive deep into
the complete model development life cycle * Experience implementing
repeatable MLOps pipelines using MLflow * Track model training and
key metrics * Explore real-time ML, anomaly detection, and
streaming analytics * Learn how to handle model drift Who This Book
Is For In this book we are going to specifically focus on the tools
catering to the Data Scientist persona. Readers who want to learn
how to successfully build and deploy end-to-end Data Science projects
using the Databricks cloud-agnostic unified analytics platform will
benefit from this book, along with AI and Machine Learning
practitioners.
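As an illustrative sketch of the MLflow tracking and model-registry
workflow the blurb mentions, the snippet below logs a metric and
registers a model; the dataset, run name, and registry model name
are assumptions for the example, and it works against a local MLflow
server or a Databricks workspace.

```python
# A minimal sketch of MLflow tracking and model registration; the
# dataset, run name, and registry name are illustrative assumptions.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True)

with mlflow.start_run(run_name="baseline-rf"):
    model = RandomForestRegressor(n_estimators=50).fit(X, y)
    # Track a key metric for this training run.
    mlflow.log_metric("train_r2", model.score(X, y))
    # Log and register the model in one step (registry name is assumed).
    mlflow.sklearn.log_model(model, "model",
                             registered_model_name="demand_forecaster")
```

Registering the model gives it a version in the registry, which is
the handle used later to transition it toward production.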
Supercharge and deploy Amazon Redshift Serverless, train and deploy
machine learning models using Amazon Redshift ML, and run inference
queries at scale. Key Features * Learn to build Multi-Class
Classification Models * Create a model, validate a model, and draw
conclusions from K-means clustering * Learn to create a SageMaker
endpoint and use that to create a Redshift ML Model for remote
inference Book Description Amazon Redshift Serverless enables
organizations to run petabyte-scale cloud data warehouses in
minutes and in the most cost-effective way. Developers, data
analysts, and BI analysts can deploy cloud data warehouses and use
easy-to-use tools to train models and run predictions. Developers
working with Amazon Redshift data warehouses will be able to put
their SQL knowledge to work with this practical guide to train and
deploy Machine Learning Models. The book provides a hands-on
approach to implementation and associated methodologies that will
have you up-and-running, and productive in no time. Complete with
step-by-step explanations of essential concepts, practical examples
and self-assessment questions, you will begin by deploying and using
Amazon Redshift Serverless and then dive into learning and
deploying various types of machine learning projects using familiar
SQL code. You will learn how to configure and deploy Amazon
Redshift Serverless and understand the foundations of data
analytics and the types of machine learning. Then you will take a
deep dive into Redshift ML. By the end of this book, you will be
able to configure and deploy Amazon Redshift Serverless, train and
deploy machine learning models using Amazon Redshift ML, and run
inference queries
at scale. What you will learn * Learn how to implement an
end-to-end serverless architecture for ingestion, analytics and
machine learning using Redshift Serverless and Redshift ML * Learn
how to create supervised and unsupervised models, and various
techniques to influence your model * Learn how to run inference
queries at scale in Redshift to solve a variety of business
problems using models created with Redshift ML or natively in
Amazon SageMaker * Learn how to optimize your Redshift data
warehouse for extreme performance * Learn how to ensure you are
using proper security guidelines with Redshift ML * Learn how to
use model explainability in Amazon Redshift ML, to help understand
how each attribute in your training data contributes to the
predicted result. Who This Book Is For Data Scientists and Machine
Learning developers who work with Amazon Redshift and want to
explore its machine learning capabilities will find this
definitive guide helpful. Basic understanding of machine learning
techniques and working knowledge of Amazon Redshift is needed to
get the best from this book.
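For a hedged sense of how Redshift ML exposes model training and
inference through SQL, the sketch below drives the documented CREATE
MODEL statement and a prediction query from Python via the
redshift_connector package; the endpoint, table, columns, IAM role,
and S3 bucket are placeholders.

```python
# A hedged sketch of Redshift ML driven from Python; connection
# details, table, IAM role, and S3 bucket are placeholders.
import redshift_connector

conn = redshift_connector.connect(
    host="workgroup.account.region.redshift-serverless.amazonaws.com",
    database="dev", user="admin", password="secret")
cur = conn.cursor()

# Train a model in SQL; Redshift ML hands training off to SageMaker
# asynchronously, so the model is not usable immediately.
cur.execute("""
    CREATE MODEL customer_churn
    FROM (SELECT age, tenure, monthly_spend, churned FROM customers)
    TARGET churned
    FUNCTION predict_churn
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftMLRole'
    SETTINGS (S3_BUCKET 'my-redshift-ml-bucket')
""")
cur.execute("SHOW MODEL customer_churn")  # poll until the model is READY

# Once training completes, run inference at scale with the generated
# SQL function.
cur.execute("SELECT customer_id, predict_churn(age, tenure, monthly_spend) "
            "FROM customers")
print(cur.fetchmany(5))
```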
The continuing interest in the theory of monetary integration is
due on the one hand to the process of European unification, and on
the other to the instability of the world monetary system since the
collapse of the Bretton Woods agreement. The prevailing theory of
the optimum currency area, however, has proven too narrow and
methodologically questionable in light of recent developments in
the theory of economic policy and in exchange rate theory. The book
provides a comprehensive overview of the state of research on
monetary integration in general and on European monetary
integration in particular, readily accessible to members of other
social science disciplines as well. It also offers suggestions for
further research, for example on the role of labor market
institutions or of fiscal and social policy.
This application-oriented textbook centers on the architectures,
methods, and tools of decision support systems. Examples and
exercises enable the development of applications with the
demonstration software on the accompanying CD-ROM. An interactive
slide collection illustrates the text and points to additional
learning material. The first part presents traditional decision
support approaches such as value-benefit analysis (AHP) and what-if
analyses, and introduces knowledge-based systems by way of
rule-based systems. The second and third parts address the book's
main topics, data warehousing and data mining. Data warehousing and
OLAP prepare the contents of production databases for queries and
analyses by end users. After an overview of the most important data
mining methods, the third part concentrates on two of the most
widely used, rule induction and neural networks.
Explore how Delta brings reliability, performance, and governance
to your data lake and all the AI and BI use cases built on top of
it Key Features * Learn Delta's core concepts and features as well
as what makes it a perfect match for data engineering and analysis
* Solve business challenges of different industry verticals using a
scenario-based approach * Make optimal choices by understanding the
various tradeoffs provided by Delta Book Description Delta helps you
generate reliable insights at scale and simplifies architecture
around data pipelines, allowing you to focus primarily on refining
the use cases being worked on. This is especially important when
you consider that existing architecture is frequently reused for
new use cases. In this book, you'll learn about the principles of
distributed computing, data modeling techniques, and big data
design patterns and templates that help solve end-to-end data flow
problems for common scenarios and are reusable across use cases and
industry verticals. You'll also learn how to recover from errors
and the best practices around handling structured, semi-structured,
and unstructured data using Delta. After that, you'll get to grips
with features such as ACID transactions on big data, disciplined
schema evolution, time travel to help rewind a dataset to a
different time or version, and unified batch and streaming
capabilities that will help you build agile and robust data
products. By the end of this Delta book, you'll be able to use
Delta as the foundational block for creating analytics-ready data
that fuels all AI/BI use cases. What you will learn * Explore the
key challenges of traditional data lakes * Appreciate the unique
features of Delta that come out of the box * Address reliability,
performance, and governance concerns using Delta * Analyze the open
data format for an extensible and pluggable architecture * Handle
multiple use cases to support BI, AI, streaming, and data discovery
* Discover how common data and machine learning design patterns are
executed on Delta * Build and deploy data and machine learning
pipelines at scale using Delta Who this book is for Data engineers,
data scientists, ML
practitioners, BI analysts, or anyone in the data domain working
with big data will be able to put their knowledge to work with this
practical guide to executing pipelines and supporting diverse use
cases using the Delta protocol. Basic knowledge of SQL, Python
programming, and Spark is required to get the most out of this
book.
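As a small, illustrative taste of the ACID writes and time travel
described above, the PySpark sketch below writes two versions of a
Delta table and reads the first one back; the path and data are
invented, and the delta-spark package is assumed to be installed.

```python
# A minimal sketch of Delta writes and time travel; the table path
# and data are illustrative. Requires the delta-spark package.
from pyspark.sql import SparkSession
from delta import configure_spark_with_delta_pip

builder = (SparkSession.builder.appName("delta-demo")
           .config("spark.sql.extensions",
                   "io.delta.sql.DeltaSparkSessionExtension")
           .config("spark.sql.catalog.spark_catalog",
                   "org.apache.spark.sql.delta.catalog.DeltaCatalog"))
spark = configure_spark_with_delta_pip(builder).getOrCreate()

path = "/tmp/events_delta"
spark.range(0, 100).write.format("delta").mode("overwrite").save(path)  # v0
spark.range(100, 200).write.format("delta").mode("append").save(path)   # v1

# Time travel: rewind the dataset to an earlier version.
v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)
print(v0.count())  # 100 rows, as of version 0
```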
Build an end-to-end business solution in the cognitive automation
lifecycle and explore UiPath Document Understanding, UiPath AI
Center, and Druid Key Features * Explore out-of-the-box (OOTB) AI
Models in UiPath * Learn how to deploy, manage, and continuously
improve machine learning models using UiPath AI Center * Deploy
UiPath-integrated chatbots and master UiPath Document Understanding
Book Description Artificial intelligence (AI) enables enterprises to
optimize business processes that are probabilistic, highly
variable, and require cognitive abilities with unstructured data.
Many believe there is a steep learning curve with AI, however, the
goal of our book is to lower the barrier to using AI. This
practical guide to AI with UiPath will help RPA developers and
tech-savvy business users learn how to incorporate cognitive
abilities into business process optimization. With the hands-on
approach of this book, you'll quickly be on your way to
implementing cognitive automation to solve everyday business
problems. Complete with step-by-step explanations of essential
concepts, practical examples, and self-assessment questions, this
book will help you understand the power of AI and give you an
overview of the relevant out-of-the-box models. You'll learn about
cognitive AI in the context of RPA, the basics of machine learning,
and how to apply cognitive automation within the development
lifecycle. You'll then put your skills to test by building three
use cases with UiPath Document Understanding, UiPath AI Center, and
Druid. By the end of this AI book, you'll be able to build UiPath
automations with the cognitive capabilities of intelligent document
processing, machine learning, and chatbots, while understanding the
development lifecycle. What you will learn * Discover how to bridge
the gap between RPA and cognitive automation * Understand how to
configure, deploy, and maintain ML models in UiPath * Explore OOTB
models to manage documents, chats, emails, and more * Prepare test
data and test cases for user acceptance testing (UAT) * Build a
UiPath automation to act upon Druid responses * Find out how to
connect custom models to RPA Who this book is for AI Engineers and
RPA developers who want to upskill and deploy out-of-the-box models
using UiPath's AI capabilities will find this guide useful. A basic
understanding of robotic process automation and machine learning
will be beneficial but not mandatory to get started with this
UiPath book.
Do you enjoy completing puzzles? Perhaps one of the most
challenging (yet rewarding) puzzles is delivering a successful data
warehouse suitable for data mining and analytics. The Analytical
Puzzle describes an unbiased, practical, and comprehensive approach
to building a data warehouse which will lead to an increased level
of business intelligence within your organisation. New technologies
continuously impact this approach and therefore this book explains
how to leverage big data, cloud computing, data warehouse
appliances, data mining, predictive analytics, data visualisation
and mobile devices.
Understand the fundamentals of Kubernetes deployment on Azure with
a learn-by-doing approach Key Features * Get to grips with the
fundamentals of containers and Kubernetes * Deploy containerized
applications using the Kubernetes platform * Learn how you can scale
your workloads and secure your application running in Azure
Kubernetes Service Book Description Containers and Kubernetes
facilitate cloud deployments and application development
by enabling efficient versioning with improved security and
portability. With updated chapters on role-based access control,
pod identity, storing secrets, and network security in AKS, this
third edition begins by introducing you to containers, Kubernetes,
and Azure Kubernetes Service (AKS), and guides you through
deploying an AKS cluster in different ways. You will then delve
into the specifics of Kubernetes by deploying a sample guestbook
application on AKS and installing complex Kubernetes apps using
Helm. With the help of real-world examples, you'll also get to
grips with scaling your applications and clusters. As you advance,
you'll learn how to overcome common challenges in AKS and secure
your applications with HTTPS. You will also learn how to secure
your clusters and applications in a dedicated section on security.
In the final section, you'll learn about advanced integrations,
which give you the ability to create Azure databases and run
serverless functions on AKS as well as the ability to integrate AKS
with a continuous integration and continuous delivery (CI/CD)
pipeline using GitHub Actions. By the end of this Kubernetes book,
you will be proficient in deploying containerized workloads on
Microsoft Azure with minimal management overhead. What you will
learn * Plan, configure, and run containerized applications in
production * Use Docker to build applications in containers and
deploy them on Kubernetes * Monitor the AKS cluster and the
application * Monitor your infrastructure and applications in
Kubernetes using Azure Monitor * Secure your cluster and
applications using Azure-native security tools * Connect an app to
the Azure database * Store your container images securely with
Azure Container Registry * Install complex Kubernetes applications
using Helm * Integrate Kubernetes with multiple Azure PaaS
services, such as databases, Azure Security Center, and Functions *
Use GitHub Actions to perform continuous integration and continuous
delivery to your cluster Who this book is for If you are an
aspiring DevOps
professional, system administrator, developer, or site reliability
engineer interested in learning how to get the most out of
containers and Kubernetes, then this book is for you.
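To give a flavor of programmatic cluster operations like the scaling
discussed above, here is a hedged sketch using the official
Kubernetes Python client to patch a deployment's replica count; the
deployment and namespace names are placeholders, an AKS context is
assumed in your kubeconfig, and the book itself works mainly with
kubectl and Azure tooling.

```python
# A hedged sketch of scaling a workload with the official Kubernetes
# Python client; names are placeholders and an AKS cluster context is
# assumed in the local kubeconfig.
from kubernetes import client, config

config.load_kube_config()          # uses the current kubectl context
apps = client.AppsV1Api()

# Scale an existing deployment by patching its replica count.
apps.patch_namespaced_deployment_scale(
    name="guestbook-frontend",
    namespace="default",
    body={"spec": {"replicas": 5}},
)
print("scaled guestbook-frontend to 5 replicas")
```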
Nearly 80 recipes to help you collect and transform data from
multiple sources into a single data source, making it way easier to
perform analytics on the data Key Features * Build data pipelines
from scratch and find solutions to common data engineering problems
* Learn how to work with Azure Data Factory, Data Lake, Databricks,
and Synapse Analytics * Monitor and maintain your data engineering
pipelines using Log Analytics, Azure Monitor, and Azure Purview
Book Description The famous quote 'Data is the new oil' seems more
true every day as the key to most organizations' long-term success
lies in extracting insights from raw data. One of the major
challenges organizations face in leveraging value out of data is
building performant data engineering pipelines for data
visualization, ingestion, storage, and processing. This second
edition of the immensely successful book by Ahmad Osama brings to
you several recent enhancements in Azure data engineering and
shares approximately 80 useful recipes covering common scenarios in
building data engineering pipelines in Microsoft Azure. You'll
explore recipes from Azure Synapse Analytics workspaces Gen 2 and
get to grips with Synapse Spark pools, SQL Serverless pools,
Synapse integration pipelines, and Synapse data flows. You'll also
understand Synapse SQL Pool optimization techniques in this second
edition. Besides Synapse enhancements, you'll discover helpful tips
on managing Azure SQL Database and learn about security, high
availability, and performance monitoring. Finally, the book takes
you through overall data engineering pipeline management, focusing
on monitoring using Log Analytics and tracking data lineage using
Azure Purview. By the end of this book, you'll be able to build
superior data engineering pipelines along with having an invaluable
go-to guide. What you will learn * Process data using Azure
Databricks and Azure Synapse Analytics * Perform data transformation
using Azure Synapse data flows * Perform common administrative
tasks in Azure SQL Database * Build effective Synapse SQL pools
which can be consumed by Power BI * Monitor Synapse SQL and Spark
pools using Log Analytics * Track data lineage using Microsoft
Purview integration with pipelines Who this book is for This book
is for
data engineers, data architects, database administrators, and data
professionals who want to get well versed with the Azure data
services for building data pipelines. Basic understanding of cloud
and data engineering concepts will help in getting the most out of
this book.
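As a hedged companion to the pipeline-management recipes described
above, the sketch below triggers and polls a pipeline run with the
azure-mgmt-datafactory SDK; the subscription, resource group,
factory, pipeline, and parameter names are all placeholders.

```python
# A hedged sketch of triggering and monitoring an Azure Data Factory
# pipeline run; all resource names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

adf = DataFactoryManagementClient(DefaultAzureCredential(),
                                  "<subscription-id>")

# Kick off a pipeline run with a runtime parameter.
run = adf.pipelines.create_run(
    resource_group_name="rg-data",
    factory_name="adf-demo",
    pipeline_name="CopySalesToLake",
    parameters={"load_date": "2024-01-31"},
)

# Check on the run's progress.
status = adf.pipeline_runs.get("rg-data", "adf-demo", run.run_id)
print(status.status)  # e.g. Queued / InProgress / Succeeded
```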