Get to grips with building reliable, scalable, and maintainable database solutions for enterprises and production databases

Key Features
- Implement PostgreSQL 13 features to perform end-to-end modern database management
- Design, manage, and build enterprise database solutions using a unique recipe-based approach
- Solve common and not-so-common challenges faced while working to achieve optimal database performance

Book Description
PostgreSQL has become the most advanced open source database on the market. This book follows a step-by-step approach, guiding you through deploying PostgreSQL in production environments. It starts with an introduction to PostgreSQL and its architecture. You'll cover common and not-so-common challenges faced while designing and managing the database. Next, the book focuses on backup and recovery strategies to ensure your database is stable and performs optimally. Throughout the book, you'll address key challenges such as maintaining reliability, data integrity, a fault-tolerant environment, a robust feature set, extensibility, consistency, and authentication. Moving ahead, you'll learn how to manage a PostgreSQL cluster and explore replication features for high availability. Later chapters will assist you in building a secure PostgreSQL server, with recipes for encrypting data in motion and data at rest. Finally, you'll not only discover how to tune your database for optimal performance but also understand ways to monitor and manage maintenance activities, before learning how to perform PostgreSQL upgrades during downtime. By the end of this book, you'll be well-versed with the essential PostgreSQL 13 features needed to build enterprise relational databases.

What you will learn
- Understand logical and physical backups in Postgres
- Demonstrate the different types of replication methods possible with PostgreSQL today
- Set up a high availability cluster that provides seamless automatic failover for applications
- Secure a PostgreSQL server through encryption, authentication, authorization, and auditing
- Analyze the live and historic activity of a PostgreSQL server
- Understand how to monitor critical services in Postgres 13
- Manage maintenance activities and performance tuning of a PostgreSQL cluster

Who this book is for
This PostgreSQL book is for database architects, database developers and administrators, or anyone who wants to become well-versed with PostgreSQL 13 features to plan, manage, and design efficient database solutions. Prior experience with the PostgreSQL database and the SQL language is expected.
Get to grips with automated machine learning and adopt a hands-on approach to AutoML implementation and associated methodologies

Key Features
- Get up to speed with AutoML using OSS, Azure, AWS, GCP, or any platform of your choice
- Eliminate mundane tasks in data engineering and reduce human errors in machine learning models
- Find out how you can make machine learning accessible for all users to promote decentralized processes

Book Description
Every machine learning engineer deals with systems that have hyperparameters, and the most basic task in automated machine learning (AutoML) is to set these hyperparameters automatically to optimize performance. The latest deep neural networks have a wide range of hyperparameters for their architecture, regularization, and optimization, which can be tuned effectively to save time and effort. This book reviews the underlying techniques of automated feature engineering, model and hyperparameter tuning, gradient-based approaches, and much more. You'll discover different ways of implementing these techniques in open source tools and then learn to use enterprise tools to implement AutoML on the three major cloud service providers: Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform. As you progress, you'll explore the features of cloud AutoML platforms by building machine learning models with AutoML. The book will also show you how to develop accurate models by automating time-consuming and repetitive tasks in the machine learning development lifecycle. By the end of this machine learning book, you'll be able to build and deploy AutoML models that are not only accurate but also increase productivity, allow interoperability, and minimize feature engineering tasks.

What you will learn
- Explore AutoML fundamentals, underlying methods, and techniques
- Assess AutoML aspects such as algorithm selection, auto featurization, and hyperparameter tuning in an applied scenario
- Find out the difference between cloud and open source software (OSS) AutoML offerings
- Implement AutoML in the enterprise cloud to deploy ML models and pipelines
- Build explainable AutoML pipelines with transparency
- Understand automated feature engineering and time series forecasting
- Automate data science modeling tasks to implement ML solutions easily and focus on more complex problems

Who this book is for
Citizen data scientists, machine learning developers, artificial intelligence enthusiasts, or anyone looking to automatically build machine learning models using the features offered by open source tools, Microsoft Azure Machine Learning, AWS, and Google Cloud Platform will find this book useful. Beginner-level knowledge of building ML models is required to get the best out of this book. Prior experience of using the enterprise cloud is beneficial.
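The hyperparameter tuning that this blurb calls AutoML's most basic task can be sketched as a simple grid search. This is a minimal illustration in plain Python, not any vendor's AutoML API: the `validation_score` function is an invented stand-in for training and validating a real model at each configuration.

```python
from itertools import product

def validation_score(learning_rate, regularization):
    """Invented stand-in for a real train-and-validate step: the score
    peaks at learning_rate=0.1, regularization=0.01 by construction."""
    return -((learning_rate - 0.1) ** 2 + (regularization - 0.01) ** 2)

def grid_search(param_grid, score_fn):
    """Exhaustively evaluate every combination and keep the best-scoring one."""
    best_params, best_score = None, float("-inf")
    for values in product(*param_grid.values()):
        params = dict(zip(param_grid.keys(), values))
        score = score_fn(**params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

grid = {
    "learning_rate": [0.001, 0.01, 0.1, 1.0],
    "regularization": [0.0, 0.01, 0.1],
}
best, score = grid_search(grid, validation_score)
print(best)  # {'learning_rate': 0.1, 'regularization': 0.01}
```

Real AutoML systems replace the exhaustive loop with smarter strategies (random search, Bayesian optimization, gradient-based methods), but the contract is the same: a search space in, a best configuration out.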
Our environment is increasingly shaped by automatic data processing. Because its technical side is dominated by electronics, the term electronic data processing (EDP) is used throughout. There is hardly any area of human life left that does not come into contact with EDP in some way. Almost daily we encounter EDP directly or indirectly, consciously and unconsciously, for example in the form of invoices and notifications, while shopping, in administration, and above all in professional life. That is the topical context for this textbook; it is not fashionable considerations that prompt the call to study the principles and details of EDP. For prospective technicians and engineers, the necessity of in-depth study arises from the fact that, in professional practice, they are expected to show understanding of, or even detailed knowledge about, EDP. This becomes especially unavoidable for graduates with specialized training in electrical engineering. The importance of EDP is meanwhile reflected in a multitude of treatises and textbooks, whose level ranges from the simplest presentations for everyone to scientific works that only specialists can read.
Comprehensive recipes to give you valuable insights on Transformers, Reinforcement Learning, and more

Key Features
- Deep learning solutions from Kaggle Masters and Google Developer Experts
- Get to grips with the fundamentals, including variables, matrices, and data sources
- Learn advanced techniques to make your algorithms faster and more accurate

Book Description
The independent recipes in Machine Learning Using TensorFlow Cookbook will teach you how to perform complex data computations and gain valuable insights into your data. Dive into recipes on training models, model evaluation, sentiment analysis, regression analysis, artificial neural networks, and deep learning, each using Google's machine learning library, TensorFlow. This cookbook covers the fundamentals of the TensorFlow library, including variables, matrices, and various data sources. You'll discover real-world implementations of Keras and TensorFlow and learn how to use estimators to train linear models and boosted trees, both for classification and regression. Explore the practical applications of a variety of deep learning architectures, such as recurrent neural networks and Transformers, and see how they can be used to solve computer vision and natural language processing (NLP) problems. With the help of this book, you will be proficient in using TensorFlow, understand deep learning from the basics, and be able to implement machine learning algorithms in real-world scenarios.

What you will learn
- Take TensorFlow into production
- Implement and fine-tune Transformer models for various NLP tasks
- Apply reinforcement learning algorithms using the TF-Agents framework
- Understand linear regression techniques and use Estimators to train linear models
- Execute neural networks and improve predictions on tabular data
- Master convolutional neural networks and recurrent neural networks through practical recipes

Who this book is for
If you are a data scientist or a machine learning engineer, and you want to skip detailed theoretical explanations in favor of building production-ready machine learning models using TensorFlow, this book is for you. Basic familiarity with Python, linear algebra, statistics, and machine learning is necessary to make the most out of this book.
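The linear regression mentioned among the learning outcomes reduces to a short closed-form computation for a single feature. This sketch uses plain Python rather than TensorFlow's estimators, and the data points are invented for illustration; TensorFlow arrives at the same coefficients by iterative optimization instead.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept (one feature)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Points lying exactly on y = 2x + 1, so the fit recovers those coefficients.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
slope, intercept = fit_line(xs, ys)
print(slope, intercept)  # 2.0 1.0
```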
Get to grips with building and productionizing end-to-end big data solutions in Azure and learn best practices for working with large datasets

Key Features
- Integrate with Azure Synapse Analytics, Cosmos DB, and an Azure HDInsight Kafka cluster to scale and analyze your projects and build pipelines
- Use Databricks SQL to run ad hoc queries on your data lake and create dashboards
- Productionize a solution using CI/CD to deploy notebooks and the Azure Databricks service to various environments

Book Description
Azure Databricks is a unified collaborative platform for performing scalable analytics in an interactive environment. The Azure Databricks Cookbook provides recipes to get hands-on with the analytics process, including ingesting data from various batch and streaming sources and building a modern data warehouse. The book starts by teaching you how to create an Azure Databricks instance using the Azure portal, the Azure CLI, and ARM templates. You'll work through clusters in Databricks and explore recipes for ingesting data from sources including files, databases, and streaming sources such as Apache Kafka and Event Hubs. The book will help you explore all the features supported by Azure Databricks for building powerful end-to-end data pipelines. You'll also find out how to build a modern data warehouse using Delta tables and Azure Synapse Analytics. Later, you'll learn how to write ad hoc queries and extract meaningful insights from the data lake by creating visualizations and dashboards with Databricks SQL. Finally, you'll deploy and productionize a data pipeline, as well as deploy notebooks and the Azure Databricks service, using continuous integration and continuous delivery (CI/CD). By the end of this Azure book, you'll be able to use Azure Databricks to streamline different processes involved in building data-driven apps.

What you will learn
- Read and write data from and to various Azure resources and file formats
- Build a modern data warehouse with Delta tables and Azure Synapse Analytics
- Explore jobs, stages, and tasks, and see how Spark lazy evaluation works
- Handle concurrent transactions and learn performance optimization in Delta tables
- Learn Databricks SQL and create real-time dashboards in Databricks SQL
- Integrate Azure DevOps for version control, deploying, and productionizing solutions with CI/CD pipelines
- Discover how to use RBAC and ACLs to restrict data access
- Build end-to-end data processing pipelines for near real-time data analytics

Who this book is for
This recipe-based book is for data scientists, data engineers, big data professionals, and machine learning engineers who want to perform data analytics on their applications. Prior experience of working with Apache Spark and Azure is necessary to get the most out of this book.
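The Spark lazy evaluation mentioned in the learning outcomes can be imitated with Python generators: transformations only record a plan, and no data moves until an action consumes the result. This is a conceptual sketch, not the Spark API; the function names and the `work_log` bookkeeping are invented for illustration.

```python
# Conceptual sketch of lazy evaluation, not the Spark API: each "transformation"
# wraps a generator, and no element is processed until an "action" consumes it.
work_log = []

def source(items):
    """Yields items one at a time, logging each read so we can observe laziness."""
    for item in items:
        work_log.append(f"read {item}")
        yield item

def map_transform(stream, fn):
    return (fn(x) for x in stream)         # lazy: builds a plan, does no work yet

def filter_transform(stream, pred):
    return (x for x in stream if pred(x))  # also lazy

pipeline = filter_transform(
    map_transform(source([1, 2, 3, 4]), lambda x: x * 10),
    lambda x: x > 15,
)
assert work_log == []       # nothing has been read yet: only a plan exists

result = list(pipeline)     # the "action" (like Spark's collect) runs everything
print(result)               # [20, 30, 40]
print(len(work_log))        # 4 reads happened, all triggered by the action
```

Spark behaves analogously: `map` and `filter` on a DataFrame or RDD build a logical plan, and only actions such as `collect` or `count` trigger execution across the cluster.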
The results of the study group led by Dr. Parli were initially made available, in the form of an internal working report, to the members of the supporters' association of the Betriebswirtschaftliches Institut für Organisation und Automation at the University of Cologne (BIFOA). The ensuing discussion showed that the problems of the as-is analysis (Istaufnahme) in automated data processing (ADP) remain highly topical in both research and practice, so that a publication in the institute's series now seems sensible to me, not least because of numerous inquiries from business practice. The results thus become accessible to a wider circle of interested readers and can serve as an orientation aid, in particular for medium-sized and smaller enterprises and units of public administration, which, owing to the wide range of computers of different sizes on offer, have likewise joined the circle of users of automated data processing systems. The present work systematizes the experiences of business practitioners from large enterprises in various industries as well as from public administration, and examines their general validity. These are experiences from enterprises that, owing to the scale of their business, had to commit to ADP systems early on and that, in line with the stages of technical and organizational development, were confronted with as-is analysis problems of the most varied kinds. The main concern of this publication is not a purely theoretical treatment of the subject of as-is analysis and automated data processing, but a practice-oriented preparation and systematization of empirical knowledge in this field.
Learn how to gain insights from your data and machine learning, and become a presentation pro who can create interactive dashboards

Key Features
- Enhance your presentation skills by implementing engaging data storytelling and visualization techniques
- Learn the basics of machine learning and easily apply machine learning models to your data
- Improve productivity by automating your data processes

Book Description
Data Analytics Made Easy is an accessible beginner's guide for anyone working with data. The book interweaves four key elements:

Data visualizations and storytelling - Tired of people not listening to you and ignoring your results? Don't worry; chapters 7 and 8 show you how to enhance your presentations and engage with your managers and co-workers. Learn to create focused content with a well-structured story behind it to captivate your audience.

Automating your data workflows - Improve your productivity by automating your data analysis. This book introduces you to the open-source KNIME Analytics Platform. You'll see how to use this no-code, free-to-use software to create a KNIME workflow of your data processes just by clicking and dragging components.

Machine learning - Data Analytics Made Easy describes popular machine learning approaches in a simplified and visual way before implementing these models using KNIME. You'll not only be able to understand data scientists' machine learning models; you'll be able to challenge them and build your own.

Creating interactive dashboards - Follow the book's simple methodology to create professional-looking dashboards using Microsoft Power BI, giving users the capability to slice and dice data and drill down into the results.

What you will learn
- Understand the potential of data and its impact on your business
- Import, clean, transform, and combine data feeds, and automate your processes
- Influence business decisions by learning to create engaging presentations
- Build real-world models to improve profitability, create customer segmentation, automate and improve data reporting, and more
- Create professional-looking and business-centric visuals and dashboards
- Open the lid on the black box of AI and implement supervised and unsupervised machine learning models

Who this book is for
This book is for beginners who work with data and those who need to know how to interpret their business/customer data. The book also covers the high-level concepts of data workflows, machine learning, data storytelling, and visualizations, which are useful for managers. No previous math, statistics, or computer science knowledge is required.
In October 1968, heads of clinics met with specialists from the universities and the computer industry in Reinhartshausen in order to trace, amid the rapid development of the so-called second technical revolution, the direction of modern medicine. Selected lectures served as the basis for discussion. A review of the course of this conference made it seem useful to make the subject matter accessible to a larger circle, so we decided to combine the authors' manuscripts into a single work. The technical foundations of electronic data processing, however, are deliberately left aside. Reading the contributions may give the impression that seemingly bygone phases of development have been inhomogeneously combined with imaginative demands on the future. But our aim, given the astonishing speed at which electronic information processing, or better put, the modern science of informatics, is advancing, is to show its present state in medicine and, through the details, to trace the tendencies that emerge ever more clearly, now from the original mechanical forms of data capture and processing, now from the picture of the future. We hope that formative concepts for shaping the future will emerge on this basis. We thank our colleague NORBERT EICHENSEHER for his valuable support with the corrections and the compilation of the index.
Reinforce your understanding of data science and data analysis from a statistical perspective to extract meaningful insights from your data using Python programming

Key Features
- Work your way through the entire data analysis pipeline with statistical concerns in mind to make reasonable decisions
- Understand how various data science algorithms function
- Build a solid foundation in statistics for data science and machine learning using Python-based examples

Book Description
Statistics remains the backbone of modern analysis tasks, helping you to interpret the results produced by data science pipelines. This book is a detailed guide covering the math and the various statistical methods required for undertaking data science tasks. The book starts by showing you how to preprocess data and inspect distributions and correlations from a statistical perspective. You'll then get to grips with the fundamentals of statistical analysis and apply its concepts to real-world datasets. As you advance, you'll find out how statistical concepts emerge from different stages of data science pipelines, understand the summary of datasets in the language of statistics, and use it to build a solid foundation for robust data products such as explanatory and predictive models. Once you've uncovered the working mechanisms of data science algorithms, you'll cover essential concepts for efficient data collection, cleaning, mining, visualization, and analysis. Finally, you'll implement statistical methods in key machine learning tasks such as classification, regression, tree-based methods, and ensemble learning. By the end of this Essential Statistics for Non-STEM Data Analysts book, you'll have learned how to build and present a self-contained, statistics-backed data product to meet your business goals.

What you will learn
- Find out how to grab and load data into an analysis environment
- Perform descriptive analysis to extract meaningful summaries from data
- Discover probability, parameter estimation, hypothesis tests, and experiment design best practices
- Get to grips with resampling and bootstrapping in Python
- Delve into statistical tests with variance analysis, time series analysis, and A/B test examples
- Understand the statistics behind popular machine learning algorithms
- Answer questions on statistics for data scientist interviews

Who this book is for
This book is an entry-level guide for data science enthusiasts, data analysts, and anyone starting out in the field of data science and looking to learn the essential statistical concepts with the help of simple explanations and examples. If you're a developer or student with a non-mathematical background, you'll find this book useful. Working knowledge of the Python programming language is required.
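The bootstrapping mentioned in the learning outcomes has a compact core: resample the data with replacement many times and read the variability of a statistic off the resulting distribution. A minimal sketch in plain Python, with invented sample data (the book itself works with Python-based examples, though not necessarily this exact code):

```python
import random

def bootstrap_means(sample, n_resamples=1000, seed=42):
    """Resample with replacement and collect the mean of each resample."""
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    n = len(sample)
    means = []
    for _ in range(n_resamples):
        resample = [sample[rng.randrange(n)] for _ in range(n)]
        means.append(sum(resample) / n)
    return means

sample = [12.1, 9.8, 11.4, 10.2, 13.0, 9.5, 10.9, 11.7]
means = sorted(bootstrap_means(sample))
# A rough 90% confidence interval for the mean: the 5th and 95th percentiles
# of the bootstrap distribution.
low, high = means[50], means[949]
print(round(low, 2), round(high, 2))
```

The appeal of the method is that it needs no distributional assumptions: the same loop yields an interval for a median, a correlation, or any other statistic you can compute on a resample.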
This report discusses the role computer-assisted personal
interviewing (CAPI) can play in transforming survey data collection
to allow better monitoring of the Sustainable Development Goals.
The first part of this publication provides rigorous quantitative
evidence on why CAPI is a better alternative to the traditional pen
and paper interviewing method, particularly in the context of
nationally representative surveys. The second part discusses the benefits of delivering CAPI training to statisticians using the popular massive open online course (MOOC) format. The final part provides
a summary of existing CAPI platforms and offers some preliminary
advice for NSOs to consider when selecting a CAPI platform for
their institution. This is a Special Supplement to the Key
Indicators for Asia and the Pacific 2019.
Today, the term steel denotes all iron alloys, regardless of their properties, with the exception of the non-forgeable, high-carbon cast grades such as gray cast iron, chilled cast iron, and malleable cast iron. In the past, hardenability was regarded as the essential characteristic of steel. There are, however, quite a number of steels that cannot be hardened and that, on the contrary, even become softer and tougher when quenched from high temperatures. The name high-grade steels (Edelstähle) is often given to those steels that are alloyed not only with carbon but also with other elements, e.g. chromium, nickel, tungsten, vanadium, and so on. This definition, however, is neither exhaustive nor beyond dispute, for a plain carbon steel that has been carefully produced and conscientiously tested again and again along the entire path of manufacture, from casting to dispatch, must undoubtedly also be counted among the high-grade steels. On the other hand, bulk steels sometimes contain certain amounts of alloying elements, including as unintended impurities. One will hit the mark by calling the cheap steels produced in large quantities by the big ironworks bulk steels, and the steels manufactured by a special steel works with care and under the strictest control high-grade steels. The cheap bulk steels are usually sold by strength grade, the high-grade steels by intended use and under a brand name.
Get up to speed with the new features added to Microsoft SQL Server 2019 Analysis Services and create models to support your business

Key Features
- Explore tips and tricks to design, develop, and optimize end-to-end data analytics solutions using Microsoft's technologies
- Learn tabular modeling and multi-dimensional cube design development using real-world examples
- Implement Analysis Services to help you make productive business decisions

Book Description
SQL Server Analysis Services (SSAS) continues to be a leading enterprise-scale toolset, enabling customers to deliver data and analytics across large datasets with great performance. This book will help you understand MS SQL Server 2019's new features and improvements, especially when it comes to SSAS. First, you'll cover a quick overview of SQL Server 2019, learn how to choose the right analytical model to use, and understand the key differences between models. You'll then explore how to create a multi-dimensional model with SSAS and expand on that model with MDX. Next, you'll create and deploy a tabular model using Microsoft Visual Studio and Management Studio. You'll learn when and how to use both tabular and multi-dimensional model types, how to deploy and configure your servers to support them, and the design principles that are relevant to each model. The book comes packed with tips and tricks to build measures, optimize your design, and interact with models using Excel and Power BI. All this will help you visualize data to gain useful insights and make better decisions. Finally, you'll discover practices and tools for securing and maintaining your models once they are deployed. By the end of this MS SQL Server book, you'll be able to choose the right model, and build and deploy it to support the analytical needs of your business.

What you will learn
- Determine the best analytical model using SSAS
- Cover the core aspects involved in MDX, including writing your first query
- Implement calculated tables and calculation groups (new in version 2019) in DAX
- Create and deploy tabular and multi-dimensional models on SQL Server 2019
- Connect to models and create data visualizations using Excel and Power BI
- Implement row-level and other data security methods with tabular and multi-dimensional models
- Explore essential concepts and techniques to scale, manage, and optimize your SSAS solutions

Who this book is for
This Microsoft SQL Server book is for BI professionals and data analysts who are looking for a practical guide to creating and maintaining tabular and multi-dimensional models using SQL Server 2019 Analysis Services. A basic working knowledge of BI solutions such as Power BI and database querying is required.
Leverage the Azure analytics platform's key analytics services to deliver unmatched intelligence for your data

Key Features
- Learn to ingest, prepare, manage, and serve data for immediate business requirements
- Bring enterprise data warehousing and big data analytics together to gain insights from your data
- Develop end-to-end analytics solutions using Azure Synapse

Book Description
Azure Synapse Analytics, which Microsoft describes as the next evolution of Azure SQL Data Warehouse, is a limitless analytics service that brings enterprise data warehousing and big data analytics together. With this book, you'll learn how to discover insights from your data effectively using this platform. The book starts with an overview of Azure Synapse Analytics, its architecture, and how it can be used to improve business intelligence and machine learning capabilities. Next, you'll go on to choose and set up the correct environment for your business problem. You'll also learn a variety of ways to ingest data from various sources and orchestrate the data using transformation techniques offered by Azure Synapse. Later, you'll explore how to handle both relational and non-relational data using the SQL language. As you progress, you'll perform real-time streaming and execute data analysis operations on your data using various languages, before going on to apply ML techniques to derive accurate and granular insights from data. Finally, you'll discover how to protect sensitive data in real time by using security and privacy features. By the end of this Azure book, you'll be able to build end-to-end analytics solutions while focusing on data prep, data management, data warehousing, and AI tasks.

What you will learn
- Explore the necessary considerations for data ingestion and orchestration while building analytical pipelines
- Understand pipelines and activities in Synapse pipelines and use them to construct end-to-end data-driven workflows
- Query data using various coding languages on Azure Synapse
- Focus on Synapse SQL and Synapse Spark
- Manage and monitor resource utilization and query activity in Azure Synapse
- Connect Power BI workspaces with Azure Synapse and create or modify reports directly from Synapse Studio
- Create and manage IP firewall rules in Azure Synapse

Who this book is for
This book is for data architects, data scientists, data engineers, and business analysts who are looking to get up and running with the Azure Synapse Analytics platform. Basic knowledge of data warehousing will be beneficial to help you understand the concepts covered in this book more effectively.