Learn the fundamentals of data science with Python by analyzing
real datasets and solving problems using pandas Key Features *
Learn how to apply data retrieval, transformation, visualization,
and modeling techniques using pandas * Become highly efficient in
unlocking deeper insights from your data, including databases, web
data, and more * Build your experience and confidence with hands-on
exercises and activities Book Description The Pandas Workshop will
teach you how to be more productive with data and generate real
business insights to inform your decision-making. You will be
guided through real-world data science problems and shown how to
apply key techniques in the context of realistic examples and
exercises. Engaging activities will then challenge you to apply
your new skills in a way that prepares you for real data science
projects. You'll see how experienced data scientists tackle a wide
range of problems using data analysis with pandas. Unlike other
Python books, which focus on theory and spend too long on dry,
technical explanations, this workshop is designed to quickly get
you to write clean code and build your understanding through
hands-on practice. As you work through this Python pandas book,
you'll tackle various real-world scenarios, such as using an air
quality dataset to understand the pattern of nitrogen dioxide
emissions in a city, as well as analyzing transportation data to
improve bus transportation services. By the end of this data
analytics book, you'll have the knowledge, skills, and confidence
you need to solve your own challenging data science problems with
pandas. What you will learn * Access and load data from different
sources using pandas * Work with a range of data types and
structures to understand your data * Perform data transformation to
prepare it for analysis * Use Matplotlib for data visualization to
create a variety of plots * Create data models to find
relationships and test hypotheses * Manipulate time-series data to
perform date-time calculations * Optimize your code to ensure more
efficient business data analysis Who This Book Is For This data
analysis book is for anyone with prior experience working with the
Python programming language who wants to learn the fundamentals of
data analysis with pandas. Previous knowledge of pandas is not
necessary.
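For readers who want a first taste of the workflow described above, here is a minimal pandas sketch in the spirit of the book's air quality example; the file name air_quality.csv and the timestamp and no2 column names are hypothetical stand-ins, not taken from the book:

    import pandas as pd
    import matplotlib.pyplot as plt

    # Load a (hypothetical) air quality dataset, parsing timestamps on read.
    df = pd.read_csv("air_quality.csv", parse_dates=["timestamp"])

    # Transform: index by time and resample hourly readings to daily means.
    daily_no2 = df.set_index("timestamp")["no2"].resample("D").mean()

    # Visualize the emission pattern with Matplotlib, as the blurb mentions.
    daily_no2.plot(title="Daily mean NO2")
    plt.show()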
Get to grips with building reliable, scalable, and maintainable
database solutions for enterprises and production databases Key
Features Implement PostgreSQL 13 features to perform end-to-end
modern database management Design, manage, and build enterprise
database solutions using a unique recipe-based approach Solve
common and not-so-common challenges faced while working to achieve
optimal database performance Book Description PostgreSQL has become
the most advanced open source database on the market. This book
follows a step-by-step approach, guiding you effectively in
deploying PostgreSQL in production environments. The book starts
with an introduction to PostgreSQL and its architecture. You'll
cover common and not-so-common challenges faced while designing and
managing the database. Next, the book focuses on backup and
recovery strategies to ensure your database is steady and achieves
optimal performance. Throughout the book, you'll address key
challenges such as maintaining reliability, data integrity, a
fault-tolerant environment, a robust feature set, extensibility,
consistency, and authentication. Moving ahead, you'll learn how to
manage a PostgreSQL cluster and explore replication features for
high availability. Later chapters will assist you in building a
secure PostgreSQL server, along with covering recipes for
encrypting data in motion and data at rest. Finally, you'll not
only discover how to tune your database for optimal performance but
also understand ways to monitor and manage maintenance activities,
before learning how to perform PostgreSQL upgrades during downtime.
By the end of this book, you'll be well-versed with the essential
PostgreSQL 13 features to build enterprise relational databases.
What you will learn Understand logical and physical backups in
Postgres Demonstrate the different types of replication methods
possible with PostgreSQL today Set up a high availability cluster
that provides seamless automatic failover for applications Secure a
PostgreSQL server through authentication, authorization, and
auditing Analyze the live and historic activity of a PostgreSQL
server Understand how to monitor critical services in Postgres 13
Manage maintenance activities and performance tuning of a
PostgreSQL cluster Who this book is for This PostgreSQL book is for
database architects, database developers and administrators, or
anyone who wants to become well-versed with PostgreSQL 13 features
to plan, manage, and design efficient database solutions. Prior
experience with the PostgreSQL database and SQL language is
expected.
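As a small taste of the monitoring recipes mentioned above, the following sketch reads PostgreSQL's built-in pg_stat_activity view of live sessions from Python; the connection parameters are placeholders, and psycopg2 is simply one common client library, not necessarily the book's choice:

    import psycopg2

    # Placeholder connection details; adjust for your own server.
    conn = psycopg2.connect(host="localhost", dbname="postgres",
                            user="postgres", password="secret")

    # pg_stat_activity is PostgreSQL's built-in view of current sessions,
    # one of the sources used when analyzing live server activity.
    with conn, conn.cursor() as cur:
        cur.execute("SELECT pid, usename, state, query "
                    "FROM pg_stat_activity WHERE state <> 'idle';")
        for pid, user, state, query in cur.fetchall():
            print(pid, user, state, (query or "")[:60])

    conn.close()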
Our environment is increasingly shaped by automatic data
processing. Because its technical side is dominated by electronics,
the term electronic data processing (EDV) is used throughout. There
is hardly any area of human life left that does not at least come
into contact with EDV. Almost daily we encounter EDV directly or
indirectly, consciously and unconsciously, for example in the form
of invoices and notifications, when shopping, in administration,
and above all in working life. That is the topical motivation for
the present textbook; it is not fashionable considerations that
prompt the call to study the principles and details of EDV. For
prospective technicians and engineers, the need for deeper study
arises because, in professional practice, they are expected to
understand, or even to have detailed knowledge of, EDV. This
becomes all the more unavoidable for graduates with specialized
training in electrical engineering. The importance of EDV is by now
reflected in a large number of treatises and textbooks, whose level
ranges from the simplest presentations for the general reader to
scientific works that only specialists can read.
Learn how to bring your data to life with this hands-on guide to
visual analytics with Tableau Key Features Master the fundamentals
of Tableau Desktop and Tableau Prep Learn how to explore, analyze,
and present data to provide business insights Build your experience
and confidence with hands-on exercises and activities Book
Description Learning Tableau has never been easier, thanks to this
practical introduction to storytelling with data. The Tableau
Workshop breaks down the analytical process into five steps: data
preparation, data exploration, data analysis, interactivity, and
distribution of dashboards. Each stage is addressed with a clear
walkthrough of the key tools and techniques you'll need, as well as
engaging real-world examples, meaningful data, and practical
exercises to give you valuable hands-on experience. As you work
through the book, you'll learn Tableau step by step, studying how
to clean, shape, and combine data, as well as how to choose the
most suitable charts for any given scenario. You'll load data from
various sources and formats, perform data engineering to create new
data that delivers deeper insights, and create interactive
dashboards that engage end-users. All concepts are introduced with
clear, simple explanations and demonstrated through realistic
example scenarios. You'll simulate real-world data science projects
with use cases such as traffic violations, urban populations,
coffee store sales, and air travel delays. By the end of this
Tableau book, you'll have the skills and knowledge to confidently
present analytical results and make data-driven decisions. What you
will learn Become an effective user of Tableau Prep and Tableau
Desktop Load, combine, and process data for analysis and
visualization Understand different types of charts and when to use
them Perform calculations to engineer new data and unlock hidden
insights Add interactivity to your visualizations to make them more
engaging Create holistic dashboards that are detailed and
user-friendly Who this book is for This book is for anyone who wants
to get started on visual analytics with Tableau. If you're new to
Tableau, this Workshop will get you up and running. If you already
have some experience in Tableau, this book will help fill in any
gaps, consolidate your understanding, and give you extra practice
of key tools.
The findings of the study group led by Dr. Parli were initially
made available, in the form of an internal working report, to the
members of the sponsors' association of the Betriebswirtschaftliches
Institut für Organisation und Automation an der Universität zu Köln
(BIFOA). The discussion that followed showed that the problems of
the as-is analysis (Istaufnahme) in automated data processing (ADV)
remain highly topical in both research and practice, so that a
publication in the institute's series now seems sensible to me, not
least in view of numerous inquiries from business practice. The
results thereby become accessible to a wider circle of interested
readers and can serve as a guide especially for medium-sized and
smaller enterprises and for units of public administration, which,
given the wide range of computer sizes on offer, have likewise
joined the circle of users of automated data processing systems.
The present work systematizes the experiences of business
practitioners from large enterprises in various industries, as well
as from public administration, and examines how generally valid
they are. These experiences come from enterprises which, owing to
the scale of their business, had to commit to ADV systems early on
and which, in step with the stages of technical and organizational
development, were confronted with as-is analysis problems of the
most varied kinds. The main concern of this publication is not a
purely theoretical treatment of the subject of as-is analysis and
automated data processing, but a practice-oriented preparation and
systematization of empirical knowledge in this field.
Many organizations, including government institutions and agencies,
continue to increase their financial investment in information
technology (IT) solutions. Despite these investments, during the
global pandemic employees and managers have been struggling, or
simply unequipped, to use these tools effectively and efficiently
for sustainability, competitive advantage, and decision-making. In
the face of global pandemics, companies must harness the power of various
digital channels such as big data analytics and artificial
intelligence to better serve their customers and business partners.
Using Information Technology Advancements to Adapt to Global
Pandemics provides insights and understanding on how companies and
organizations are using advances in IT to adapt to global pandemics
such as COVID-19. It explores how the various IT approaches can be
used for strategic purposes. Covering topics such as higher
education institutions, religious organizations, and telework, this
premier reference source is an essential resource for government
officials, business leaders and managers, industry professionals,
IT specialists, policymakers, libraries, academicians, students,
and researchers.
Learn how to gain insights from your data using machine
learning, and become a presentation pro who can create interactive
dashboards Key Features Enhance your presentation skills by
implementing engaging data storytelling and visualization
techniques Learn the basics of machine learning and easily apply
machine learning models to your data Improve productivity by
automating your data processes Book Description Data Analytics Made
Easy is an accessible beginner's guide for anyone working with
data. The book interweaves four key elements: Data visualizations
and storytelling - Tired of people not listening to you and
ignoring your results? Don't worry; chapters 7 and 8 show you how
to enhance your presentations and engage with your managers and
co-workers. Learn to create focused content with a well-structured
story behind it to captivate your audience. Automating your data
workflows - Improve your productivity by automating your data
analysis. This book introduces you to the open-source platform,
KNIME Analytics Platform. You'll see how to use this no-code and
free-to-use software to create a KNIME workflow of your data
processes just by clicking and dragging components. Machine
learning - Data Analytics Made Easy describes popular machine
learning approaches in a simplified and visual way before
implementing these machine learning models using KNIME. You'll not
only be able to understand data scientists' machine learning
models; you'll be able to challenge them and build your own.
Creating interactive dashboards - Follow the book's simple
methodology to create professional-looking dashboards using
Microsoft Power BI, giving users the capability to slice and dice
data and drill down into the results. What you will learn
Understand the potential of data and its impact on your business
Import, clean, transform, combine data feeds, and automate your
processes Influence business decisions by learning to create
engaging presentations Build real-world models to improve
profitability, create customer segmentation, automate and improve
data reporting, and more Create professional-looking and
business-centric visuals and dashboards Open the lid on the black
box of AI and learn about and implement supervised and unsupervised
machine learning models Who this book is for This book is for
beginners who work with data and those who need to know how to
interpret their business/customer data. The book also covers the
high-level concepts of data workflows, machine learning, data
storytelling, and visualizations, which are useful for managers. No
previous math, statistics, or computer science knowledge is
required.
In October 1968, heads of clinics met in Reinhartshausen with
specialists from the universities and the computer industry in
order to trace the direction of modern medicine within the rapid
development of the so-called second technical revolution. Selected
lectures served as the basis for discussion. A review of the course
of this conference made it seem worthwhile to open the subject
matter to a larger circle, so we decided to combine the authors'
manuscripts into a single work. The technical foundations of
electronic data processing, however, are deliberately left aside.
Reading through the contributions may give the impression that
development phases apparently already past have been combined,
inhomogeneously, with imaginative demands on the future. But our
concern is, given the astonishing speed at which electronic
information processing, or better put, the modern science of
informatics, is advancing, to show its present state in medicine
and to use its details to bring out the tendencies that emerge ever
more clearly, now from the original mechanical forms of the capture
and processing of data, now from the picture of the future. We hope
that on this basis formative concepts for shaping the future will
arise. We thank our colleague NORBERT EICHENSEHER for his valuable
support with the corrections and the compilation of the index.
Hands-on MuleSoft Anypoint Platform Volume 3: Implement various connectors including Database, File, SOAP, Email, VM, JMS, AMQP, Scripting, SFTP, LDAP, Java and ObjectStore (Paperback), by Nanda Nachimuthu.
This report discusses the role computer-assisted personal
interviewing (CAPI) can play in transforming survey data collection
to allow better monitoring of the Sustainable Development Goals.
The first part of this publication provides rigorous quantitative
evidence on why CAPI is a better alternative to the traditional pen
and paper interviewing method, particularly in the context of
nationally representative surveys. The second part discusses the
benefits of delivering CAPI training to statisticians using the
popular massive online open course format. The final part provides
a summary of existing CAPI platforms and offers some preliminary
advice for NSOs to consider when selecting a CAPI platform for
their institution. This is a Special Supplement to the Key
Indicators for Asia and the Pacific 2019.
Today all iron alloys are termed steel, with the exception of the
non-forgeable, high-carbon cast grades such as gray cast iron,
chilled cast iron, and malleable cast iron, regardless of their
properties. In earlier times, hardenability was regarded as the
essential characteristic of steel. There are, however, quite a
number of steels that cannot be hardened and that, on the contrary,
actually become softer and tougher when quenched from high
temperatures. The name Edelstähle (special steels) is often given
to those steels that are alloyed not only with carbon but also with
other elements, for example chromium, nickel, tungsten, vanadium,
and so on. This definition, however, is neither exhaustive nor
beyond dispute, for a plain carbon steel that has been carefully
produced and conscientiously tested again and again along the
entire path of manufacture, from casting to dispatch, must
undoubtedly also be counted among the special steels. On the other
hand, mass-produced steels sometimes contain certain amounts of
alloying elements, even as unintended impurities. One comes closest
to the truth by calling the cheap steels produced in large
quantities by the big steelworks mass steels (Massenstähle), and
the steels produced with care and under the strictest control by a
special-steel works special steels. The cheap mass steels are
usually sold by strength grade, the special steels by intended use
and under a brand name.
Leverage the Azure analytics platform's key analytics services to
deliver unmatched intelligence for your data Key Features Learn to
ingest, prepare, manage, and serve data for immediate business
requirements Bring enterprise data warehousing and big data
analytics together to gain insights from your data Develop
end-to-end analytics solutions using Azure Synapse Book
Description Azure Synapse Analytics, which Microsoft describes as
the next evolution of Azure SQL Data Warehouse, is a limitless
analytics service that brings enterprise data warehousing and big
data analytics together. With this book, you'll learn how to
discover insights from your data effectively using this platform.
The book starts with an overview of Azure Synapse Analytics, its
architecture, and how it can be used to improve business
intelligence and machine learning capabilities. Next, you'll go on
to choose and set up the correct environment for your business
problem. You'll also learn a variety of ways to ingest data from
various sources and orchestrate the data using transformation
techniques offered by Azure Synapse. Later, you'll explore how to
handle both relational and non-relational data using the SQL
language. As you progress, you'll perform real-time streaming and
execute data analysis operations on your data using various
languages, before going on to apply ML techniques to derive
accurate and granular insights from data. Finally, you'll discover
how to protect sensitive data in real time by using security and
privacy features. By the end of this Azure book, you'll be able to
build end-to-end analytics solutions while focusing on data prep,
data management, data warehousing, and AI tasks. What you will
learn Explore the necessary considerations for data ingestion and
orchestration while building analytical pipelines Understand
pipelines and activities in Synapse pipelines and use them to
construct end-to-end data-driven workflows Query data using various
coding languages on Azure Synapse Focus on Synapse SQL and Synapse
Spark Manage and monitor resource utilization and query activity in
Azure Synapse Connect Power BI workspaces with Azure Synapse and
create or modify reports directly from Synapse Studio Create and
manage IP firewall rules in Azure Synapse Who this book is for This
book is for data architects, data scientists, data engineers, and
business analysts who are looking to get up and running with the
Azure Synapse Analytics platform. Basic knowledge of data
warehousing will be beneficial to help you understand the concepts
covered in this book more effectively.
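To make the querying step above concrete, here is a minimal PySpark sketch of the Spark-plus-SQL style of analysis the blurb mentions; the storage path, account name, and column names are invented for illustration and are not from the book:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("synapse-demo").getOrCreate()

    # Ingest raw files from a data lake path into a DataFrame
    # (the abfss URL below is a made-up example).
    sales = spark.read.parquet(
        "abfss://data@myaccount.dfs.core.windows.net/sales/")

    # Register the DataFrame so it can be queried with SQL as well.
    sales.createOrReplaceTempView("sales")
    top_products = spark.sql(
        "SELECT product_id, SUM(amount) AS revenue "
        "FROM sales GROUP BY product_id "
        "ORDER BY revenue DESC LIMIT 10")
    top_products.show()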
Understand the complexities of modern-day data engineering
platforms and explore strategies to deal with them with the help of
use case scenarios led by an industry expert in big data Key
Features Become well-versed with the core concepts of Apache Spark
and Delta Lake for building data platforms Learn how to ingest,
process, and analyze data that can be later used for training
machine learning models Understand how to operationalize data
models in production using curated data Book Description In the
world of ever-changing data and schemas, it is important to build
data pipelines that can auto-adjust to changes. This book will help
you build scalable data platforms that managers, data scientists,
and data analysts can rely on. Starting with an introduction to
data engineering, along with its key concepts and architectures,
this book will show you how to use Microsoft Azure Cloud services
effectively for data engineering. You'll cover data lake design
patterns and the different stages through which the data needs to
flow in a typical data lake. Once you've explored the main features
of Delta Lake to build data lakes with fast performance and
governance in mind, you'll advance to implementing the lambda
architecture using Delta Lake. Packed with practical examples and
code snippets, this book takes you through real-world examples
based on production scenarios faced by the author in his 10 years
of experience working with big data. Finally, you'll cover data
lake deployment strategies that play an important role in
provisioning the cloud resources and deploying the data pipelines
in a repeatable and continuous way. By the end of this data
engineering book, you'll know how to effectively deal with
ever-changing data and create scalable data pipelines to streamline
data science, ML, and artificial intelligence (AI) tasks. What you
will learn Discover the challenges you may face in the data
engineering world Add ACID transactions to Apache Spark using Delta
Lake Understand effective design strategies to build
enterprise-grade data lakes Explore architectural and design
patterns for building efficient data ingestion pipelines
Orchestrate a data pipeline for preprocessing data using Apache
Spark and Delta Lake APIs Automate deployment and monitoring of
data pipelines in production Get to grips with securing,
monitoring, and managing data pipeline models efficiently Who this
book is for This book is for aspiring data engineers and data
analysts who are new to the world of data engineering and are
looking for a practical guide to building scalable data platforms.
If you already work with PySpark and want to use Delta Lake for
data engineering, you'll find this book useful. Basic knowledge of
Python, Spark, and SQL is expected.
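For a flavor of the Delta Lake material, here is a minimal sketch of writing and reading a transactional Delta table from Apache Spark; the two session settings are the standard way to enable Delta Lake (the delta-spark package must be installed), while the table path and toy data are made up:

    from pyspark.sql import SparkSession

    spark = (SparkSession.builder.appName("delta-demo")
             # Standard settings that enable Delta Lake's SQL support.
             .config("spark.sql.extensions",
                     "io.delta.sql.DeltaSparkSessionExtension")
             .config("spark.sql.catalog.spark_catalog",
                     "org.apache.spark.sql.delta.catalog.DeltaCatalog")
             .getOrCreate())

    # Writing in the delta format gives Spark ACID transactions:
    # each write either fully commits or is not visible at all.
    events = spark.createDataFrame([(1, "click"), (2, "view")],
                                   ["id", "action"])
    events.write.format("delta").mode("overwrite").save("/tmp/events")

    # Readers always see a consistent snapshot of the table.
    spark.read.format("delta").load("/tmp/events").show()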
Data is an increasingly important business asset and enabler for
organisational activities. Data quality is a key aspect of data
management and failure to understand it increases organisational
risk and decreases efficiency and profitability. This book explains
data quality management in practical terms, focusing on three key
areas - the nature of data in enterprises, the purpose and scope of
data quality management, and implementing a data quality management
system, in line with ISO 8000-61.
Quickly build and deploy massive data pipelines and improve
productivity using Azure Databricks Key Features Get to grips with
the distributed training and deployment of machine learning and
deep learning models Learn how ETLs are integrated with Azure Data
Factory and Delta Lake Explore deep learning and machine learning
models in a distributed computing infrastructure Book
Description Microsoft Azure Databricks helps you to harness the
power of distributed computing and apply it to create robust data
pipelines, along with training and deploying machine learning and
deep learning models. Databricks' advanced features enable
developers to process, transform, and explore data. Distributed
Data Systems with Azure Databricks will help you to put your
knowledge of Databricks to work to create big data pipelines. The
book provides a hands-on approach to implementing Azure Databricks
and its associated methodologies that will make you productive in
no time. Complete with detailed explanations of essential concepts,
practical examples, and self-assessment questions, you'll begin
with a quick introduction to Databricks core functionalities,
before performing distributed model training and inference using
TensorFlow and Spark MLlib. As you advance, you'll explore MLflow
Model Serving on Azure Databricks and implement distributed
training pipelines using HorovodRunner in Databricks. Finally,
you'll discover how to transform, use, and obtain insights from
massive amounts of data to train predictive models and create
entire fully working data pipelines. By the end of this MS Azure
book, you'll have gained a solid understanding of how to work with
Databricks to create and manage an entire big data pipeline. What
you will learn Create ETLs for big data in Azure Databricks Train,
manage, and deploy machine learning and deep learning models
Integrate Databricks with Azure Data Factory for extract,
transform, load (ETL) pipeline creation Discover how to use Horovod
for distributed deep learning Find out how to use Delta Engine to
query and process data from Delta Lake Understand how to use Data
Factory in combination with Databricks Use Structured Streaming in
a production-like environment Who this book is for This book is for
software engineers, machine learning engineers, data scientists,
and data engineers who are new to Azure Databricks and want to
build high-quality data pipelines without worrying about
infrastructure. Knowledge of Azure Databricks basics is required to
learn the concepts covered in this book more effectively. A basic
understanding of machine learning concepts and beginner-level
Python programming knowledge is also recommended.
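As a brief illustration of the distributed model training referred to above, here is a minimal Spark MLlib sketch; the inline toy dataset and column names are invented, and this is generic Spark code rather than an example from the book:

    from pyspark.sql import SparkSession
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.classification import LogisticRegression

    spark = SparkSession.builder.appName("mllib-demo").getOrCreate()

    # A tiny toy dataset; in Databricks this would come from a big
    # data pipeline, and training is distributed across workers.
    df = spark.createDataFrame(
        [(0.0, 1.0, 0.0), (1.0, 0.0, 1.0),
         (0.9, 0.2, 1.0), (0.1, 0.9, 0.0)],
        ["f1", "f2", "label"])

    # Assemble feature columns into the vector column MLlib expects.
    assembler = VectorAssembler(inputCols=["f1", "f2"],
                                outputCol="features")
    train = assembler.transform(df)

    # Fit and apply a logistic regression model.
    model = LogisticRegression(featuresCol="features",
                               labelCol="label").fit(train)
    model.transform(train).select("label", "prediction").show()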