Designed with beginners in mind, this workshop helps you make the
most of Python libraries and the Jupyter Notebook's functionality
to understand how data science can be applied to solve real-world
data problems.

Key Features:
- Gain useful insights into data science and machine learning
- Explore the different functionalities and features of a Jupyter Notebook
- Discover how Python libraries are used with Jupyter for data analysis

Book Description: From banking
and manufacturing through to education and entertainment, using
data science for business has revolutionized almost every sector in
the modern world. It has an important role to play in everything
from app development to network security. Taking an interactive
approach to learning the fundamentals, this book is ideal for
beginners. You'll learn all the best practices and techniques for
applying data science in the context of real-world scenarios and
examples. Starting with an introduction to data science and machine
learning, you'll get to grips with Jupyter functionality and
features. You'll use Python libraries like
scikit-learn, pandas, Matplotlib, and Seaborn to perform data
analysis and data preprocessing on real-world datasets from within
your own Jupyter environment. Progressing through the chapters,
you'll train classification models using scikit-learn and assess
model performance using advanced validation techniques. Towards the
end, you'll use Jupyter Notebooks to document your research, build
stakeholder reports, and even analyze web performance data. By the
end of The Applied Data Science Workshop, you'll be prepared to
progress from being a beginner to taking your skills to the next
level by confidently applying data science techniques and tools to
real-world projects.

What you will learn:
- Understand the key opportunities and challenges in data science
- Use Jupyter for data science tasks such as data analysis and modeling
- Run exploratory data analysis within a Jupyter Notebook
- Visualize data with pairwise scatter plots and segmented distribution plots
- Assess model performance with advanced validation techniques
- Parse HTML responses and analyze HTTP requests

Who this book is for: If you are
an aspiring data scientist who wants to build a career in data
science or a developer who wants to explore the applications of
data science from scratch and analyze data in Jupyter using Python
libraries, then this book is for you. Although a brief
understanding of Python programming and machine learning is
recommended to help you grasp the topics covered in the book more
quickly, it is not mandatory.
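As a taste of the model-validation workflow the workshop teaches, here is a minimal sketch of k-fold cross-validation written with only the standard library (the book itself uses scikit-learn; the toy dataset and nearest-centroid classifier below are illustrative assumptions, not from the book):

```python
import statistics

# Toy 1-feature dataset: values below ~5 are class 0, above are class 1.
# (Hypothetical data, just to illustrate the validation loop.)
data = [(1.0, 0), (2.0, 0), (3.0, 0), (4.0, 0),
        (6.0, 1), (7.0, 1), (8.0, 1), (9.0, 1)]

def nearest_centroid_predict(train, x):
    """Predict the class whose training-set mean is closest to x."""
    centroids = {}
    for label in {y for _, y in train}:
        values = [v for v, y in train if y == label]
        centroids[label] = statistics.mean(values)
    return min(centroids, key=lambda label: abs(x - centroids[label]))

def k_fold_accuracy(data, k=4):
    """Hold out each of k folds once for testing; average the fold accuracies."""
    fold_scores = []
    for i in range(k):
        test = data[i::k]                        # every k-th sample
        train = [s for s in data if s not in test]
        correct = sum(nearest_centroid_predict(train, x) == y for x, y in test)
        fold_scores.append(correct / len(test))
    return statistics.mean(fold_scores)

accuracy = k_fold_accuracy(data)
```

The same loop is what `sklearn.model_selection.cross_val_score` automates on real datasets.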
Build, monitor, and manage real-time data pipelines to create data
engineering infrastructure efficiently using open-source Apache
projects

Key Features:
- Become well-versed in data architectures, data preparation, and data optimization skills with the help of practical examples
- Design data models and learn how to extract, transform, and load (ETL) data using Python
- Schedule, automate, and monitor complex data pipelines in production

Book Description: Data
engineering provides the foundation for data science and analytics,
and forms an important part of all businesses. This book will help
you to explore various tools and methods that are used for
understanding the data engineering process using Python. The book
will show you how to tackle challenges commonly faced in different
aspects of data engineering. You'll start with an introduction to
the basics of data engineering, along with the technologies and
frameworks required to build data pipelines to work with large
datasets. You'll learn how to transform and clean data and perform
analytics to get the most out of your data. As you advance, you'll
discover how to work with big data of varying complexity and
production databases, and build data pipelines. Using real-world
examples, you'll build architectures on which you'll learn how to
deploy data pipelines. By the end of this Python book, you'll have
gained a clear understanding of data modeling techniques, and will
be able to confidently build data engineering pipelines for
tracking data, running quality checks, and making necessary changes
in production.

What you will learn:
- Understand how data engineering supports data science workflows
- Discover how to extract data from files and databases and then clean, transform, and enrich it
- Configure processors for handling different file formats as well as both relational and NoSQL databases
- Find out how to implement a data pipeline and dashboard to visualize results
- Use staging and validation to check data before it lands in the warehouse
- Build real-time pipelines with staging areas that perform validation and handle failures
- Get to grips with deploying pipelines in the production environment

Who this book is for: This book is for data
analysts, ETL developers, and anyone looking to get started with or
transition to the field of data engineering or refresh their
knowledge of data engineering using Python. This book will also be
useful for students planning to build a career in data engineering
or IT professionals preparing for a transition. No previous
knowledge of data engineering is required.
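To illustrate the extract-transform-load pattern the book is organized around, here is a small sketch using only the Python standard library; the CSV data, table name, and cleaning rules are hypothetical stand-ins for a real pipeline:

```python
import csv
import io
import sqlite3

# Raw "extracted" data with the kinds of problems the transform step fixes:
# inconsistent casing, stray whitespace, and a missing value.
raw_csv = """name,city,amount
 Alice ,NEW YORK,120.50
Bob,chicago,
Carol, Boston ,75.00
"""

# Extract: parse the CSV into dictionaries.
rows = list(csv.DictReader(io.StringIO(raw_csv)))

# Transform: trim whitespace, normalize city names, default missing amounts to 0.
clean = [
    {
        "name": r["name"].strip(),
        "city": r["city"].strip().title(),
        "amount": float(r["amount"] or 0),
    }
    for r in rows
]

# Load: write the cleaned rows into a staging table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (name TEXT, city TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (:name, :city, :amount)", clean)

# Validate: a simple quality check before the data would land in a warehouse.
row_count, total = conn.execute("SELECT COUNT(*), SUM(amount) FROM sales").fetchone()
```

In production the same three stages are typically scheduled and monitored by an orchestrator rather than run as one script.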
Create and share live code, equations, visualizations, and explanatory text, both in a single document and in a web browser with Jupyter

Key Features:
- Learn how to use Jupyter 5.x features such as cell tagging and attractive table styles
- Leverage big data tools and datasets with different Python packages
- Explore multiple-user Jupyter Notebook servers

Book Description: The Jupyter Notebook
allows you to create and share documents that contain live code,
equations, visualizations, and explanatory text. The Jupyter
Notebook system is extensively used in domains such as data
cleaning and transformation, numerical simulation, statistical
modeling, and machine learning. Learning Jupyter 5 will help you
get to grips with interactive computing using real-world examples.
The book starts with a detailed overview of the Jupyter Notebook
system and its installation in different environments. Next, you
will learn to integrate the Jupyter system with different
programming languages such as R, Python, Java, JavaScript, and
Julia, and explore various versions and packages that are
compatible with the Notebook system. Moving ahead, you will master
interactive widgets and namespaces and work with Jupyter in a
multi-user mode. By the end of this book, you will have used
Jupyter with a big dataset and be able to apply all the
functionalities you've explored throughout the book. You will also
have learned all about the Jupyter Notebook and be able to start
performing data transformation, numerical simulation, and data
visualization.

What you will learn:
- Install and run the Jupyter Notebook system on your machine
- Use programming languages such as R, Python, Julia, and JavaScript with the Jupyter Notebook
- Use interactive widgets to manipulate and visualize data in real time
- Start sharing your Notebook with colleagues
- Invite your colleagues to work with you on the same Notebook
- Organize your Notebook using Jupyter namespaces
- Access big data in Jupyter for dealing with large datasets using Spark

Who this book is for: Learning Jupyter 5 is for developers, data scientists, machine
learning users, and anyone working on data analysis or data science
projects across different teams. Data science professionals will
also find this book useful for performing technical and scientific
computing collaboratively.
A problem-solution guide to tackling various NLP tasks using Java open source libraries and cloud-based solutions

Key Features:
- Perform simple-to-complex NLP text processing tasks using modern Java libraries
- Extract relationships between different text complexities using a problem-solution approach
- Utilize cloud-based APIs to perform machine translation operations

Book Description: Natural Language Processing (NLP) has become one of the
prime technologies for processing very large amounts of
unstructured data from disparate information sources. This book
includes a wide set of recipes and quick methods that solve
challenges in text syntax, semantics, and speech tasks. At the
beginning of the book, you'll learn important NLP techniques, such
as identifying parts of speech, tagging words, and analyzing word
semantics. You will learn how to perform lexical analysis and use
machine learning techniques to speed up NLP operations. With
independent recipes, you will explore techniques for customizing
your existing NLP engines/models using Java libraries such as
OpenNLP and the Stanford NLP library. You will also learn how to
use NLP processing features from cloud-based sources, including
Google and Amazon's AWS. You will master core tasks, such as
stemming, lemmatization, part-of-speech tagging, and named entity
recognition. You will also learn about sentiment analysis, semantic
text similarity, language identification, machine translation, and
text summarization. By the end of this book, you will be ready to
work as a professional NLP practitioner, using a problem-solution
approach to analyze text at the level of documents, sentences, and
word semantics.

What you will learn:
- Explore how to use tokenizers in NLP processing
- Implement NLP techniques in machine learning and deep learning applications
- Identify sentences within the text and learn how to train specialized NER models
- Learn how to classify documents and perform sentiment analysis
- Find semantic similarities between text elements and extract text from a variety of sources
- Preprocess text from a variety of data sources
- Learn how to identify and translate languages

Who this book is for: This book is for data scientists, NLP
engineers, and machine learning developers who want to perform
their work on linguistic applications faster using popular
libraries on the JVM. This book will help you build
real-world NLP applications using a recipe-based approach. Prior
knowledge of Natural Language Processing basics and Java
programming is expected.
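Although the cookbook works in Java with OpenNLP and the Stanford NLP library, the core ideas of tokenization and stemming can be sketched in a few lines of plain Python; the regex tokenizer and suffix-stripping rules below are deliberate simplifications, not those libraries' actual algorithms:

```python
import re

def tokenize(text):
    """Split text into word tokens (a simplified stand-in for an NLP tokenizer)."""
    return re.findall(r"[A-Za-z']+", text.lower())

def naive_stem(token):
    """Strip a few common English suffixes (a toy version of stemming)."""
    for suffix in ("ing", "ies", "es", "ed", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

sentence = "The runners were running quickly through the cities"
tokens = [naive_stem(t) for t in tokenize(sentence)]
```

Real stemmers (Porter, Snowball) apply ordered rule sets with conditions on what remains after stripping, which is why library output differs from this toy version.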
The general theme of this book is to present innovative
psychometric modeling and methods. In particular, this book
includes research and successful examples of modeling techniques
for new data sources from digital assessments, such as eye-tracking
data, hint uses, and process data from game-based assessments. In
addition, innovative psychometric modeling approaches, such as
graphical models, item tree models, network analysis, and cognitive
diagnostic models, are included. Chapters 1, 2, 4 and 6 are about
psychometric models and methods for learning analytics. The first
two chapters focus on advanced cognitive diagnostic models for
tracking learning and the improvement of attribute classification
accuracy. Chapter 4 demonstrates the use of network analysis for
learning analytics. Chapter 6 introduces the conjunctive root
causes model for the understanding of prerequisite skills in
learning. Chapters 3, 5, 8, and 9 are about innovative psychometric
techniques to model process data. Specifically, Chapters 3 and 5
illustrate the usage of generalized linear mixed effect models and
item tree models to analyze eye-tracking data. Chapter 8 discusses
an approach to modeling hint uses and response accuracy in a
learning environment. Chapter 9 demonstrates the identification of
observable outcomes in game-based assessments. Chapters 7 and
10 introduce innovative latent variable modeling approaches,
including the graphical and generalized linear model approach and
the dynamic modeling approach. In summary, the book includes
theoretical, methodological, and applied research and practices
that serve as the foundation for future development. These chapters
provide illustrations of efforts to model and analyze multiple data
sources from digital assessments. As computer-based assessments
emerge and evolve, it is important that researchers expand and
improve the methods for modeling and analyzing new data
sources. This book provides a useful resource for researchers who
are interested in the development of psychometric methods to solve
issues in this digital assessment age.
Build attractive, insightful, and powerful visualizations to gain
quality insights from your data

Key Features:
- Master Matplotlib for data visualization
- Customize basic plots to make and deploy figures in cloud environments
- Explore recipes to design various data visualizations from simple bar charts to advanced 3D plots

Book Description: Matplotlib provides a large library of customizable
plots, along with a comprehensive set of backends. Matplotlib 3.0
Cookbook is your hands-on guide to exploring the world of
Matplotlib, and covers the most effective plotting packages for
Python 3.7. With the help of this cookbook, you'll be able to
tackle any problem you might come across while designing
attractive, insightful data visualizations. With the help of over
150 recipes, you'll learn how to develop plots related to business
intelligence, data science, and engineering disciplines with highly
detailed visualizations. Once you've familiarized yourself with the
fundamentals, you'll move on to developing professional dashboards
with a wide variety of graphs and sophisticated grid layouts in 2D
and 3D. You'll annotate and add rich text to the plots, enabling
the creation of a business storyline. In addition to this, you'll
learn how to save figures and animations in various formats for
downstream deployment, followed by extending the functionality
offered by various internal and third-party toolkits, such as
axisartist, axes_grid1, Cartopy, and Seaborn. By the end of this
book, you'll be able to create high-quality customized plots and
deploy them on the web and on supported GUI applications such as
Tkinter, Qt 5, and wxPython by implementing real-world use cases
and examples.

What you will learn:
- Develop simple to advanced data visualizations in Matplotlib
- Use the pyplot API to quickly develop and deploy different plots
- Use object-oriented APIs for maximum flexibility when customizing figures
- Develop interactive plots with animation and widgets
- Use maps for geographical plotting
- Enrich your visualizations using embedded text and mathematical expressions
- Embed Matplotlib plots into other GUIs used for developing applications
- Use toolkits such as axisartist, axes_grid1, and Cartopy to extend the base functionality of Matplotlib

Who this book is for: The Matplotlib 3.0 Cookbook is for
you if you are a data analyst, data scientist, or Python developer
looking for quick recipes for a multitude of visualizations. This
book is also for those who want to build variations of interactive
visualizations.
Get well-versed with traditional as well as modern natural language
processing concepts and techniques

Key Features:
- Perform various NLP tasks to build linguistic applications using Python libraries
- Understand, analyze, and generate text to provide accurate results
- Interpret human language using various NLP concepts, methodologies, and tools

Book Description: Natural Language Processing (NLP) is the
subfield of computational linguistics that enables computers to
understand, process, and analyze text. This book caters to the
unmet demand for hands-on training of NLP concepts and provides
exposure to real-world applications along with a solid theoretical
grounding. This book starts by introducing you to the field of NLP
and its applications, along with the modern Python libraries that
you'll use to build your NLP-powered apps. With the help of
practical examples, you'll learn how to build reasonably
sophisticated NLP applications, and cover various methodologies and
challenges in deploying NLP applications in the real world. You'll
cover key NLP tasks such as text classification, semantic
embedding, sentiment analysis, machine translation, and developing
a chatbot using machine learning and deep learning techniques. The
book will also help you discover how machine learning techniques
play a vital role in making your linguistic apps smart. Every
chapter is accompanied by examples of real-world applications to
help you build impressive NLP applications of your own. By the end
of this NLP book, you'll be able to work with language data, use
machine learning to identify patterns in text, and get acquainted
with the advancements in NLP.

What you will learn:
- Understand how NLP powers modern applications
- Explore key NLP techniques to build your natural language vocabulary
- Transform text data into mathematical data structures and learn how to improve text mining models
- Discover how various neural network architectures work with natural language data
- Get the hang of building sophisticated text processing models using machine learning and deep learning
- Check out state-of-the-art architectures that have revolutionized research in the NLP domain

Who this book is for: This NLP Python book
is for anyone looking to learn NLP's theoretical and practical
aspects alike. It starts with the basics and gradually covers
advanced concepts to make it easy to follow for readers with
varying levels of NLP proficiency. This comprehensive guide will
help you develop a thorough understanding of the NLP methodologies
for building linguistic applications; however, working knowledge of
the Python programming language and high-school-level mathematics is
expected.
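One of the first steps such a book covers, transforming text into mathematical data structures, can be sketched with a standard-library bag-of-words encoding; the two example documents are invented for illustration:

```python
from collections import Counter

docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
]

# Build a fixed vocabulary shared across the corpus.
vocab = sorted({word for doc in docs for word in doc.split()})

def bag_of_words(doc):
    """Map a document to a count vector over the shared vocabulary."""
    counts = Counter(doc.split())
    return [counts[word] for word in vocab]

vectors = [bag_of_words(doc) for doc in docs]
```

Once documents are fixed-length vectors like these, standard machine learning models can consume them; libraries add refinements such as TF-IDF weighting on top of the same idea.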
Ask questions of your data and gain insights to make better
business decisions using the open source business intelligence
tool, Metabase

Key Features:
- Deploy Metabase applications to let users across your organization interact with it
- Learn to create data visualizations, charts, reports, and dashboards with the help of a variety of examples
- Understand how to embed Metabase into your website and send out reports automatically using email and Slack

Book Description: Metabase is an open source business intelligence
tool that helps you use data to answer questions about your
business. This book will give you a detailed introduction to using
Metabase in your organization to get the most value from your data.
You'll start by installing and setting up Metabase on your local
computer. You'll then progress to handling the administration
aspect of Metabase by learning how to configure and deploy
Metabase, manage accounts, and execute administrative tasks such as
adding users and creating permissions and metadata. Complete with
examples and detailed instructions, this book shows you how to
create different visualizations, charts, and dashboards to gain
insights from your data. As you advance, you'll learn how to share
the results with peers in your organization and cover
production-related aspects such as embedding Metabase and auditing
performance. Throughout the book, you'll explore the entire data
analytics process, from connecting your data sources and visualizing
data to creating dashboards and daily reporting. By the
end of this book, you'll be ready to implement Metabase as an
integral tool in your organization.

What you will learn:
- Explore different types of databases and find out how to connect them to Metabase
- Deploy and host Metabase securely using Amazon Web Services
- Use Metabase's user interface to filter and aggregate data on single and multiple tables
- Become a Metabase admin by learning how to add users and create permissions
- Answer critical questions for your organization by using the Notebook editor and writing SQL queries
- Use the search functionality to search through tables, dashboards, and metrics

Who this book is for: This book is for
business analysts, data analysts, data scientists, and other
professionals who want to become well-versed with business
intelligence and analytics using Metabase. This book will also
appeal to anyone who wants to understand their data to extract
meaningful insights with the help of practical examples. A basic
understanding of data handling and processing is necessary to get
started with this book.
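Under the hood, every Metabase question compiles down to a query like the one below; this standard-library sqlite3 sketch (with a made-up orders table) shows the kind of filtered aggregation the query builder produces:

```python
import sqlite3

# A tiny orders table standing in for a data source connected to Metabase.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "East", 50.0), (2, "West", 80.0), (3, "East", 40.0), (4, "West", 20.0)],
)

# The shape of query a Metabase question builds for you:
# total sales per region, for orders above a threshold.
query = """
    SELECT region, SUM(total) AS sales
    FROM orders
    WHERE total > 25
    GROUP BY region
    ORDER BY sales DESC
"""
results = conn.execute(query).fetchall()
```

Writing the same query by hand in Metabase's SQL editor, or letting the Notebook editor generate it, yields the same result set ready to chart.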
Add a touch of data analytics to your healthcare systems and get
insightful outcomes

Key Features:
- Perform healthcare analytics with Python and SQL
- Build predictive models on real healthcare data with pandas and scikit-learn
- Use analytics to improve healthcare performance

Book Description: In recent years, machine learning
technologies and analytics have been widely utilized across the
healthcare sector. Healthcare Analytics Made Simple bridges the gap
between practising doctors and data scientists. It equips data
scientists to work with healthcare data and gain better insights
from it in order to improve healthcare
outcomes. This book is a complete overview of machine learning for
healthcare analytics, briefly describing the current healthcare
landscape, machine learning algorithms, and Python and SQL
programming languages. The step-by-step instructions teach you how
to obtain real healthcare data and perform descriptive, predictive,
and prescriptive analytics using popular Python packages such as
pandas and scikit-learn. The latest research results in disease
detection and healthcare image analysis are reviewed. By the end of
this book, you will understand how to use Python for healthcare
data analysis, how to import, collect, clean, and refine data from
electronic health record (EHR) surveys, and how to make predictive
models with this data through real-world algorithms and code
examples.

What you will learn:
- Gain valuable insight into healthcare incentives, finances, and legislation
- Discover the connection between machine learning and healthcare processes
- Use SQL and Python to analyze data
- Measure healthcare quality and provider performance
- Identify features and attributes to build successful healthcare models
- Build predictive models using real-world healthcare data
- Become an expert in predictive modeling with structured clinical data
- See what lies ahead for healthcare analytics

Who this book is for: Healthcare Analytics Made Simple is
for you if you are a developer who has a working knowledge of
Python or a related programming language, although you are new to
healthcare or predictive modeling with healthcare data. Clinicians
interested in analytics and healthcare computing will also benefit
from this book. This book can also serve as a textbook for students
enrolled in an introductory course on machine learning for
healthcare.
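Descriptive analytics of the kind the book starts with can be sketched without any third-party packages; the de-identified records below are invented for illustration, not real EHR data:

```python
import statistics

# Hypothetical, de-identified EHR-style records: (age, readmitted within 30 days).
patients = [
    (34, False), (51, True), (67, True), (45, False),
    (72, True), (29, False), (58, False), (63, True),
]

# Descriptive analytics: summarize the cohort before any modeling.
ages = [age for age, _ in patients]
readmission_rate = sum(1 for _, r in patients if r) / len(patients)

summary = {
    "n": len(patients),
    "mean_age": statistics.mean(ages),
    "readmission_rate": readmission_rate,
}
```

On real data the same summaries would come from pandas, and the predictive step would fit a scikit-learn model to features like these.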
Perform supervised and unsupervised machine learning and learn advanced techniques such as training neural networks

Key Features:
- Train your own models for effective prediction using the high-level Keras API
- Perform supervised and unsupervised machine learning and learn advanced techniques such as training neural networks
- Get acquainted with some new practices introduced in TensorFlow 2.0 Alpha

Book Description: TensorFlow is one of the most popular machine
learning frameworks in Python. With this book, you will improve
your knowledge of some of the latest TensorFlow features and will
be able to perform supervised and unsupervised machine learning and
also train neural networks. After giving you an overview of what's
new in TensorFlow 2.0 Alpha, the book moves on to setting up your
machine learning environment using the TensorFlow library. You will
perform popular supervised machine learning tasks using techniques
such as linear regression, logistic regression, and clustering. You
will get familiar with unsupervised learning for autoencoder
applications. The book will also show you how to train effective
neural networks using straightforward examples in a variety of
different domains. By the end of the book, you will have been
exposed to a large variety of machine learning and neural network
TensorFlow techniques.

What you will learn:
- Use tf.keras for fast prototyping, building, and training deep learning neural network models
- Easily convert your TensorFlow 1.12 applications to TensorFlow 2.0-compatible files
- Use TensorFlow to tackle traditional supervised and unsupervised machine learning applications
- Understand image recognition techniques using TensorFlow
- Perform neural style transfer for image hybridization using a neural network
- Code a recurrent neural network in TensorFlow to perform text-style generation

Who this book is for: Data scientists, machine learning developers, and deep learning
enthusiasts looking to quickly get started with TensorFlow 2 will
find this book useful. Some Python programming experience with
version 3.6 or later, along with familiarity with Jupyter
notebooks, will be an added advantage. Exposure to machine learning
and neural network techniques would also be helpful.
With the advancement of technology in the modern world, the
constant influx of data, information, and computing can become
droning and one-dimensional. Re-examining these methods through a
different approach helps highlight broader perspectives and further
understanding. Applying abstract and holistic methods, such as
nature and visualization, to computing technologies is a developing
area of study but has yet to be empirically researched. Graphical
Thinking for Science and Technology Through Knowledge Visualization
provides emerging research exploring the theoretical and practical
aspects of implementing visuals and images within data and
information. The text contains projects, examples of students'
solutions, and invites the reader to apply graphical thinking.
Featuring coverage on a broad range of topics such as nanoscale
structures, computer graphics, and data visualization, this book is
ideally designed for software engineers, instructional designers,
researchers, scientists, artists, marketers, media professionals,
and students seeking current research on applying artistic
solutions within information and computing.
Develop blockchain applications with step-by-step instructions, working examples, and helpful recommendations

Key Features:
- Understand blockchain technology from the cybersecurity perspective
- Develop cybersecurity solutions with Ethereum blockchain technology
- Understand real-world deployment of blockchain-based applications

Book Description: Blockchain technology
is being welcomed as one of the most revolutionary and impactful
innovations of today. Blockchain technology first appeared in
the world's most popular digital currency, Bitcoin, but has now
changed the outlook of several organizations and empowered them to
use it even for storage and transfer of value. This book will start
by introducing you to the common cyberthreat landscape and common
attacks such as malware, phishing, insider threats, and DDoS. The
next set of chapters will help you to understand the workings of
Blockchain technology, Ethereum and Hyperledger architecture and
how they fit into the cybersecurity ecosystem. These chapters will
also help you to write your first distributed application on
Ethereum Blockchain and the Hyperledger Fabric framework. Later,
you will learn about the security triad and its adaptation with
Blockchain. The last set of chapters will take you through the core
concepts of cybersecurity, such as DDoS protection, PKI-based
identity, 2FA, and DNS security. You will learn how Blockchain
plays a crucial role in transforming cybersecurity solutions.
Toward the end of the book, you will also encounter some real-world
deployment examples of Blockchain in security cases, and also
understand the short-term challenges and future of cybersecurity
with Blockchain.

What you will learn:
- Understand the cyberthreat landscape
- Learn about Ethereum and Hyperledger Blockchain
- Program Blockchain solutions
- Build Blockchain-based apps for 2FA and DDoS protection
- Develop Blockchain-based PKI solutions and apps for storing DNS entries
- Understand the challenges and the future of cybersecurity and Blockchain

Who this book is for: The book is targeted towards
security professionals, or any stakeholder dealing with
cybersecurity who wants to understand the next level of securing
infrastructure using Blockchain. A basic understanding of Blockchain
can be an added advantage.
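The tamper-evidence property that makes blockchains interesting for security can be sketched in a few lines of standard-library Python; this toy hash chain (not Ethereum or Hyperledger) shows why altering one block invalidates the whole chain:

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents, which include the previous block's hash."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    """Append a block linked to the current tip of the chain."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "data": data, "prev_hash": prev}
    block["hash"] = block_hash({k: v for k, v in block.items() if k != "hash"})
    chain.append(block)

def is_valid(chain):
    """Verify that no block has been altered and all links are intact."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_block(chain, "payment: alice -> bob 10")
add_block(chain, "payment: bob -> carol 4")
ok_before = is_valid(chain)
chain[0]["data"] = "payment: alice -> bob 1000"   # simulated tampering
ok_after = is_valid(chain)
```

Production blockchains add consensus, signatures, and distribution on top of this linking scheme, but the integrity check rests on the same idea.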
Learn the fundamentals of clean, effective Python coding and build
the practical skills to tackle your own software development or
data science projects

Key Features:
- Build key Python skills with engaging development tasks and challenging activities
- Implement useful algorithms and write programs to solve real-world problems
- Apply Python in realistic data science projects and create simple machine learning models

Book Description: Have you always wanted to
learn Python, but never quite known how to start? More applications
than we realize are being developed using Python because it is easy
to learn, read, and write. You can now start learning the language
quickly and effectively with the help of this interactive tutorial.
The Python Workshop starts by showing you how to correctly apply
Python syntax to write simple programs, and how to use appropriate
Python structures to store and retrieve data. You'll see how to
handle files, deal with errors, and use classes and methods to
write concise, reusable, and efficient code. As you advance, you'll
understand how to use the standard library, debug code to
troubleshoot problems, and write unit tests to validate application
behavior. You'll gain insights into using the pandas and NumPy
libraries for analyzing data, and the graphical libraries of
Matplotlib and Seaborn to create impactful data visualizations. By
focusing on entry-level data science, you'll build your practical
Python skills in a way that mirrors real-world development.
Finally, you'll discover the key steps in building and using simple
machine learning algorithms. By the end of this Python book, you'll
have the knowledge, skills and confidence to creatively tackle your
own ambitious projects with Python.

What you will learn:
- Write clean and well-commented code that is easy to maintain
- Automate essential day-to-day tasks with Python scripts
- Debug logical errors and handle exceptions in your programs
- Explore data science fundamentals and create engaging visualizations
- Get started with predictive machine learning
- Keep your development process bug-free with automated testing

Who this book is for: This book is designed
for anyone who is new to the Python programming language. Whether
you're an aspiring software engineer or data scientist, or are just
curious about learning how to code with Python, this book is for
you. No prior programming experience is required.
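The error-handling and testing habits the workshop emphasizes look like this in practice; a minimal sketch with invented helper functions:

```python
def safe_divide(a, b):
    """Divide a by b, handling the division-by-zero case explicitly."""
    try:
        return a / b
    except ZeroDivisionError:
        return None

def mean(values):
    """Average a list, raising a clear error for bad input."""
    if not values:
        raise ValueError("mean() requires at least one value")
    return sum(values) / len(values)

# Unit-test style checks validating the behavior above.
assert safe_divide(10, 4) == 2.5
assert safe_divide(1, 0) is None
try:
    mean([])
except ValueError as exc:
    assert "at least one value" in str(exc)
```

In a real project the same assertions would live in a test module run by `unittest` or `pytest` rather than inline.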
Leverage the power of various Google Cloud AI Services by building
a smart web application using the MEAN stack

Key Features:
- Start working with the Google Cloud Platform and the AI services it offers
- Build smart web applications by combining the power of Google Cloud AI services and the MEAN stack
- Build a web-based dashboard of smart applications that perform language processing, translation, and computer vision on the cloud

Book Description: Cognitive services are
the new way of adding intelligence to applications and services.
Now we can use Artificial Intelligence as a service that can be
consumed by any application or other service, to add smartness and
make the end result more practical and useful. Google Cloud AI
enables you to consume Artificial Intelligence within your
applications, from a REST API. Text, video and speech analysis are
among the powerful machine learning features that can be used. This
book is the easiest way to get started with the Google Cloud AI
services suite and open up the world of smarter applications. This
book will help you build a Smart Exchange, a forum application that
will let you upload videos, images and perform text to speech
conversions and translation services. You will use the power of
Google Cloud AI Services to make this simple forum application
smart by validating the images, videos, and text that users upload,
ensuring the content follows the forum's standards without human
curator involvement. You will learn how to work with the Vision API, Video
Intelligence API, Speech Recognition API, Cloud Language Process,
and Cloud Translation API services to make your application
smarter. By the end of this book, you will have a strong
understanding of working with Google Cloud AI Services, and be well
on the way to building smarter applications. What you will learn
Understand Google Cloud Platform and its Cloud AI services Explore
the Google ML Services Work with an Angular 5 MEAN stack
application Integrate Vision API, Video Intelligence API for
computer vision Be ready for conversational experiences with the
Speech Recognition API, Cloud Natural Language API, and Cloud
Translation API services Build a smart web application that uses
the power of Google Cloud AI services to make apps smarter Who this
book is for: This book is ideal for data professionals and web
developers who want to use the power of Google Cloud AI services in
their projects without going through the pain of mastering machine
learning for images, videos, and text. Some familiarity with
the Google Cloud Platform will be helpful.
Perform advanced dashboard, visualization, and analytical
techniques with Tableau Desktop, Tableau Prep, and Tableau Server
Key Features Unique problem-solution approach to aid effective
business decision-making Create interactive dashboards and
implement powerful business intelligence solutions Includes best
practices on using Tableau with modern cloud analytics services
Book Description: Tableau has been one of the most popular business
intelligence solutions in recent times, thanks to its powerful and
interactive data visualization capabilities. Tableau 2019.x
Cookbook is full of useful recipes from industry experts, who will
help you master Tableau skills and learn each aspect of Tableau's
ecosystem. This book is enriched with features such as Tableau
extracts, Tableau advanced calculations, geospatial analysis, and
building dashboards. It will guide you through exciting data
manipulation, storytelling, advanced filtering, expert
visualization, and forecasting techniques using real-world
examples. From basic functionalities of Tableau to complex
deployment on Linux, you will cover it all. Moreover, you will
learn advanced features of Tableau using R, Python, and various
APIs. You will learn how to prepare data for analysis using the
latest Tableau Prep. In the concluding chapters, you will learn how
Tableau fits into the modern world of analytics and works with modern
data platforms such as Snowflake and Redshift. In addition, you
will learn about the best practices of integrating Tableau with ETL
using Matillion ETL. By the end of the book, you will be ready to
tackle business intelligence challenges using Tableau's features.
What you will learn Understand the basic and advanced skills of
Tableau Desktop Implement best practices of visualization,
dashboard, and storytelling Learn advanced analytics with the use
of built-in statistics Deploy the multi-node server on Linux and
Windows Use Tableau with big data sources such as Hadoop, Athena,
and Spectrum Use Tableau's built-in functions for forecasting along
with R packages Combine, shape, and clean data for analysis using
Tableau Prep Extend Tableau's functionalities with REST API and
R/Python Who this book is for: Tableau 2019.x Cookbook is for data
analysts, data engineers, BI developers, and users who are looking
for quick solutions to common and not-so-common problems faced
while using Tableau products. Put each recipe into practice, using
the latest offerings of Tableau 2019.x to solve real-world
analytics and business intelligence challenges. Some understanding
of BI concepts and Tableau is required.
A practical guide for solving complex data processing challenges by
applying the best optimization techniques in Apache Spark. Key
Features Learn about the core concepts and the latest developments
in Apache Spark Master writing efficient big data applications with
Spark's built-in modules for SQL, Streaming, Machine Learning and
Graph analysis Get introduced to a variety of optimizations based
on real-world experience Book Description: Apache Spark is a flexible
framework that allows processing of batch and real-time data. Its
unified engine has made it quite popular for big data use cases.
This book will help you get started with Apache Spark 2.0, one of
the most popular big data processing frameworks, and write big data
applications for a variety of use cases. Although this book is
intended to help you get started with Apache Spark, it also focuses
on explaining the core concepts. This practical guide provides a quick
start to the Spark 2.0 architecture and its components. It teaches
you how to set up Spark on your local machine. As we move ahead,
you will be introduced to resilient distributed datasets (RDDs) and
DataFrame APIs, and their corresponding transformations and
actions. Then, we move on to the life cycle of a Spark application
and learn about the techniques used to debug slow-running
applications. You will also go through Spark's built-in modules for
SQL, streaming, machine learning, and graph analysis. Finally, the
book will lay out the best practices and optimization techniques
that are key for writing efficient Spark applications. By the end
of this book, you will have a sound fundamental understanding of
the Apache Spark framework and you will be able to write and
optimize Spark applications. What you will learn Learn core
concepts such as RDDs, DataFrames, transformations, and more Set up
a Spark development environment Choose the right APIs for your
applications Understand Spark's architecture and the execution flow
of a Spark application Explore built-in modules for SQL, streaming,
ML, and graph analysis Optimize your Spark job for better
performance Who this book is for: If you are a big data enthusiast
who loves processing huge amounts of data, this book is for you. If
you are a data engineer looking for the best optimization
techniques for your Spark applications, then you will find this
book helpful. This book also helps data scientists who want to
implement their machine learning algorithms in Spark. You need to
have a basic understanding of at least one programming language,
such as Scala, Python, or Java.
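The distinction between Spark's lazy transformations and eager actions, mentioned above, can be sketched without Spark itself using Python generators, which are also lazily evaluated; the data and operations here are illustrative only:

```python
# Plain-Python sketch of Spark's lazy-evaluation model (no Spark required):
# transformations (map/filter) build up a lazy pipeline, and only an
# action (like collect) triggers the actual computation.

data = range(1, 6)  # stand-in for a distributed dataset

# "Transformations": nothing is computed yet; generators are lazy
squared = (x * x for x in data)
evens = (x for x in squared if x % 2 == 0)

# "Action": analogous to collect() in Spark; materializes the pipeline
result = list(evens)
print(result)  # [4, 16]
```

In real Spark code the same shape appears as `rdd.map(...).filter(...).collect()`, and the laziness is what lets the engine plan and optimize the whole pipeline before executing it.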
Explore and implement deep learning to solve various real-world
problems using modern R libraries such as TensorFlow, MXNet, H2O,
and Deepnet Key Features Understand deep learning algorithms and
architectures using R and determine which algorithm is best suited
for a specific problem Improve models using parameter tuning,
feature engineering, and ensembling Apply advanced neural network
models such as deep autoencoders and generative adversarial
networks (GANs) across different domains Book Description: Deep
learning enables efficient and accurate learning from a massive
amount of data. This book will help you overcome a number of
challenges using various deep learning algorithms and architectures
with R programming. This book starts with a brief overview of
machine learning and deep learning and how to build your first
neural network. You'll understand the architecture of various deep
learning algorithms and their applicable fields, learn how to build
deep learning models, optimize hyperparameters, and evaluate model
performance. Various deep learning applications in image
processing, natural language processing (NLP), recommendation
systems, and predictive analytics will also be covered. Later
chapters will show you how to tackle recognition problems such as
image recognition and signal detection, programmatically summarize
documents, conduct topic modeling, and forecast stock market
prices. Toward the end of the book, you will learn the common
applications of GANs and how to build a face generation model using
them. Finally, you'll get to grips with using reinforcement
learning and deep reinforcement learning to solve various
real-world problems. By the end of this deep learning book, you
will be able to build and deploy your own deep learning
applications using appropriate frameworks and algorithms. What you
will learn Design a feedforward neural network to see how the
activation function computes an output Create an image recognition
model using convolutional neural networks (CNNs) Prepare data,
decide on the number of hidden layers and neurons, and train your
model with the backpropagation algorithm Apply text cleaning
techniques to remove
uninformative text using NLP Build, train, and evaluate a GAN model
for face generation Understand the concept and implementation of
reinforcement learning in R Who this book is for: This book is for
data scientists, machine learning engineers, and deep learning
developers who are familiar with machine learning and are looking
to enhance their knowledge of deep learning using practical
examples. Anyone interested in increasing the efficiency of their
machine learning applications and exploring various options in R
will also find this book useful. Basic knowledge of machine
learning techniques and working knowledge of the R programming
language is expected.
This book examines how cloud-based services challenge the current
application of antitrust and privacy laws in the EU and the US. The
author looks at the elements of data centers, the way information
is organized, and how antitrust, competition and privacy laws in
the US and the EU regulate cloud-based services and their market
practices. She discusses how platform interoperability can be a
driver of incremental innovation and the consequences of not
promoting radical innovation. She evaluates applications of
predictive analysis based on big data, as well as the
privacy-invasive conduct that can derive from it. She looks at the
way antitrust and
privacy laws approach consumer protection and how lawmakers can
reach more balanced outcomes by understanding the technical
background of cloud-based services.
Learn how to analyze data using Python models with the help of
real-world use cases and guidance from industry experts Key
Features Get to grips with data analysis by studying use cases from
different fields Develop your critical thinking skills by following
tried-and-true data analysis techniques Learn how to use conclusions
from data analyses to make better business decisions Book
Description: Businesses today operate online and generate data almost
continuously. While not all data in its raw form may seem useful,
if processed and analyzed correctly, it can provide you with
valuable hidden insights. The Data Analysis Workshop will help you
learn how to discover these hidden patterns in your data, to
analyze them, and leverage the results to help transform your
business. The book begins by taking you through the use case of a
bike rental shop. You'll be shown how to correlate data, plot
histograms, and analyze temporal features. As you progress, you'll
learn how to plot data for a hydraulic system using the Seaborn and
Matplotlib libraries, and explore a variety of use cases that show
you how to join and merge databases, prepare data for analysis, and
handle imbalanced data. By the end of the book, you'll have learned
different data analysis techniques, including hypothesis testing,
correlation, and null-value imputation, and will have become a
confident data analyst. What you will learn Get to grips with the
fundamental concepts and conventions of data analysis Understand
how different algorithms help you to analyze the data effectively
Determine the variation between groups of data using hypothesis
testing Visualize your data correctly using appropriate plot types
Use correlation techniques to uncover the relationship
between variables Find hidden patterns in data using advanced
techniques and strategies Who this book is for: The Data Analysis
Workshop is for programmers who already know how to code in Python
and want to use it to perform data analysis. If you are looking to
gain practical experience in data science with Python, this book is
for you.
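The correlation and null-value-imputation techniques listed above can be sketched in plain Python; the bike-rental-style numbers below are made up purely for illustration:

```python
import math
import statistics

# Made-up sample: daily temperature (in Celsius) vs. bike rentals,
# with one missing rental count to impute
temperature = [14.0, 18.5, 21.0, 25.5, 30.0]
rentals = [120, 150, None, 260, 310]

# Null-value imputation: replace the missing entry with the mean
# of the observed values
observed = [r for r in rentals if r is not None]
imputed = [statistics.mean(observed) if r is None else r for r in rentals]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(temperature, imputed)
print(round(r, 3))  # a strong positive correlation, close to 1
```

Mean imputation is the simplest strategy the book's null-value-imputation topic covers; in practice you would compare it against alternatives such as median imputation before trusting the downstream correlation.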
Learn the most powerful and primary programming language for
writing smart contracts and find out how to write, deploy, and test
smart contracts in Ethereum. Key Features Get up and running with
the Solidity programming language Build Ethereum smart contracts
with Solidity as your scripting language Learn to test and deploy
smart contracts to your private blockchain Book
Description: Solidity is a contract-oriented language whose syntax is
highly influenced by JavaScript, and is designed to compile code
for the Ethereum Virtual Machine. Solidity Programming Essentials
will be your guide to understanding Solidity programming and building
smart contracts for Ethereum and blockchain from the ground up. We
begin with a brief run-through of blockchain, Ethereum, and their
most important concepts and components. You will learn how to
install all the necessary tools to write, test, and debug Solidity
contracts on Ethereum. Then, you will explore the layout of a
Solidity source file and work with the different data types. The
next set of recipes will help you work with operators, control
structures, and data structures while building your smart
contracts. We take you through function calls, return types,
function modifiers, and recipes in object-oriented programming with
Solidity. Learn all you can about event logging and exception
handling, as well as testing and debugging smart contracts. By the
end of this book, you will be able to write, deploy, and test smart
contracts in Ethereum. This book will bring forth the essence of
writing contracts using Solidity and also help you develop Solidity
skills in no time. What you will learn Learn the basics and
foundational concepts of Solidity and Ethereum Explore the Solidity
language and its uniqueness in depth Create new accounts and submit
transactions to the blockchain Get to know the complete language in
detail to write smart contracts Learn about major tools to develop
and deploy smart contracts Write defensive code using exception
handling and error checking Understand Truffle basics and the
debugging process Who this book is for: This book is for anyone who
would like to get started with Solidity Programming for developing
an Ethereum smart contract. No prior knowledge of EVM is required.