Leverage the full potential of SAS to get unique, actionable
insights from your data
Key Features: Build enterprise-class data
solutions using SAS and become well-versed in SAS programming Work
with different data structures, and run SQL queries to manipulate
your data Explore essential concepts and techniques with practical
examples to confidently pass the SAS certification exam
Book Description: SAS is one of the leading enterprise tools in the world
today when it comes to data management and analysis. It enables the
fast and easy processing of data and helps you gain valuable
business insights for effective decision-making. This book will
serve as a comprehensive guide that will prepare you for the SAS
certification exam. After a quick overview of the SAS architecture
and components, the book will take you through the different
approaches to importing and reading data from different sources
using SAS. You will then cover SAS Base and 4GL, understanding data
management and analysis, along with exploring SAS functions for
data manipulation and transformation. Next, you'll discover SQL
procedures and get up to speed on creating and validating queries.
In the concluding chapters, you'll learn all about data
visualization, right from creating bar charts and sample geographic
maps through to assigning patterns and formats. In addition to
this, the book will focus on macro programming and its advanced
aspects. By the end of this book, you will be well versed in SAS
programming and have the skills you need to easily handle and
manage your data-related problems in SAS.
What you will learn:
Explore a variety of SAS modules and packages for efficient data
analysis Use SAS 4GL functions to manipulate, merge, sort, and
transform data Gain useful insights into advanced PROC SQL options
in SAS to interact with data Get to grips with SAS Macro and define
your own macros to share data Discover the different graphical
libraries to shape and visualize data with Apply the SAS Output
Delivery System to prepare detailed reports
Who this book is for: Budding or experienced data professionals who want to get
started with SAS will benefit from this book. Those looking to
prepare for the SAS certification exam will also find this book to
be a useful resource. Some understanding of basic data management
concepts will help you get the most out of this book.
Comprehensive recipes to give you valuable insights on
Transformers, Reinforcement Learning, and more
Key Features: Deep
Learning solutions from Kaggle Masters and Google Developer Experts
Get to grips with the fundamentals including variables, matrices,
and data sources Learn advanced techniques to make your algorithms
faster and more accurate
Book Description: The independent recipes in
Machine Learning Using TensorFlow Cookbook will teach you how to
perform complex data computations and gain valuable insights into
your data. Dive into recipes on training models, model evaluation,
sentiment analysis, regression analysis, artificial neural
networks, and deep learning - each using Google's machine learning
library, TensorFlow. This cookbook covers the fundamentals of the
TensorFlow library, including variables, matrices, and various data
sources. You'll discover real-world implementations of Keras and
TensorFlow and learn how to use estimators to train linear models
and boosted trees, both for classification and regression. Explore
the practical applications of a variety of deep learning
architectures, such as recurrent neural networks and Transformers,
and see how they can be used to solve computer vision and natural
language processing (NLP) problems. With the help of this book, you
will be proficient in using TensorFlow, understand deep learning
from the basics, and be able to implement machine learning
algorithms in real-world scenarios.
What you will learn: Take
TensorFlow into production Implement and fine-tune Transformer
models for various NLP tasks Apply reinforcement learning
algorithms using the TF-Agents framework Understand linear
regression techniques and use Estimators to train linear models
Execute neural networks and improve predictions on tabular data
Master convolutional neural networks and recurrent neural networks
through practical recipes
Who this book is for: If you are a data
scientist or a machine learning engineer, and you want to skip
detailed theoretical explanations in favor of building
production-ready machine learning models using TensorFlow, this
book is for you. Basic familiarity with Python, linear algebra,
statistics, and machine learning is necessary to make the most out
of this book.
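As a rough illustration of the kind of Keras workflow the description refers to (this is not code from the book; the synthetic data and hyperparameters are assumptions), a linear model can be trained in a few lines:

```python
# Minimal sketch: fit y = 3x + 2 with a single Dense unit (ordinary linear regression).
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=(256, 1)).astype("float32")
y = 3.0 * x + 2.0 + rng.normal(scale=0.1, size=(256, 1)).astype("float32")

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer="sgd", loss="mse")
model.fit(x, y, epochs=200, verbose=0)

w, b = model.layers[0].get_weights()
print(f"learned weight ~ {w[0][0]:.2f}, bias ~ {b[0]:.2f}")  # should approach 3 and 2
```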
Explore expert techniques such as advanced indexing and high
availability to build scalable, reliable, and fault-tolerant
database applications using PostgreSQL 13
Key Features: Master
advanced PostgreSQL 13 concepts with the help of real-world
datasets and examples Leverage PostgreSQL's indexing features to
fine-tune the performance of your queries Extend PostgreSQL's
functionalities to suit your organization's needs with minimal
effort
Book Description: Thanks to its reliability, robustness, and
high performance, PostgreSQL has become one of the most advanced
open source databases on the market. This updated fourth edition
will help you understand PostgreSQL administration and how to build
dynamic database solutions for enterprise apps with the latest
release of PostgreSQL, including designing both physical and
technical aspects of the system architecture with ease. Starting
with an introduction to the new features in PostgreSQL 13, this
book will guide you in building efficient and fault-tolerant
PostgreSQL apps. You'll explore advanced PostgreSQL features, such
as logical replication, database clusters, performance tuning,
advanced indexing, monitoring, and user management, to manage and
maintain your database. You'll then work with the PostgreSQL
optimizer, configure PostgreSQL for high speed, and move from
Oracle to PostgreSQL. The book also covers transactions, locking,
and indexes, and shows you how to improve performance with query
optimization. You'll also focus on how to manage network security
and work with backups and replication while exploring useful
PostgreSQL extensions that optimize the performance of large
databases. By the end of this PostgreSQL book, you'll be able to
get the most out of your database by executing advanced
administrative tasks.
What you will learn: Get well versed with
advanced SQL functions in PostgreSQL 13 Get to grips with
administrative tasks such as log file management and monitoring
Work with stored procedures and manage backup and recovery Employ
replication and failover techniques to reduce data loss Perform
database migration from Oracle to PostgreSQL with ease Replicate
PostgreSQL database systems to create backups and scale your
database Manage and improve server security to protect your data
Troubleshoot your PostgreSQL instance to find solutions to common
and not-so-common problems
Who this book is for: This database
administration book is for PostgreSQL developers and database
administrators and professionals who want to implement advanced
functionalities and master complex administrative tasks with
PostgreSQL 13. Prior experience in PostgreSQL and familiarity with
the basics of database administration will assist with
understanding key concepts covered in the book.
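PostgreSQL tuning itself happens in SQL and psql rather than Python, but as a hedged sketch of the indexing and query-plan workflow the description mentions (the connection string, table, and column names are assumptions, not examples from the book):

```python
# Create an index and ask the planner how a lookup would be executed.
import psycopg2

conn = psycopg2.connect("dbname=appdb user=app password=secret host=localhost")
conn.autocommit = True
with conn.cursor() as cur:
    # B-tree index on an assumed orders(customer_id) column.
    cur.execute("CREATE INDEX IF NOT EXISTS idx_orders_customer ON orders (customer_id)")
    # EXPLAIN ANALYZE runs the query and reports the chosen plan and timings.
    cur.execute("EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = %s", (42,))
    for (line,) in cur.fetchall():
        print(line)
conn.close()
```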
One-stop solution for NLP practitioners, ML developers, and data
scientists to build effective NLP systems that can perform
complex real-world tasks
Key Features: Apply deep learning algorithms and techniques such as BiLSTMs, CRFs, BPE, and more using
TensorFlow 2 Explore applications like text generation,
summarization, weakly supervised labelling, and more Read cutting-edge material with seminal papers provided in the GitHub repository
with full working code
Book Description: Recently, there have been
tremendous advances in NLP, and we are now moving from research
labs into practical applications. This book comes with a perfect
blend of both the theoretical and practical aspects of trending and
complex NLP techniques. The book is focused on innovative
applications in the field of NLP, language generation, and dialogue
systems. It helps you apply the concepts of pre-processing text
using techniques such as tokenization, parts of speech tagging, and
lemmatization using popular libraries such as Stanford NLP and
SpaCy. You will build Named Entity Recognition (NER) from scratch
using Conditional Random Fields and Viterbi Decoding on top of
RNNs. The book covers key emerging areas such as generating text
for use in sentence completion and text summarization, bridging
images and text by generating captions for images, and managing
dialogue aspects of chatbots. You will learn how to apply transfer
learning and fine-tuning using TensorFlow 2. Further, it covers
practical techniques that can simplify the labelling of textual
data. The book also has a working code that is adaptable to your
use cases for each tech piece. By the end of the book, you will
have an advanced knowledge of the tools, techniques and deep
learning architecture used to solve complex NLP problems.
What you will learn: Grasp important pre-steps in building NLP applications
like POS tagging Use transfer and weakly supervised learning using
libraries like Snorkel Do sentiment analysis using BERT Apply
encoder-decoder NN architectures and beam search for summarizing
texts Use Transformer models with attention to bring images and
text together Build apps that generate captions and answer
questions about images using custom Transformers Use advanced
TensorFlow techniques like learning rate annealing, custom layers,
and custom loss functions to build the latest DeepNLP models
Who this book is for: This is not an introductory book and assumes the
reader is familiar with the basics of NLP and has fundamental Python
skills, as well as basic knowledge of machine learning and
undergraduate-level calculus and linear algebra. The readers who
can benefit the most from this book include intermediate ML
developers who are familiar with the basics of supervised learning
and deep learning techniques and professionals who already use
TensorFlow/Python for purposes such as data science, ML, research,
analysis, etc.
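As a small, hedged illustration of the preprocessing steps the description mentions (tokenization, POS tagging, and lemmatization), here is a spaCy sketch; it assumes the small English model has been installed with `python -m spacy download en_core_web_sm` and is not code from the book:

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Transformers changed how we build NLP systems.")

for token in doc:
    # surface form, part-of-speech tag, and lemma for each token
    print(f"{token.text:<12} {token.pos_:<6} {token.lemma_}")

# Named entities found by the statistical model
print([(ent.text, ent.label_) for ent in doc.ents])
```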
Reinforce your understanding of data science and data analysis from
a statistical perspective to extract meaningful insights from your
data using Python programming
Key Features: Work your way through the entire data analysis pipeline with statistical concerns in mind
to make reasonable decisions Understand how various data science
algorithms function Build a solid foundation in statistics for data
science and machine learning using Python-based examples
Book Description: Statistics remains the backbone of modern analysis tasks,
helping you to interpret the results produced by data science
pipelines. This book is a detailed guide covering the math and
various statistical methods required for undertaking data science
tasks. The book starts by showing you how to preprocess data and
inspect distributions and correlations from a statistical
perspective. You'll then get to grips with the fundamentals of
statistical analysis and apply its concepts to real-world datasets.
As you advance, you'll find out how statistical concepts emerge
from different stages of data science pipelines, understand the
summary of datasets in the language of statistics, and use it to
build a solid foundation for robust data products such as
explanatory models and predictive models. Once you've uncovered the
working mechanism of data science algorithms, you'll cover
essential concepts for efficient data collection, cleaning, mining,
visualization, and analysis. Finally, you'll implement statistical
methods in key machine learning tasks such as classification,
regression, tree-based methods, and ensemble learning. By the end
of this Essential Statistics for Non-STEM Data Analysts book,
you'll have learned how to build and present a self-contained,
statistics-backed data product to meet your business goals.
What you will learn: Find out how to grab and load data into an analysis
environment Perform descriptive analysis to extract meaningful
summaries from data Discover probability, parameter estimation,
hypothesis tests, and experiment design best practices Get to grips
with resampling and bootstrapping in Python Delve into statistical
tests with variance analysis, time series analysis, and A/B test
examples Understand the statistics behind popular machine learning
algorithms Answer questions on statistics for data scientist
interviews
Who this book is for: This book is an entry-level guide
for data science enthusiasts, data analysts, and anyone starting
out in the field of data science and looking to learn the essential
statistical concepts with the help of simple explanations and
examples. If you're a developer or student with a non-mathematical
background, you'll find this book useful. Working knowledge of the
Python programming language is required.
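As an illustrative sketch of two techniques the description mentions, a two-sample t-test and a bootstrap confidence interval can be written in a few lines of Python; the synthetic samples and sample sizes are assumptions, not material from the book:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=10.0, scale=2.0, size=200)
group_b = rng.normal(loc=10.5, scale=2.0, size=200)

# Classical two-sample t-test for a difference in means.
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Bootstrap: resample with replacement and inspect the spread of the mean difference.
diffs = []
for _ in range(5000):
    res_a = rng.choice(group_a, size=group_a.size, replace=True)
    res_b = rng.choice(group_b, size=group_b.size, replace=True)
    diffs.append(res_b.mean() - res_a.mean())
low, high = np.percentile(diffs, [2.5, 97.5])
print(f"95% bootstrap CI for the mean difference: ({low:.2f}, {high:.2f})")
```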
Kickstart your NLP journey by exploring BERT and its variants such
as ALBERT, RoBERTa, DistilBERT, VideoBERT, and more with Hugging
Face's transformers library
Key Features: Explore the encoder and
decoder of the transformer model Become well-versed with BERT along
with ALBERT, RoBERTa, and DistilBERT Discover how to pre-train and
fine-tune BERT models for several NLP tasks
Book Description: BERT (Bidirectional Encoder Representations from Transformers) has
revolutionized the world of natural language processing (NLP) with
promising results. This book is an introductory guide that will
help you get to grips with Google's BERT architecture. With a
detailed explanation of the transformer architecture, this book
will help you understand how the transformer's encoder and decoder
work. You'll explore the BERT architecture by learning how the BERT
model is pre-trained and how to use pre-trained BERT for downstream
tasks by fine-tuning it for NLP tasks such as sentiment analysis
and text summarization with the Hugging Face transformers library.
As you advance, you'll learn about different variants of BERT such
as ALBERT, RoBERTa, and ELECTRA, and look at SpanBERT, which is
used for NLP tasks like question answering. You'll also cover
simpler and faster BERT variants based on knowledge distillation
such as DistilBERT and TinyBERT. The book takes you through MBERT,
XLM, and XLM-R in detail and then introduces you to Sentence-BERT,
which is used for obtaining sentence representation. Finally,
you'll discover domain-specific BERT models such as BioBERT and
ClinicalBERT, and discover an interesting variant called VideoBERT.
By the end of this BERT book, you'll be well-versed with using BERT
and its variants for performing practical NLP tasks.
What you will learn: Understand the transformer model from the ground up Find out
how BERT works and pre-train it using masked language model (MLM)
and next sentence prediction (NSP) tasks Get hands-on with BERT by
learning to generate contextual word and sentence embeddings
Fine-tune BERT for downstream tasks Get to grips with ALBERT,
RoBERTa, ELECTRA, and SpanBERT models Get the hang of the BERT
models based on knowledge distillation Understand cross-lingual
models such as XLM and XLM-R Explore Sentence-BERT, VideoBERT, and
BART
Who this book is for: This book is for NLP professionals and
data scientists looking to simplify NLP tasks to enable efficient
language understanding using BERT. A basic understanding of NLP
concepts and deep learning is required to get the best out of this
book.
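As a minimal, hedged sketch of the kind of task the description covers, the Hugging Face transformers library can run sentiment analysis with a pre-trained model in a couple of lines (the pipeline downloads a default checkpoint on first use; this is an illustration, not code from the book):

```python
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("I loved every chapter of this book."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```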
Understand data analysis pipelines using machine learning
algorithms and techniques with this practical guide
Key Features:
Prepare and clean your data to use it for exploratory analysis,
data manipulation, and data wrangling Discover supervised,
unsupervised, probabilistic, and Bayesian machine learning methods
Get to grips with graph processing and sentiment analysis
Book Description: Data analysis enables you to generate value from small
and big data by discovering new patterns and trends, and Python is
one of the most popular tools for analyzing a wide variety of data.
With this book, you'll get up and running using Python for data
analysis by exploring the different phases and methodologies used
in data analysis and learning how to use modern libraries from the
Python ecosystem to create efficient data pipelines. Starting with
the essential statistical and data analysis fundamentals using
Python, you'll perform complex data analysis and modeling, data
manipulation, data cleaning, and data visualization using
easy-to-follow examples. You'll then understand how to conduct time
series analysis and signal processing using ARMA models. As you
advance, you'll get to grips with smart processing and data
analytics using machine learning algorithms such as regression,
classification, Principal Component Analysis (PCA), and clustering.
In the concluding chapters, you'll work on real-world examples to
analyze textual and image data using natural language processing
(NLP) and image analytics techniques, respectively. Finally, the
book will demonstrate parallel computing using Dask. By the end of
this data analysis book, you'll be equipped with the skills you
need to prepare data for analysis and create meaningful data
visualizations for forecasting values from data.
What you will learn: Explore data science and its various process models Perform
data manipulation using NumPy and pandas for aggregating, cleaning,
and handling missing values Create interactive visualizations using
Matplotlib, Seaborn, and Bokeh Retrieve, process, and store data in
a wide range of formats Understand data preprocessing and feature
engineering using pandas and scikit-learn Perform time series
analysis and signal processing using sunspot cycle data Analyze
textual data and image data to perform advanced analysis Get up to
speed with parallel computing using Dask
Who this book is for: This
book is for data analysts, business analysts, statisticians, and
data scientists looking to learn how to use Python for data
analysis. Students and academic faculty will also find this book
useful for learning and teaching Python data analysis using a
hands-on approach. A basic understanding of math and working
knowledge of the Python programming language will help you get
started with this book.
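As a rough sketch of the cleaning, aggregation, and visualization flow the description outlines (the tiny DataFrame and column names are made up for illustration and are not from the book):

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({
    "region": ["north", "south", "north", "south", "north"],
    "sales": [120.0, None, 98.0, 143.0, 101.0],  # one missing value
})

# Handle the missing value, then aggregate by group.
df["sales"] = df["sales"].fillna(df["sales"].median())
summary = df.groupby("region")["sales"].agg(["mean", "sum"])
print(summary)

# Simple visualization of the aggregated values.
summary["mean"].plot(kind="bar", title="Mean sales by region")
plt.tight_layout()
plt.show()
```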
Explore and implement deep learning to solve various real-world
problems using modern R libraries such as TensorFlow, MXNet, H2O,
and Deepnet
Key Features: Understand deep learning algorithms and
architectures using R and determine which algorithm is best suited
for a specific problem Improve models using parameter tuning,
feature engineering, and ensembling Apply advanced neural network
models such as deep autoencoders and generative adversarial
networks (GANs) across different domains
Book Description: Deep
learning enables efficient and accurate learning from a massive
amount of data. This book will help you overcome a number of
challenges using various deep learning algorithms and architectures
with R programming. This book starts with a brief overview of
machine learning and deep learning and how to build your first
neural network. You'll understand the architecture of various deep
learning algorithms and their applicable fields, learn how to build
deep learning models, optimize hyperparameters, and evaluate model
performance. Various deep learning applications in image
processing, natural language processing (NLP), recommendation
systems, and predictive analytics will also be covered. Later
chapters will show you how to tackle recognition problems such as
image recognition and signal detection, programmatically summarize
documents, conduct topic modeling, and forecast stock market
prices. Toward the end of the book, you will learn the common
applications of GANs and how to build a face generation model using
them. Finally, you'll get to grips with using reinforcement
learning and deep reinforcement learning to solve various
real-world problems. By the end of this deep learning book, you
will be able to build and deploy your own deep learning
applications using appropriate frameworks and algorithms.
What you will learn: Design a feedforward neural network to see how the
activation function computes an output Create an image recognition
model using convolutional neural networks (CNNs) Prepare data,
decide on hidden layers and neurons, and train your model with the
backpropagation algorithm Apply text cleaning techniques to remove
uninformative text using NLP Build, train, and evaluate a GAN model
for face generation Understand the concept and implementation of
reinforcement learning in R
Who this book is for: This book is for
data scientists, machine learning engineers, and deep learning
developers who are familiar with machine learning and are looking
to enhance their knowledge of deep learning using practical
examples. Anyone interested in increasing the efficiency of their
machine learning applications and exploring various options in R
will also find this book useful. Basic knowledge of machine
learning techniques and working knowledge of the R programming
language is expected.
Get to grips with automated machine learning and adopt a hands-on
approach to AutoML implementation and associated methodologies
Key Features: Get up to speed with AutoML using OSS, Azure, AWS, GCP, or
any platform of your choice Eliminate mundane tasks in data
engineering and reduce human errors in machine learning models Find
out how you can make machine learning accessible for all users to
promote decentralized processes
Book Description: Every machine
learning engineer deals with systems that have hyperparameters, and
the most basic task in automated machine learning (AutoML) is to
automatically set these hyperparameters to optimize performance.
The latest deep neural networks have a wide range of
hyperparameters for their architecture, regularization, and
optimization, which can be customized effectively to save time and
effort. This book reviews the underlying techniques of automated
feature engineering, model and hyperparameter tuning,
gradient-based approaches, and much more. You'll discover different
ways of implementing these techniques in open source tools and then
learn to use enterprise tools for implementing AutoML in three
major cloud service providers: Microsoft Azure, Amazon Web Services
(AWS), and Google Cloud Platform. As you progress, you'll explore
the features of cloud AutoML platforms by building machine learning
models using AutoML. The book will also show you how to develop
accurate models by automating time-consuming and repetitive tasks
in the machine learning development lifecycle. By the end of this
machine learning book, you'll be able to build and deploy AutoML
models that are not only accurate, but also increase productivity,
allow interoperability, and minimize feature engineering tasks.
What you will learn: Explore AutoML fundamentals, underlying
methods, and techniques Assess AutoML aspects such as algorithm
selection, auto featurization, and hyperparameter tuning in an
applied scenario Find out the difference between cloud AutoML platforms and open source software (OSS) Implement AutoML in enterprise
cloud to deploy ML models and pipelines Build explainable AutoML
pipelines with transparency Understand automated feature
engineering and time series forecasting Automate data science
modeling tasks to implement ML solutions easily and focus on more
complex problems
Who this book is for: Citizen data scientists,
machine learning developers, artificial intelligence enthusiasts,
or anyone looking to automatically build machine learning models
using the features offered by open source tools, Microsoft Azure
Machine Learning, AWS, and Google Cloud Platform will find this
book useful. Beginner-level knowledge of building ML models is
required to get the best out of this book. Prior experience in
using enterprise cloud platforms is beneficial.
Discover techniques to summarize the characteristics of your data
using PyPlot, NumPy, SciPy, and pandas
Key Features: Understand the
fundamental concepts of exploratory data analysis using Python Find
missing values in your data and identify the correlation between
different variables Practice graphical exploratory analysis
techniques using Matplotlib and the Seaborn Python package
Book Description: Exploratory Data Analysis (EDA) is an approach to data
analysis that involves the application of diverse techniques to
gain insights into a dataset. This book will help you gain
practical knowledge of the main pillars of EDA - data cleaning,
data preparation, data exploration, and data visualization. You'll
start by performing EDA using open source datasets and perform
simple to advanced analyses to turn data into meaningful insights.
You'll then learn various descriptive statistical techniques to
describe the basic characteristics of data and progress to
performing EDA on time-series data. As you advance, you'll learn
how to implement EDA techniques for model development and
evaluation and build predictive models to visualize results. Using
Python for data analysis, you'll work with real-world datasets,
understand data, summarize its characteristics, and visualize it
for business intelligence. By the end of this EDA book, you'll have
developed the skills required to carry out a preliminary
investigation on any dataset, yield insights into data, present
your results with visual aids, and build a model that correctly
predicts future outcomes.
What you will learn: Import, clean, and
explore data to perform preliminary analysis using powerful Python
packages Identify and transform erroneous data using different data
wrangling techniques Explore the use of multiple regression to
describe non-linear relationships Discover hypothesis testing and
explore techniques of time-series analysis Understand and interpret
results obtained from graphical analysis Build, train, and optimize
predictive models to estimate results Perform complex EDA
techniques on open source datasets
Who this book is for: This EDA
book is for anyone interested in data analysis, especially
students, statisticians, data analysts, and data scientists. The
practical concepts presented in this book can be applied in various
disciplines to enhance decision-making processes with data analysis
and synthesis. Fundamental knowledge of Python programming and
statistical concepts is all you need to get started with this book.
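A minimal EDA sketch along the lines the description suggests might look as follows; the CSV path is hypothetical, and the `numeric_only` keyword assumes a reasonably recent pandas release (this is an illustration, not code from the book):

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.read_csv("data.csv")   # hypothetical file
print(df.describe())           # central tendency and spread per numeric column
print(df.isna().sum())         # missing values per column

# Correlation between numeric variables, visualized as a heatmap.
sns.heatmap(df.corr(numeric_only=True), annot=True, cmap="coolwarm")
plt.show()
```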
Social Network Analysis: Methods and Examples prepares social
science students to conduct their own social network analysis (SNA)
by covering basic methodological tools along with illustrative
examples from various fields. This innovative book takes a
conceptual rather than a mathematical approach as it discusses the
connection between what SNA methods have to offer and how those
methods are used in research design, data collection, and analysis.
Four substantive applications chapters provide examples from
politics, work and organizations, mental and physical health, and
crime and terrorism studies.
This title is part of the Springer Book Archives digitization project, which covers publications dating back to the publisher's beginnings in 1842. Through this archive, the publisher makes sources available for historical research and for research into the history of the disciplines; these must always be considered in their historical context. This title was published before 1945 and is therefore not promoted by the publisher in view of its political and ideological orientation, which was typical of its time.
Work through practical recipes to learn how to solve complex
machine learning and deep learning problems using Python
Key Features: Get up and running with artificial intelligence in no time
using hands-on problem-solving recipes Explore popular Python
libraries and tools to build AI solutions for images, text, sounds,
and more Implement NLP, reinforcement learning, deep learning,
GANs, Monte-Carlo tree search, and much more
Book Description: Artificial intelligence (AI) plays an integral role in
automating problem-solving. This involves predicting and
classifying data and training agents to execute tasks successfully.
This book will teach you how to solve complex problems with the
help of independent and insightful recipes ranging from the
essentials to advanced methods that have just come out of research.
Artificial Intelligence with Python Cookbook starts by showing you
how to set up your Python environment and taking you through the
fundamentals of data exploration. Moving ahead, you'll be able to
implement heuristic search techniques and genetic algorithms. In
addition to this, you'll apply probabilistic models, constraint
optimization, and reinforcement learning. As you advance through
the book, you'll build deep learning models for text, images,
video, and audio, and then delve into algorithmic bias, style
transfer, music generation, and AI use cases in the healthcare and
insurance industries. Throughout the book, you'll learn about a
variety of tools for problem-solving and gain the knowledge needed
to effectively approach complex problems. By the end of this book
on AI, you will have the skills you need to write AI and machine
learning algorithms, test them, and deploy them for production.
What you will learn: Implement data preprocessing steps and optimize
model hyperparameters Delve into representational learning with
adversarial autoencoders Use active learning, recommenders,
knowledge embedding, and SAT solvers Get to grips with
probabilistic modeling with TensorFlow probability Run object
detection, text-to-speech conversion, and text and music generation
Apply swarm algorithms, multi-agent systems, and graph networks Go
from proof of concept to production by deploying models as
microservices Understand how to use modern AI in practice
Who this book is for: This AI machine learning book is for Python developers,
data scientists, machine learning engineers, and deep learning
practitioners who want to learn how to build artificial
intelligence solutions with easy-to-follow recipes. You'll also
find this book useful if you're looking for state-of-the-art
solutions to perform different machine learning tasks in various
use cases. Basic working knowledge of the Python programming
language and machine learning concepts will help you to work with
code effectively in this book.
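As a toy illustration of one technique the description lists (a genetic algorithm), the following self-contained sketch maximizes a simple function with selection, crossover, and mutation; the fitness function and parameters are assumptions, not recipes from the book:

```python
import random

def fitness(x: float) -> float:
    return -(x - 3.0) ** 2   # maximum at x = 3

population = [random.uniform(-10, 10) for _ in range(50)]
for _ in range(100):
    # Selection: keep the fittest half.
    population.sort(key=fitness, reverse=True)
    parents = population[:25]
    # Crossover (average of two parents) plus Gaussian mutation.
    children = []
    while len(children) < 25:
        a, b = random.sample(parents, 2)
        children.append((a + b) / 2 + random.gauss(0, 0.1))
    population = parents + children

print(f"best solution ~ {max(population, key=fitness):.3f}")  # expect about 3.0
```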
A beginner's guide to storing, managing, and analyzing data with
the updated features of Elastic 7.0
Key Features: Gain access to new
features and updates introduced in Elastic Stack 7.0 Grasp the
fundamentals of Elastic Stack including Elasticsearch, Logstash,
and Kibana Explore useful tips for using Elastic Cloud and
deploying Elastic Stack in production environments
Book Description: The Elastic Stack is a powerful combination of tools for
techniques such as distributed search, analytics, logging, and
visualization of data. Elastic Stack 7.0 encompasses new features
and capabilities that will enable you to find unique insights into
analytics using these techniques. This book will give you a
fundamental understanding of what the stack is all about, and help
you use it efficiently to build powerful real-time data processing
applications. The first few sections of the book will help you
understand how to set up the stack by installing tools, and
exploring their basic configurations. You'll then get up to speed
with using Elasticsearch for distributed searching and analytics,
Logstash for logging, and Kibana for data visualization. As you
work through the book, you will discover the technique of creating
custom plugins using Kibana and Beats. This is followed by coverage
of the Elastic X-Pack, a useful extension for effective security
and monitoring. You'll also find helpful tips on how to use Elastic
Cloud and deploy Elastic Stack in production environments. By the
end of this book, you'll be well versed with the fundamental
Elastic Stack functionalities and the role of each component in the
stack to solve different data processing problems.
What you will learn: Install and configure an Elasticsearch architecture Solve the
full-text search problem with Elasticsearch Discover powerful
analytics capabilities through aggregations using Elasticsearch
Build a data pipeline to transfer data from a variety of sources
into Elasticsearch for analysis Create interactive dashboards for
effective storytelling with your data using Kibana Learn how to
secure, monitor, and use Elastic Stack's alerting and reporting
capabilities Take applications to an on-premise or cloud-based
production environment with Elastic Stack
Who this book is for: This
book is for entry-level data professionals, software engineers,
e-commerce developers, and full-stack developers who want to learn
about Elastic Stack and how the real-time processing and search
engine works for business analytics and enterprise search
applications. Previous experience with Elastic Stack is not
required; however, knowledge of data warehousing and database
concepts will be helpful.
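As a hedged sketch of the indexing and search workflow the description covers, here is a minimal example with the official Elasticsearch Python client (assuming a 7.x client and a local node at localhost:9200; the index name and document fields are made up):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Index a document into an assumed "books" index.
es.index(index="books", body={"title": "Learning Elastic Stack 7.0", "year": 2019})
es.indices.refresh(index="books")  # make it searchable immediately

# Full-text search on the title field.
result = es.search(index="books", body={"query": {"match": {"title": "elastic"}}})
for hit in result["hits"]["hits"]:
    print(hit["_source"])
```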
Learn through hands-on exercises covering a variety of topics
including data connections, analytics, and dashboards to
effectively prepare for the Tableau Desktop Certified Associate
exam
Key Features: Prepare for the Tableau Desktop Certified
Associate exam with the help of tips and techniques shared by
experts Implement Tableau's advanced analytical capabilities such
as forecasting Delve into advanced Tableau features and explore
best practices for building dashboards
Book Description: The Tableau
Desktop Certified Associate exam measures your knowledge of Tableau
Desktop and your ability to work with data and data visualization
techniques. This book will help you to become well-versed in
Tableau software and use its business intelligence (BI) features to
solve BI and analytics challenges. With the help of this book,
you'll explore the authors' success stories and their experience
with Tableau. You'll start by understanding the importance of
Tableau certification and the different certification exams, along
with covering the exam format, Tableau basics, and best practices
for preparing data for analysis and visualization. The book then builds your knowledge of advanced Tableau topics, such as table calculations, for solving problems. You'll learn to effectively
visualize geographic data using vector maps. Later, you'll discover
the analytics capabilities of Tableau by learning how to use
features such as forecasting. Finally, you'll understand how to
build and customize dashboards, while ensuring they convey
information effectively. Every chapter has examples and tests to
reinforce your learning, along with mock tests in the last section.
By the end of this book, you'll be able to efficiently prepare for
the certification exam with the help of mock tests, detailed
explanations, and expert advice from the authors.
What you will learn: Apply Tableau best practices to analyze and visualize data
Use Tableau to visualize geographic data using vector maps Create
charts to gain productive insights into data and make
quality-driven decisions Implement advanced analytics techniques to
identify and forecast key values Prepare customized table
calculations to compute specific values Answer questions based on
the Tableau Desktop Certified Associate exam with the help of mock
tests
Who this book is for: This Tableau certification book is for
business analysts, BI professionals, and data analysts who want to
become certified Tableau Desktop Associates and solve a range of
data science and business intelligence problems using this
example-packed guide. Some experience in Tableau Desktop is
expected to get the most out of this book.
An expert guide to implementing fast, secure, and scalable
decentralized applications that work with thousands of users in
real time
Key Features: Implement advanced features of the Ethereum
network to build powerful decentralized applications Build smart
contracts on different domains using the programming techniques of
Solidity and Vyper Explore the architecture of the Ethereum network to understand advanced use cases of blockchain development
Book Description: Ethereum is one of the most commonly used platforms for
building blockchain applications. It's a decentralized platform for
applications that can run exactly as programmed without being
affected by fraud, censorship, or third-party interference. This
book will give you a deep understanding of how blockchain works so
that you can discover the entire ecosystem, core components, and
its implementations. You will get started by understanding how to
configure and work with various Ethereum protocols for developing
dApps. Next, you will learn to code and create powerful smart
contracts that scale with Solidity and Vyper. You will then explore
the building blocks of the dApps architecture, and gain insights on
how to create your own dApp through a variety of real-world
examples. The book will even guide you on how to deploy your dApps
on multiple Ethereum instances with the required best practices and
techniques. The next few chapters will delve into advanced topics
such as building advanced smart contracts and multi-page frontends
using Ethereum blockchain. You will also focus on implementing
machine learning techniques to build decentralized autonomous
applications, in addition to covering several use cases across a
variety of domains, such as social media and e-commerce. By the end
of this book, you will have the expertise you need to build
decentralized autonomous applications confidently.
What you will learn: Apply scalability solutions to dApps with Plasma and state
channels Understand the important metrics of blockchain for
analyzing and determining its state Develop a decentralized web
application using React.js and Node.js Create oracles with Node.js
to provide external data to smart contracts Get to grips with using
Etherscan and block explorers for various transactions Explore
web3.js, Solidity, and Vyper for dApps communication Deploy apps
with multiple Ethereum instances including TestRPC, private chain,
test chain, and mainnet
Who this book is for: This book is for anyone
who wants to build fast, highly secure, and transactional
decentralized applications. If you are an Ethereum developer
looking to perfect your existing skills in building powerful
blockchain applications, then this book is for you. Basic knowledge
of Ethereum and blockchain is necessary to understand the concepts
covered in this book.
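The book's tooling centers on web3.js; as a loose Python analogue and purely for illustration, a web3.py (v6) sketch that connects to a node and reads basic chain data could look like this (the node URL and address are assumptions):

```python
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))  # e.g. a local test node
print("connected:", w3.is_connected())
print("latest block:", w3.eth.block_number)

# Query an account balance (zero address used as a placeholder) and convert wei to ether.
balance_wei = w3.eth.get_balance("0x0000000000000000000000000000000000000000")
print("balance:", Web3.from_wei(balance_wei, "ether"), "ETH")
```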
Understand, design, and create cognitive applications using
Watson's suite of APIs.
Key Features: Develop your skills and work
with IBM Watson APIs to build efficient and powerful cognitive apps
Learn how to build smart apps to carry out different sets of
activities using real-world use cases Get well versed with the best
practices of IBM Watson and implement them in your daily work
Book Description: Cognitive computing is rapidly infusing every aspect of
our lives, riding on three important fields: data science, machine
learning (ML), and artificial intelligence (AI). It allows
computing systems to learn and keep on improving as the amount of
data in the system grows. This book introduces readers to a whole
new paradigm of computing - a paradigm that is totally different
from the conventional computing of the Information Age. You will
learn the concepts of ML, deep learning (DL), neural networks, and
AI through the set of APIs provided by IBM Watson. This book will
help you build your own applications to understand, plan, and solve
problems, and analyze them as per your needs. You will learn about
various domains of cognitive computing, such as NLP, voice
processing, computer vision, emotion analytics, and conversational
systems, using different IBM Watson APIs. From this, the reader
will learn what ML is, and what goes on in the background to make
computers "do their magic," as well as where these concepts have
been applied. Having achieved this, the readers will then be able
to embark on their journey of learning, researching, and applying
the concept in their respective fields.
What you will learn: Get
well versed with the APIs provided by IBM Watson on IBM Cloud Learn
ML, AI, cognitive computing, and neural network principles
Implement smart applications in fields such as healthcare,
entertainment, security, and more Understand unstructured content
using cognitive metadata with the help of Natural Language
Understanding Use Watson's APIs to create real-life applications to
realize their capabilities Delve into various domains of cognitive
computing, such as media analytics, embedded deep learning,
computer vision, and more
Who this book is for: This book is for
beginners and novices; having some knowledge about artificial
intelligence and deep learning is an advantage, but not a
prerequisite to benefit from this book. We explain the concept of
deep learning and artificial intelligence through the set of tools
IBM Watson provides.