Leverage the full potential of SAS to get unique, actionable insights from your data.

Key Features:
- Build enterprise-class data solutions using SAS and become well-versed in SAS programming
- Work with different data structures, and run SQL queries to manipulate your data
- Explore essential concepts and techniques with practical examples to confidently pass the SAS certification exam

Book Description:
SAS is one of the leading enterprise tools in the world today when it comes to data management and analysis. It enables the fast and easy processing of data and helps you gain valuable business insights for effective decision-making. This book will serve as a comprehensive guide that will prepare you for the SAS certification exam. After a quick overview of the SAS architecture and components, the book will take you through the different approaches to importing and reading data from different sources using SAS. You will then cover SAS Base and 4GL, understanding data management and analysis, along with exploring SAS functions for data manipulation and transformation. Next, you'll discover SQL procedures and get up to speed on creating and validating queries. In the concluding chapters, you'll learn all about data visualization, right from creating bar charts and sample geographic maps through to assigning patterns and formats. In addition to this, the book will focus on macro programming and its advanced aspects. By the end of this book, you will be well versed in SAS programming and have the skills you need to easily handle and manage your data-related problems in SAS.

What you will learn:
- Explore a variety of SAS modules and packages for efficient data analysis
- Use SAS 4GL functions to manipulate, merge, sort, and transform data
- Gain useful insights into advanced PROC SQL options in SAS to interact with data
- Get to grips with SAS Macro and define your own macros to share data
- Discover the different graphical libraries to shape and visualize data
- Apply the SAS Output Delivery System to prepare detailed reports

Who this book is for:
Budding or experienced data professionals who want to get started with SAS will benefit from this book. Those looking to prepare for the SAS certification exam will also find it a useful resource. Some understanding of basic data management concepts will help you get the most out of this book.

One-stop solution for NLP practitioners, ML developers, and data scientists to build effective NLP systems that can perform complicated real-world tasks.

Key Features:
- Apply deep learning algorithms and techniques such as BiLSTMs, CRFs, BPE, and more using TensorFlow 2
- Explore applications such as text generation, summarization, weakly supervised labelling, and more
- Read cutting-edge material, with seminal papers provided in the GitHub repository with full working code

Book Description:
Recently, there have been tremendous advances in NLP, and we are now moving from research labs into practical applications. This book offers a blend of the theoretical and practical aspects of trending and complex NLP techniques. The book focuses on innovative applications in the field of NLP, language generation, and dialogue systems. It helps you apply the concepts of pre-processing text using techniques such as tokenization, parts-of-speech tagging, and lemmatization using popular libraries such as Stanford NLP and spaCy. You will build Named Entity Recognition (NER) from scratch using Conditional Random Fields and Viterbi decoding on top of RNNs. The book covers key emerging areas such as generating text for use in sentence completion and text summarization, bridging images and text by generating captions for images, and managing the dialogue aspects of chatbots. You will learn how to apply transfer learning and fine-tuning using TensorFlow 2. Further, it covers practical techniques that can simplify the labelling of textual data. The book also provides working code for each technique that is adaptable to your own use cases. By the end of the book, you will have an advanced knowledge of the tools, techniques, and deep learning architectures used to solve complex NLP problems.

What you will learn:
- Grasp important pre-steps in building NLP applications, such as POS tagging
- Use transfer learning and weakly supervised learning with libraries such as Snorkel
- Perform sentiment analysis using BERT
- Apply encoder-decoder NN architectures and beam search for summarizing texts
- Use Transformer models with attention to bring images and text together
- Build apps that generate captions and answer questions about images using custom Transformers
- Use advanced TensorFlow techniques such as learning rate annealing, custom layers, and custom loss functions to build the latest deep NLP models

Who this book is for:
This is not an introductory book; it assumes the reader is familiar with the basics of NLP and has fundamental Python skills, as well as basic knowledge of machine learning and undergraduate-level calculus and linear algebra. The readers who can benefit the most from this book include intermediate ML developers who are familiar with the basics of supervised learning and deep learning techniques, and professionals who already use TensorFlow/Python for purposes such as data science, ML, research, and analysis.
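
Since the description highlights sentiment analysis with BERT-family models via the Hugging Face ecosystem, here is a minimal, hedged sketch of that idea in Python; the specific model checkpoint and sample reviews are assumptions for illustration, not material prescribed by the book.

```python
# A minimal sketch of sentiment analysis with the Hugging Face transformers
# library; the checkpoint below is an illustrative assumption.
from transformers import pipeline

# Downloads a pre-trained, fine-tuned sentiment model on first use.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

reviews = [
    "The plot was predictable, but the acting carried the film.",
    "An absolute waste of two hours.",
]
for review, result in zip(reviews, classifier(reviews)):
    print(f"{result['label']:>8}  {result['score']:.3f}  {review}")
```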

Learn how to gain insights from your data and from machine learning, and become a presentation pro who can create interactive dashboards.

Key Features:
- Enhance your presentation skills by implementing engaging data storytelling and visualization techniques
- Learn the basics of machine learning and easily apply machine learning models to your data
- Improve productivity by automating your data processes

Book Description:
Data Analytics Made Easy is an accessible beginner's guide for anyone working with data. The book interweaves four key elements:

Data visualizations and storytelling - Tired of people not listening to you and ignoring your results? Don't worry; chapters 7 and 8 show you how to enhance your presentations and engage with your managers and co-workers. Learn to create focused content with a well-structured story behind it to captivate your audience.

Automating your data workflows - Improve your productivity by automating your data analysis. This book introduces you to the open-source KNIME Analytics Platform. You'll see how to use this no-code, free-to-use software to create a KNIME workflow of your data processes just by clicking and dragging components.

Machine learning - Data Analytics Made Easy describes popular machine learning approaches in a simplified and visual way before implementing these machine learning models using KNIME. You'll not only be able to understand data scientists' machine learning models; you'll be able to challenge them and build your own.

Creating interactive dashboards - Follow the book's simple methodology to create professional-looking dashboards using Microsoft Power BI, giving users the capability to slice and dice data and drill down into the results.

What you will learn:
- Understand the potential of data and its impact on your business
- Import, clean, transform, and combine data feeds, and automate your processes
- Influence business decisions by learning to create engaging presentations
- Build real-world models to improve profitability, create customer segmentation, automate and improve data reporting, and more
- Create professional-looking and business-centric visuals and dashboards
- Open the lid on the black box of AI and learn about and implement supervised and unsupervised machine learning models

Who this book is for:
This book is for beginners who work with data and those who need to know how to interpret their business/customer data. The book also covers the high-level concepts of data workflows, machine learning, data storytelling, and visualizations, which are useful for managers. No previous math, statistics, or computer science knowledge is required.

Get to grips with building and productionizing end-to-end big data solutions in Azure and learn best practices for working with large datasets.

Key Features:
- Integrate with Azure Synapse Analytics, Cosmos DB, and Azure HDInsight Kafka clusters to scale and analyze your projects and build pipelines
- Use Databricks SQL to run ad hoc queries on your data lake and create dashboards
- Productionize a solution using CI/CD to deploy notebooks and the Azure Databricks service to various environments

Book Description:
Azure Databricks is a unified collaborative platform for performing scalable analytics in an interactive environment. The Azure Databricks Cookbook provides recipes to get hands-on with the analytics process, including ingesting data from various batch and streaming sources and building a modern data warehouse. The book starts by teaching you how to create an Azure Databricks instance using the Azure portal, the Azure CLI, and ARM templates. You'll work with clusters in Databricks and explore recipes for ingesting data from sources including files, databases, and streaming sources such as Apache Kafka and Event Hubs. The book will help you explore all the features supported by Azure Databricks for building powerful end-to-end data pipelines. You'll also find out how to build a modern data warehouse using Delta tables and Azure Synapse Analytics. Later, you'll learn how to write ad hoc queries and extract meaningful insights from the data lake by creating visualizations and dashboards with Databricks SQL. Finally, you'll deploy and productionize a data pipeline, as well as deploy notebooks and the Azure Databricks service, using continuous integration and continuous delivery (CI/CD). By the end of this Azure book, you'll be able to use Azure Databricks to streamline different processes involved in building data-driven apps.

What you will learn:
- Read and write data from and to various Azure resources and file formats
- Build a modern data warehouse with Delta tables and Azure Synapse Analytics
- Explore jobs, stages, and tasks, and see how Spark's lazy evaluation works
- Handle concurrent transactions and learn performance optimization in Delta tables
- Learn Databricks SQL and create real-time dashboards in Databricks SQL
- Integrate Azure DevOps for version control, deployment, and productionizing solutions with CI/CD pipelines
- Discover how to use RBAC and ACLs to restrict data access
- Build an end-to-end data processing pipeline for near real-time data analytics

Who this book is for:
This recipe-based book is for data scientists, data engineers, big data professionals, and machine learning engineers who want to perform data analytics on their applications. Prior experience of working with Apache Spark and Azure is necessary to get the most out of this book.
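
As an illustration of the Delta table workflow the description mentions, here is a minimal, hedged PySpark sketch; the table name and toy data are hypothetical, and on Databricks the SparkSession shown is already provided as `spark` in every notebook.

```python
# A minimal sketch of writing and reading a Delta table from a Databricks
# notebook; the table name and sample rows are hypothetical placeholders.
from pyspark.sql import SparkSession

# On Databricks a SparkSession named `spark` already exists; getOrCreate()
# simply returns it (elsewhere it builds a local session).
spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    [(1, "2021-07-01", 42.0), (2, "2021-07-02", 17.5)],
    ["order_id", "order_date", "amount"],
)

# Persist as a managed Delta table (Delta is the default format on Databricks).
df.write.format("delta").mode("overwrite").saveAsTable("sales_orders")

# Query it back with Spark SQL, e.g. to feed a Databricks SQL dashboard later.
spark.sql(
    "SELECT order_date, SUM(amount) AS total FROM sales_orders GROUP BY order_date"
).show()
```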

Explore and implement deep learning to solve various real-world problems using modern R libraries such as TensorFlow, MXNet, H2O, and Deepnet.

Key Features:
- Understand deep learning algorithms and architectures using R, and determine which algorithm is best suited for a specific problem
- Improve models using parameter tuning, feature engineering, and ensembling
- Apply advanced neural network models such as deep autoencoders and generative adversarial networks (GANs) across different domains

Book Description:
Deep learning enables efficient and accurate learning from massive amounts of data. This book will help you overcome a number of challenges using various deep learning algorithms and architectures with R programming. The book starts with a brief overview of machine learning and deep learning and how to build your first neural network. You'll understand the architecture of various deep learning algorithms and their applicable fields, and learn how to build deep learning models, optimize hyperparameters, and evaluate model performance. Various deep learning applications in image processing, natural language processing (NLP), recommendation systems, and predictive analytics will also be covered. Later chapters show you how to tackle recognition problems such as image recognition and signal detection, programmatically summarize documents, conduct topic modeling, and forecast stock market prices. Toward the end of the book, you will learn the common applications of GANs and how to build a face generation model using them. Finally, you'll get to grips with using reinforcement learning and deep reinforcement learning to solve various real-world problems. By the end of this deep learning book, you will be able to build and deploy your own deep learning applications using appropriate frameworks and algorithms.

What you will learn:
- Design a feedforward neural network to see how the activation function computes an output
- Create an image recognition model using convolutional neural networks (CNNs)
- Prepare data, decide on hidden layers and neurons, and train your model with the backpropagation algorithm
- Apply text cleaning techniques to remove uninformative text using NLP
- Build, train, and evaluate a GAN model for face generation
- Understand the concept and implementation of reinforcement learning in R

Who this book is for:
This book is for data scientists, machine learning engineers, and deep learning developers who are familiar with machine learning and are looking to enhance their knowledge of deep learning using practical examples. Anyone interested in increasing the efficiency of their machine learning applications and exploring various options in R will also find this book useful. Basic knowledge of machine learning techniques and working knowledge of the R programming language are expected.

Kickstart your NLP journey by exploring BERT and its variants such as ALBERT, RoBERTa, DistilBERT, VideoBERT, and more with Hugging Face's transformers library.

Key Features:
- Explore the encoder and decoder of the transformer model
- Become well-versed with BERT along with ALBERT, RoBERTa, and DistilBERT
- Discover how to pre-train and fine-tune BERT models for several NLP tasks

Book Description:
BERT (Bidirectional Encoder Representations from Transformers) has revolutionized the world of natural language processing (NLP) with promising results. This book is an introductory guide that will help you get to grips with Google's BERT architecture. With a detailed explanation of the transformer architecture, this book will help you understand how the transformer's encoder and decoder work. You'll explore the BERT architecture by learning how the BERT model is pre-trained and how to use pre-trained BERT for downstream tasks by fine-tuning it for NLP tasks such as sentiment analysis and text summarization with the Hugging Face transformers library. As you advance, you'll learn about different variants of BERT such as ALBERT, RoBERTa, and ELECTRA, and look at SpanBERT, which is used for NLP tasks like question answering. You'll also cover simpler and faster BERT variants based on knowledge distillation, such as DistilBERT and TinyBERT. The book takes you through MBERT, XLM, and XLM-R in detail and then introduces you to Sentence-BERT, which is used for obtaining sentence representations. Finally, you'll discover domain-specific BERT models such as BioBERT and ClinicalBERT, and explore an interesting variant called VideoBERT. By the end of this BERT book, you'll be well-versed with using BERT and its variants for performing practical NLP tasks.

What you will learn:
- Understand the transformer model from the ground up
- Find out how BERT works and pre-train it using the masked language modeling (MLM) and next sentence prediction (NSP) tasks
- Get hands-on with BERT by learning to generate contextual word and sentence embeddings
- Fine-tune BERT for downstream tasks
- Get to grips with the ALBERT, RoBERTa, ELECTRA, and SpanBERT models
- Get the hang of the BERT models based on knowledge distillation
- Understand cross-lingual models such as XLM and XLM-R
- Explore Sentence-BERT, VideoBERT, and BART

Who this book is for:
This book is for NLP professionals and data scientists looking to simplify NLP tasks to enable efficient language understanding using BERT. A basic understanding of NLP concepts and deep learning is required to get the best out of this book.
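
To make the "contextual word embeddings" idea concrete, here is a minimal, hedged Python sketch using the Hugging Face transformers library with a PyTorch backend; the checkpoint name and sample sentence are illustrative assumptions, not examples taken from the book.

```python
# A minimal sketch of obtaining contextual word embeddings from a pre-trained
# BERT model with the Hugging Face transformers library (PyTorch backend).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("He deposited cash at the bank", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One 768-dimensional vector per (sub)word token, plus [CLS] and [SEP].
print(outputs.last_hidden_state.shape)  # e.g. torch.Size([1, 8, 768])
```
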
Big Data analytics is the complex process of examining big data to
uncover information such as correlations, hidden patterns, trends
and user and customer preferences, to allow organizations and
businesses to make more informed decisions. These methods and
technologies have become ubiquitous in all fields of science,
engineering, business and management due to the rise of data-driven
models as well as data engineering developments using parallel and
distributed computational analytics frameworks, data and algorithm
parallelization, and GPGPU programming. However, there remain
potential issues that need to be addressed to enable big data
processing and analytics in real time. In the first volume of this
comprehensive two-volume handbook, the authors present several
methodologies to support Big Data analytics including database
management, processing frameworks and architectures, data lakes,
query optimization strategies, approaches towards real-time data processing,
data stream analytics, Fog and Edge computing, and Artificial
Intelligence and Big Data. The second volume is dedicated to a wide
range of applications in secure data storage, privacy preservation, Software-Defined Networks (SDN), the Internet of Things (IoT), behaviour analytics, traffic prediction, gender-based classification of e-commerce data, recommender systems, Big Data regression with Apache Spark, visual sentiment analysis, wavelet neural networks on GPUs, stock market movement prediction, and
financial reporting. The two-volume work is aimed at providing a
unique platform for researchers, engineers, developers, educators
and advanced students in the field of Big Data analytics.
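
One of the applications listed, Big Data regression with Apache Spark, can be illustrated with a short, hedged PySpark MLlib sketch; the toy data and column names below are assumptions for illustration only, not material from the handbook.

```python
# A minimal, self-contained sketch of linear regression with Apache Spark's
# MLlib (PySpark); the toy data and column names are illustrative only.
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("bigdata-regression").getOrCreate()

df = spark.createDataFrame(
    [(1.0, 2.0, 5.1), (2.0, 1.0, 6.9), (3.0, 4.0, 13.2), (4.0, 3.0, 14.8)],
    ["x1", "x2", "y"],
)

# MLlib expects all predictors packed into a single vector column.
features = VectorAssembler(inputCols=["x1", "x2"], outputCol="features").transform(df)
model = LinearRegression(featuresCol="features", labelCol="y").fit(features)
print(model.coefficients, model.intercept)
```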

Get to grips with automated machine learning and adopt a hands-on approach to AutoML implementation and associated methodologies.

Key Features:
- Get up to speed with AutoML using open-source tools, Azure, AWS, GCP, or any platform of your choice
- Eliminate mundane tasks in data engineering and reduce human errors in machine learning models
- Find out how you can make machine learning accessible to all users and promote decentralized processes

Book Description:
Every machine learning engineer deals with systems that have hyperparameters, and the most basic task in automated machine learning (AutoML) is to set these hyperparameters automatically to optimize performance. The latest deep neural networks have a wide range of hyperparameters for their architecture, regularization, and optimization, which can be customized effectively to save time and effort. This book reviews the underlying techniques of automated feature engineering, model and hyperparameter tuning, gradient-based approaches, and much more. You'll discover different ways of implementing these techniques in open-source tools and then learn to use enterprise tools for implementing AutoML in three major cloud service providers: Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP). As you progress, you'll explore the features of cloud AutoML platforms by building machine learning models using AutoML. The book will also show you how to develop accurate models by automating time-consuming and repetitive tasks in the machine learning development lifecycle. By the end of this machine learning book, you'll be able to build and deploy AutoML models that are not only accurate, but also increase productivity, allow interoperability, and minimize feature engineering tasks.

What you will learn:
- Explore AutoML fundamentals, underlying methods, and techniques
- Assess AutoML aspects such as algorithm selection, auto featurization, and hyperparameter tuning in an applied scenario
- Find out the differences between cloud and open-source software (OSS) AutoML offerings
- Implement AutoML in the enterprise cloud to deploy ML models and pipelines
- Build explainable AutoML pipelines with transparency
- Understand automated feature engineering and time series forecasting
- Automate data science modeling tasks to implement ML solutions easily and focus on more complex problems

Who this book is for:
Citizen data scientists, machine learning developers, artificial intelligence enthusiasts, and anyone looking to automatically build machine learning models using the features offered by open-source tools, Microsoft Azure Machine Learning, AWS, and Google Cloud Platform will find this book useful. Beginner-level knowledge of building ML models is required to get the best out of this book. Prior experience of using an enterprise cloud is beneficial.
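
As a concrete, hedged illustration of the "most basic AutoML task" of setting hyperparameters automatically, here is a short scikit-learn sketch; the estimator, parameter grid, and dataset are arbitrary choices for illustration and are not tied to any specific tool covered in the book.

```python
# A minimal sketch of automated hyperparameter tuning with scikit-learn's
# GridSearchCV; the model, grid, and dataset are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)

param_grid = {"n_estimators": [50, 200], "max_depth": [None, 5, 10]}
search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=5)
search.fit(X, y)

# The search selects the hyperparameters automatically instead of by hand.
print(search.best_params_, round(search.best_score_, 3))
```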

Leverage the Azure analytics platform's key analytics services to deliver unmatched intelligence for your data.

Key Features:
- Learn to ingest, prepare, manage, and serve data for immediate business requirements
- Bring enterprise data warehousing and big data analytics together to gain insights from your data
- Develop end-to-end analytics solutions using Azure Synapse

Book Description:
Azure Synapse Analytics, which Microsoft describes as the next evolution of Azure SQL Data Warehouse, is a limitless analytics service that brings enterprise data warehousing and big data analytics together. With this book, you'll learn how to discover insights from your data effectively using this platform. The book starts with an overview of Azure Synapse Analytics, its architecture, and how it can be used to improve business intelligence and machine learning capabilities. Next, you'll go on to choose and set up the correct environment for your business problem. You'll also learn a variety of ways to ingest data from various sources and orchestrate the data using transformation techniques offered by Azure Synapse. Later, you'll explore how to handle both relational and non-relational data using the SQL language. As you progress, you'll perform real-time streaming and execute data analysis operations on your data using various languages, before going on to apply ML techniques to derive accurate and granular insights from data. Finally, you'll discover how to protect sensitive data in real time by using security and privacy features. By the end of this Azure book, you'll be able to build end-to-end analytics solutions while focusing on data prep, data management, data warehousing, and AI tasks.

What you will learn:
- Explore the necessary considerations for data ingestion and orchestration while building analytical pipelines
- Understand pipelines and activities in Synapse pipelines and use them to construct end-to-end data-driven workflows
- Query data using various coding languages on Azure Synapse
- Focus on Synapse SQL and Synapse Spark
- Manage and monitor resource utilization and query activity in Azure Synapse
- Connect Power BI workspaces with Azure Synapse and create or modify reports directly from Synapse Studio
- Create and manage IP firewall rules in Azure Synapse

Who this book is for:
This book is for data architects, data scientists, data engineers, and business analysts who are looking to get up and running with the Azure Synapse Analytics platform. Basic knowledge of data warehousing will be beneficial to help you understand the concepts covered in this book more effectively.
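
To make the "query data using various coding languages" idea more tangible, here is a minimal, hedged sketch of querying data-lake files with Spark SQL, as one might do in a Synapse Spark notebook; the ADLS Gen2 account, container, and path are hypothetical placeholders, and a Synapse notebook already provides the session as `spark`.

```python
# A minimal sketch of querying data-lake files with Spark SQL, as one might in
# an Azure Synapse Spark notebook; the abfss account/container/path below are
# hypothetical placeholders for your own storage.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided as `spark` in Synapse notebooks

sales = spark.read.parquet(
    "abfss://raw@mydatalake.dfs.core.windows.net/sales/2021/"
)
sales.createOrReplaceTempView("sales")

spark.sql("""
    SELECT region, SUM(amount) AS total_sales
    FROM sales
    GROUP BY region
    ORDER BY total_sales DESC
""").show()
```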

Think about your data intelligently and ask the right questions.

Key Features:
- Master the data cleaning techniques necessary to perform real-world data science and machine learning tasks
- Spot common problems with dirty data and develop flexible solutions from first principles
- Test and refine your newly acquired skills through detailed exercises at the end of each chapter

Book Description:
Data cleaning is the all-important first step to successful data science, data analysis, and machine learning. If you work with any kind of data, this book is your go-to resource, arming you with the insights and heuristics that experienced data scientists had to learn the hard way. In a light-hearted and engaging exploration of different tools, techniques, and datasets both real and fictitious, Python veteran David Mertz teaches you the ins and outs of data preparation and the essential questions you should be asking of every piece of data you work with. Using a mixture of Python, R, and common command-line tools, Cleaning Data for Effective Data Science follows the data cleaning pipeline from start to finish, focusing on helping you understand the principles underlying each step of the process. You'll look at data ingestion for a vast range of tabular, hierarchical, and other data formats, impute missing values, detect unreliable data and statistical anomalies, and generate synthetic features. The long-form exercises at the end of each chapter let you get hands-on with the skills you've acquired along the way, and also provide a valuable resource for academic courses.

What you will learn:
- Ingest and work with common data formats such as JSON, CSV, SQL and NoSQL databases, PDF, and binary serialized data structures
- Understand how and why we use tools such as pandas, SciPy, scikit-learn, Tidyverse, and Bash
- Apply useful rules and heuristics for assessing data quality and detecting bias, such as Benford's law and the 68-95-99.7 rule
- Identify and handle unreliable data and outliers, examining z-scores and other statistical properties
- Impute sensible values into missing data and use sampling to fix imbalances
- Use dimensionality reduction, quantization, one-hot encoding, and other feature engineering techniques to draw out patterns in your data
- Work carefully with time series data, performing de-trending and interpolation

Who this book is for:
This book is designed to benefit software developers, data scientists, aspiring data scientists, teachers, and students who work with data. If you want to improve your rigor in data hygiene or are looking for a refresher, this book is for you. Basic familiarity with statistics, general concepts in machine learning, knowledge of a programming language (Python or R), and some exposure to data science are helpful.
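
As a hedged illustration of one heuristic the description mentions, here is a minimal z-score outlier check in Python with pandas; the synthetic data and the 3-sigma cut-off are illustrative choices, not examples from the book.

```python
# A minimal sketch of flagging unreliable values by z-score with pandas;
# the synthetic readings and the 3-sigma threshold are illustrative choices.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
readings = pd.Series(rng.normal(loc=20.0, scale=2.0, size=1_000))
readings.iloc[::250] = [85.0, -40.0, 77.0, 90.0]  # inject a few bad values

z = (readings - readings.mean()) / readings.std()
outliers = readings[z.abs() > 3]
print(outliers)
```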

Reinforce your understanding of data science and data analysis from a statistical perspective to extract meaningful insights from your data using Python programming.

Key Features:
- Work your way through the entire data analysis pipeline with statistical concerns in mind to make reasonable decisions
- Understand how various data science algorithms function
- Build a solid foundation in statistics for data science and machine learning using Python-based examples

Book Description:
Statistics remains the backbone of modern analysis tasks, helping you to interpret the results produced by data science pipelines. This book is a detailed guide covering the math and various statistical methods required for undertaking data science tasks. The book starts by showing you how to preprocess data and inspect distributions and correlations from a statistical perspective. You'll then get to grips with the fundamentals of statistical analysis and apply its concepts to real-world datasets. As you advance, you'll find out how statistical concepts emerge from different stages of data science pipelines, learn to summarize datasets in the language of statistics, and use that summary to build a solid foundation for robust data products such as explanatory and predictive models. Once you've uncovered the working mechanisms of data science algorithms, you'll cover essential concepts for efficient data collection, cleaning, mining, visualization, and analysis. Finally, you'll implement statistical methods in key machine learning tasks such as classification, regression, tree-based methods, and ensemble learning. By the end of this Essential Statistics for Non-STEM Data Analysts book, you'll have learned how to build and present a self-contained, statistics-backed data product to meet your business goals.

What you will learn:
- Find out how to grab and load data into an analysis environment
- Perform descriptive analysis to extract meaningful summaries from data
- Discover probability, parameter estimation, hypothesis testing, and experiment design best practices
- Get to grips with resampling and bootstrapping in Python
- Delve into statistical tests with variance analysis, time series analysis, and A/B test examples
- Understand the statistics behind popular machine learning algorithms
- Answer questions on statistics for data scientist interviews

Who this book is for:
This book is an entry-level guide for data science enthusiasts, data analysts, and anyone starting out in the field of data science and looking to learn the essential statistical concepts with the help of simple explanations and examples. If you're a developer or student with a non-mathematical background, you'll find this book useful. Working knowledge of the Python programming language is required.
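
Since resampling and bootstrapping in Python are called out explicitly, here is a minimal, hedged sketch of a bootstrap confidence interval; the synthetic sample, the number of resamples, and the 95% level are arbitrary illustrative choices.

```python
# A minimal sketch of bootstrapping a confidence interval for the mean in
# plain NumPy; the skewed toy sample and 10,000 resamples are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
sample = rng.exponential(scale=3.0, size=200)  # skewed toy data

boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(10_000)
])
low, high = np.percentile(boot_means, [2.5, 97.5])
print(f"mean = {sample.mean():.2f}, 95% bootstrap CI = ({low:.2f}, {high:.2f})")
```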

Get to grips with pandas by working with real datasets and master data discovery, data manipulation, data preparation, and handling data for analytical tasks.

Key Features:
- Perform efficient data analysis and manipulation tasks using pandas 1.x
- Apply pandas to different real-world domains with the help of step-by-step examples
- Make the most of pandas as an effective data exploration tool

Book Description:
Extracting valuable business insights is no longer a 'nice-to-have', but an essential skill for anyone who handles data in their enterprise. Hands-On Data Analysis with Pandas is here to help beginners and those who are migrating their skills into data science get up to speed in no time. This book will show you how to analyze your data, get started with machine learning, and work effectively with the Python libraries often used for data science, such as pandas, NumPy, matplotlib, seaborn, and scikit-learn. Using real-world datasets, you will learn how to use the pandas library to perform data wrangling to reshape, clean, and aggregate your data. Then, you will learn how to conduct exploratory data analysis by calculating summary statistics and visualizing the data to find patterns. In the concluding chapters, you will explore some applications of anomaly detection, regression, clustering, and classification using scikit-learn to make predictions based on past data. This updated edition will equip you with the skills you need to use pandas 1.x to efficiently perform various data manipulation tasks, reliably reproduce analyses, and visualize your data for effective decision making - valuable knowledge that can be applied across multiple domains.

What you will learn:
- Understand how data analysts and scientists gather and analyze data
- Perform data analysis and data wrangling using Python
- Combine, group, and aggregate data from multiple sources
- Create data visualizations with pandas, matplotlib, and seaborn
- Apply machine learning algorithms to identify patterns and make predictions
- Use Python data science libraries to analyze real-world datasets
- Solve common data representation and analysis problems using pandas
- Build Python scripts, modules, and packages for reusable analysis code

Who this book is for:
This book is for data science beginners, data analysts, and Python developers who want to explore each stage of data analysis and scientific computing using a wide range of datasets. Data scientists looking to implement pandas in their machine learning workflow will also find plenty of valuable know-how as they progress. You'll find it easier to follow along with this book if you have a working knowledge of the Python programming language, but a Python crash-course tutorial is provided in the code bundle for anyone who needs a refresher.
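
As a hedged taste of the reshape/clean/aggregate workflow the description refers to, here is a minimal pandas sketch; the small inline dataset and column names are hypothetical.

```python
# A minimal sketch of cleaning and aggregating data with pandas 1.x;
# the inline toy dataset is a hypothetical placeholder.
import pandas as pd

df = pd.DataFrame({
    "city": ["NYC", "NYC", "Boston", "Boston", "Boston"],
    "month": ["Jan", "Feb", "Jan", "Feb", "Feb"],
    "sales": [120.0, 98.5, 75.0, None, 82.3],
})

# Fill the missing value, then summarize sales per city.
clean = df.assign(sales=df["sales"].fillna(df["sales"].median()))
summary = (
    clean.groupby("city", as_index=False)
         .agg(total_sales=("sales", "sum"), avg_sales=("sales", "mean"))
)
print(summary)
```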

This title is part of the Springer Book Archives digitization project, which comprises publications that have appeared since the publisher's beginnings in 1842. With this archive, the publisher makes available sources for historical research as well as for research into the history of the disciplines, each of which must be considered in its historical context. This title was published before 1945 and is therefore not promoted by the publisher on account of its political and ideological orientation typical of that era.

A practical blockchain handbook designed to take you through implementing and re-engineering banking and financial solutions and workflows using eight step-by-step projects.

Key Features:
- Implement various end-to-end blockchain projects and learn to enhance present-day financial solutions
- Use Ethereum, Hyperledger, and Stellar to build public and private decentralized applications
- Address complex challenges faced in the BFSI domain using different blockchain platform services

Book Description:
Blockchain technology will continue to play an integral role in the banking and finance sector in the coming years. It will enable enterprises to build transparent and secure business processes. Experts estimate annual savings of up to 20 billion dollars from this technology. This book will help you build financial apps using blockchain, guiding you through enhancing popular products and services in the banking and finance sector. The book starts by explaining the essential concepts of blockchain and the impact of blockchain technology on the BFSI sector. Next, you'll delve into re-designing existing banking processes and building new financial apps using blockchain. To accomplish this, you'll work through eight blockchain projects. By demonstrating the entire process, the book helps you understand everything from setting up the environment and building frontend portals to system integration and testing apps. You will gain hands-on experience with Ethereum, Hyperledger Fabric, and Stellar to develop private and public decentralized apps. Finally, you'll learn how to use ancillary platforms and frameworks such as IPFS, Truffle, OpenZeppelin, and MetaMask. By the end of this blockchain book, you'll have an in-depth understanding of how to leverage distributed ledgers and smart contracts for financial use cases.

What you will learn:
- Design and implement blockchain solutions in a BFSI organization
- Explore common architectures and implementation models for enterprise blockchain
- Design blockchain wallets for multi-purpose applications using Ethereum
- Build secure and fast decentralized trading ecosystems with blockchain
- Implement smart contracts to build secure process workflows in Ethereum and Hyperledger Fabric
- Use the Stellar platform to build KYC- and AML-compliant remittance workflows
- Map complex business workflows and automate backend processes in a blockchain architecture

Who this book is for:
This book is for blockchain and DApp developers, and anyone looking for a guide to building innovative and highly secure solutions in the fintech domain using real-world use cases. Developers working in financial enterprises and banks, and solution architects looking to build brand-new process flows using blockchain technology, will also find the book useful. Experience with Solidity programming and prior knowledge of finance and trade are required to get the most out of this book.

Learn through hands-on exercises covering a variety of topics, including data connections, analytics, and dashboards, to effectively prepare for the Tableau Desktop Certified Associate exam.

Key Features:
- Prepare for the Tableau Desktop Certified Associate exam with the help of tips and techniques shared by experts
- Implement Tableau's advanced analytical capabilities, such as forecasting
- Delve into advanced Tableau features and explore best practices for building dashboards

Book Description:
The Tableau Desktop Certified Associate exam measures your knowledge of Tableau Desktop and your ability to work with data and data visualization techniques. This book will help you to become well-versed in Tableau software and use its business intelligence (BI) features to solve BI and analytics challenges. With the help of this book, you'll explore the authors' success stories and their experience with Tableau. You'll start by understanding the importance of Tableau certification and the different certification exams, along with covering the exam format, Tableau basics, and best practices for preparing data for analysis and visualization. The book then builds your knowledge of advanced Tableau topics such as table calculations for solving problems. You'll learn to effectively visualize geographic data using vector maps. Later, you'll discover the analytics capabilities of Tableau by learning how to use features such as forecasting. Finally, you'll understand how to build and customize dashboards while ensuring they convey information effectively. Every chapter has examples and tests to reinforce your learning, along with mock tests in the last section. By the end of this book, you'll be able to efficiently prepare for the certification exam with the help of mock tests, detailed explanations, and expert advice from the authors.

What you will learn:
- Apply Tableau best practices to analyze and visualize data
- Use Tableau to visualize geographic data using vector maps
- Create charts to gain productive insights into data and make quality-driven decisions
- Implement advanced analytics techniques to identify and forecast key values
- Prepare customized table calculations to compute specific values
- Answer questions based on the Tableau Desktop Certified Associate exam with the help of mock tests

Who this book is for:
This Tableau certification book is for business analysts, BI professionals, and data analysts who want to become certified Tableau Desktop Associates and solve a range of data science and business intelligence problems using this example-packed guide. Some experience in Tableau Desktop is expected to get the most out of this book.

A beginner's guide to storing, managing, and analyzing data with the updated features of Elastic Stack 7.0.

Key Features:
- Gain access to new features and updates introduced in Elastic Stack 7.0
- Grasp the fundamentals of the Elastic Stack, including Elasticsearch, Logstash, and Kibana
- Explore useful tips for using Elastic Cloud and deploying the Elastic Stack in production environments

Book Description:
The Elastic Stack is a powerful combination of tools for techniques such as distributed search, analytics, logging, and visualization of data. Elastic Stack 7.0 encompasses new features and capabilities that will enable you to find unique insights into analytics using these techniques. This book will give you a fundamental understanding of what the stack is all about, and help you use it efficiently to build powerful real-time data processing applications. The first few sections of the book will help you understand how to set up the stack by installing the tools and exploring their basic configurations. You'll then get up to speed with using Elasticsearch for distributed searching and analytics, Logstash for logging, and Kibana for data visualization. As you work through the book, you will discover how to create custom plugins for Kibana and Beats. This is followed by coverage of Elastic X-Pack, a useful extension for effective security and monitoring. You'll also find helpful tips on how to use Elastic Cloud and deploy the Elastic Stack in production environments. By the end of this book, you'll be well versed in the fundamental Elastic Stack functionalities and the role each component of the stack plays in solving different data processing problems.

What you will learn:
- Install and configure an Elasticsearch architecture
- Solve the full-text search problem with Elasticsearch
- Discover powerful analytics capabilities through aggregations using Elasticsearch
- Build a data pipeline to transfer data from a variety of sources into Elasticsearch for analysis
- Create interactive dashboards for effective storytelling with your data using Kibana
- Learn how to secure and monitor the Elastic Stack, and use its alerting and reporting capabilities
- Take applications to an on-premise or cloud-based production environment with the Elastic Stack

Who this book is for:
This book is for entry-level data professionals, software engineers, e-commerce developers, and full-stack developers who want to learn about the Elastic Stack and how its real-time processing and search engine works for business analytics and enterprise search applications. Previous experience with the Elastic Stack is not required; however, knowledge of data warehousing and database concepts will be helpful.
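
To illustrate the full-text search problem the description mentions, here is a minimal, hedged sketch using the official Elasticsearch Python client; the local URL and index name are assumptions, and the keyword arguments shown differ slightly between the 7.x and 8.x clients.

```python
# A minimal sketch of indexing and searching a document with the Elasticsearch
# Python client; the localhost URL and index name are hypothetical, and the
# `document`/`query` keyword arguments assume a 7.15+ or 8.x client.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.index(index="articles", id="1", document={
    "title": "Monitoring with the Elastic Stack",
    "body": "Logstash ships logs to Elasticsearch; Kibana visualizes them.",
})
es.indices.refresh(index="articles")

hits = es.search(index="articles", query={"match": {"body": "logs"}})
for hit in hits["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["title"])
```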

Master Scala's advanced techniques to solve real-world problems in data analysis and gain valuable insights from your data.

Key Features:
- A beginner's guide to performing data analysis, loaded with numerous rich, practical examples
- Access popular Scala libraries such as Breeze and Saddle for efficient data manipulation and exploratory analysis
- Develop applications in Scala for real-time analysis and machine learning in Apache Spark

Book Description:
Efficient business decisions backed by an accurate sense of business data help deliver better performance across products and services. This book helps you leverage popular Scala libraries and tools to perform core data analysis tasks with ease. The book begins with a quick overview of the building blocks of a standard data analysis process. You will learn to perform basic tasks such as extraction, staging, validation, cleaning, and shaping of datasets. You will later deep dive into the data exploration and visualization areas of the data analysis life cycle. You will make use of popular Scala libraries such as Saddle, Breeze, Vegas, and PredictionIO for processing your datasets. You will learn statistical methods for deriving meaningful insights from data. You will also learn to create applications for Apache Spark 2.x for complex data analysis, in real time. You will discover traditional machine learning techniques for doing data analysis. Furthermore, you will also be introduced to neural networks and deep learning from a data analysis standpoint. By the end of this book, you will be capable of handling large sets of structured and unstructured data, performing exploratory analysis, and building efficient Scala applications for discovering and delivering insights.

What you will learn:
- Techniques to determine the validity and confidence level of data
- Apply quartiles and n-tiles to datasets to see how data is distributed into many buckets
- Create data pipelines that combine multiple data lifecycle steps
- Use built-in features to gain a deeper understanding of the data
- Apply the Lasso regression analysis method to your data
- Compare the Apache Spark API with traditional approaches to data analysis

Who this book is for:
If you are a data scientist or a data analyst who wants to learn how to perform data analysis using Scala, this book is for you. All you need is knowledge of the fundamentals of Scala programming.

An expert guide to implementing fast, secure, and scalable decentralized applications that work with thousands of users in real time.

Key Features:
- Implement advanced features of the Ethereum network to build powerful decentralized applications
- Build smart contracts on different domains using the programming techniques of Solidity and Vyper
- Explore the architecture of the Ethereum network to understand advanced use cases of blockchain development

Book Description:
Ethereum is one of the most commonly used platforms for building blockchain applications. It's a decentralized platform for applications that can run exactly as programmed without being affected by fraud, censorship, or third-party interference. This book will give you a deep understanding of how blockchain works so that you can discover the entire ecosystem, core components, and its implementations. You will get started by understanding how to configure and work with various Ethereum protocols for developing dApps. Next, you will learn to code and create powerful smart contracts that scale with Solidity and Vyper. You will then explore the building blocks of dApp architecture and gain insights into how to create your own dApp through a variety of real-world examples. The book will even guide you on how to deploy your dApps on multiple Ethereum instances with the required best practices and techniques. The next few chapters delve into advanced topics such as building advanced smart contracts and multi-page frontends using the Ethereum blockchain. You will also focus on implementing machine learning techniques to build decentralized autonomous applications, in addition to covering several use cases across a variety of domains, such as social media and e-commerce. By the end of this book, you will have the expertise you need to build decentralized autonomous applications confidently.

What you will learn:
- Apply scalability solutions to dApps with Plasma and state channels
- Understand the important metrics of a blockchain for analyzing and determining its state
- Develop a decentralized web application using React.js and Node.js
- Create oracles with Node.js to provide external data to smart contracts
- Get to grips with using Etherscan and block explorers for various transactions
- Explore web3.js, Solidity, and Vyper for dApp communication
- Deploy apps with multiple Ethereum instances, including TestRPC, private chain, test chain, and mainnet

Who this book is for:
This book is for anyone who wants to build fast, highly secure, and transactional decentralized applications. If you are an Ethereum developer looking to perfect your existing skills in building powerful blockchain applications, then this book is for you. Basic knowledge of Ethereum and blockchain is necessary to understand the concepts covered in this book.
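
The book itself works with web3.js for dApp communication; as a hedged Python illustration of the same idea of talking to an Ethereum node over JSON-RPC, here is a minimal web3.py sketch (a swapped-in Python counterpart, not the book's own tooling). The provider URL is a hypothetical placeholder, and the method names follow web3.py v6.

```python
# A minimal sketch using web3.py (the Python counterpart to the web3.js library
# the book uses) to connect to an Ethereum node and read basic chain state;
# the HTTP provider URL is a hypothetical placeholder for your own endpoint.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))
print("connected:", w3.is_connected())

latest = w3.eth.get_block("latest")
print("block number:", latest.number)
print("transactions in block:", len(latest.transactions))

# Balances are returned in wei; convert to ether for readability.
balance = w3.eth.get_balance("0x0000000000000000000000000000000000000000")
print("balance (ETH):", Web3.from_wei(balance, "ether"))
```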