Adversarial Machine Learning (Paperback)
Loot Price: R1,686
Series: Synthesis Lectures on Artificial Intelligence and Machine Learning
Expected to ship within 10 - 15 working days
The increasing abundance of large, high-quality datasets, combined with significant technical advances over the last several decades, has made machine learning a major tool employed across a broad array of tasks including vision, language, finance, and security. However, this success has been accompanied by important new challenges: many applications of machine learning are adversarial in nature. Some are adversarial because they are safety critical, such as autonomous driving. An adversary in these applications can be a malicious party aiming to cause congestion or accidents, or may even model unusual situations that expose vulnerabilities in the prediction engine. Other applications are adversarial because the task itself, or the data it relies on, is adversarial. For example, an important class of problems in security involves detection, such as malware, spam, and intrusion detection. The use of machine learning for detecting malicious entities creates an incentive among adversaries to evade detection by changing their behavior or the content of the malicious objects they develop.

The field of adversarial machine learning has emerged to study vulnerabilities of machine learning approaches in adversarial settings and to develop techniques that make learning robust to adversarial manipulation. This book provides a technical overview of the field. After reviewing machine learning concepts and approaches, as well as common use cases of these in adversarial settings, we present a general categorization of attacks on machine learning. We then address two major categories of attacks and associated defenses: decision-time attacks, in which an adversary changes the nature of instances seen by a learned model at prediction time in order to cause errors, and poisoning, or training-time, attacks, in which the training dataset itself is maliciously modified. In the final chapter devoted to technical content, we discuss recent techniques for attacks on deep learning, as well as approaches for improving the robustness of deep neural networks. We conclude with a discussion of several important issues in the area of adversarial learning that, in our view, warrant further research.

Given the increasing interest in the area of adversarial machine learning, we hope this book provides readers with the tools necessary to successfully engage in research and practice of machine learning in adversarial settings.
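
To make the decision-time attack category concrete, the minimal sketch below perturbs an input in the direction that increases a classifier's loss, in the spirit of the fast gradient sign method. This is an illustrative example only, not code from the book; the PyTorch classifier, the labelled input pair (x, y), and the epsilon step size are all assumptions.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    # Work on a copy of the input with gradient tracking enabled.
    x = x.clone().detach().requires_grad_(True)
    # Loss of the model's current prediction against the true label.
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step each feature by epsilon along the sign of the gradient,
    # then clamp back to the valid input range [0, 1].
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

A poisoning (training-time) attack, by contrast, would tamper with the (x, y) pairs in the training set before the model is fit, rather than perturbing inputs at prediction time.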