As deep neural networks (DNNs) become increasingly common in
real-world applications, the potential to deliberately "fool" them
with data that wouldn’t trick a human presents a new attack
vector. This practical book examines real-world scenarios where
DNNs—the algorithms intrinsic to much of AI—are used daily to
process image, audio, and video data. Author Katy Warr considers
attack motivations, the risks posed by this adversarial input, and
methods for increasing AI robustness to these attacks. If you’re
a data scientist developing DNN algorithms, a security architect
interested in how to make AI systems more resilient to attack, or
someone fascinated by the differences between artificial and
biological perception, this book is for you.

- Delve into DNNs and discover how they could be tricked by adversarial input
- Investigate methods used to generate adversarial input capable of fooling DNNs
- Explore real-world scenarios and model the adversarial threat
- Evaluate neural network robustness and learn methods to increase the resilience of AI systems to adversarial data
- Examine some ways in which AI might become better at mimicking human perception in years to come
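One well-known family of methods for generating adversarial input, which books in this area typically cover, is gradient-based perturbation such as the Fast Gradient Sign Method (FGSM). The sketch below is illustrative only and is not taken from the book: it applies an FGSM-style step to a tiny hand-built logistic-regression "model" (the weights, inputs, and epsilon are all made up for the example) so that a small, targeted nudge flips the model's prediction.

```python
import numpy as np

# Minimal FGSM-style sketch, assuming a toy linear classifier
# p = sigmoid(w . x + b). All numbers here are illustrative.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """Nudge x by eps in the sign of the loss gradient w.r.t. the input."""
    p = sigmoid(w @ x + b)            # model's predicted probability
    grad_x = (p - y_true) * w         # d(cross-entropy)/dx for this model
    return x + eps * np.sign(grad_x)  # FGSM step

w = np.array([1.0, -2.0, 0.5])        # hypothetical model weights
b = 0.0
x = np.array([0.2, -0.1, 0.4])        # original input, classified as class 1
x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.25)

# The perturbed input stays within eps of x per component, yet the
# predicted probability for the true class drops below 0.5.
print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b))
```

The key point the blurb alludes to: the perturbation is small and structured (bounded by eps in each component), so it need not look meaningful to a human, yet it reliably moves the model's output in the attacker's chosen direction.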
Imprint: O'Reilly Media
Country of origin: United States
Release date: August 2019
Authors: Katy Warr
Dimensions: 232 x 178 x 13 mm (L x W x T)
Format: Paperback
Pages: 250
ISBN-13: 978-1-4920-4495-6
Categories: Books
LSN: 1-4920-4495-4
Barcode: 9781492044956