xxAI - Beyond Explainable AI - International Workshop, Held in Conjunction with ICML 2020, July 18, 2020, Vienna, Austria, Revised and Extended Papers (Paperback, 1st ed. 2022)
Andreas Holzinger, Randy Goebel, Ruth Fong, Taesup Moon, Klaus-Robert Müller, …
This is an open access book. Statistical machine learning (ML) has
triggered a renaissance of artificial intelligence (AI). While the
most successful ML models, including deep neural networks (DNNs),
have achieved ever better predictive performance, they have become
increasingly complex, at the expense of human interpretability
(correlation vs.
causality). The field of explainable AI (xAI) has emerged with the
goal of creating tools and models that are both predictive and
interpretable and understandable for humans. Explainable AI is
receiving huge interest in the machine learning and AI research
communities, across academia, industry, and government, and there
is now an excellent opportunity to push towards successful
explainable AI applications. This volume will help the research
community to accelerate this process, to promote a more systematic
use of explainable AI to improve models in diverse applications,
and ultimately to better understand how current explainable AI
methods need to be improved and what kind of theory of explainable
AI is needed. After overviews of current methods and challenges,
the editors include chapters that describe new developments in
explainable AI. The contributions are from leading researchers in
the field, drawn from both academia and industry, and many of the
chapters take a clear interdisciplinary approach to
problem-solving. The concepts discussed include explainability,
causability, and AI interfaces with humans, and the applications
include image processing, natural language, law, fairness, and
climate science.
The development of "intelligent" systems that can make decisions
and act autonomously might lead to faster and more consistent
decisions. A limiting factor for broader adoption of AI technology
is the inherent risk that comes with ceding human control and
oversight to "intelligent" machines. For sensitive
tasks involving critical infrastructures and affecting human
well-being or health, it is crucial to limit the possibility of
improper, non-robust and unsafe decisions and actions. Before
deploying an AI system, we see a strong need to validate its
behavior, and thus establish guarantees that it will continue to
perform as expected when deployed in a real-world environment. In
pursuit of that objective, ways for humans to verify the agreement
between the AI decision structure and their own ground-truth
knowledge have been explored. Explainable AI (XAI) has developed as
a subfield of AI, focused on exposing complex AI models to humans
in a systematic and interpretable manner. The 22 chapters included
in this book provide a timely snapshot of algorithms, theory, and
applications of interpretable and explainable AI techniques that
have been proposed recently, reflecting the current discourse in
this field and providing directions for future development. The
book is organized in six parts: towards AI transparency; methods
for interpreting AI systems; explaining the decisions of AI
systems; evaluating interpretability and explanations; applications
of explainable AI; and software for explainable AI.