This book introduces multiagent planning under uncertainty as
formalized by decentralized partially observable Markov decision
processes (Dec-POMDPs). The intended audience is researchers and
graduate students working in fields of artificial intelligence
related to sequential decision making: reinforcement learning,
decision-theoretic planning for single agents, classical multiagent
planning, decentralized control, and operations research.
|
Distributed Artificial Intelligence - Third International Conference, DAI 2021, Shanghai, China, December 17-18, 2021, Proceedings (Paperback, 1st ed. 2022)
Jie Chen, Jérôme Lang, Christopher Amato, Dengji Zhao
|
R1,906
Discovery Miles 19 060
|
Ships in 10 - 15 working days
|
This book constitutes the refereed proceedings of the Third
International Conference on Distributed Artificial Intelligence,
DAI 2021, held in Shanghai, China, in December 2021. The 15 full
papers presented in this book were carefully reviewed and selected
from 31 submissions. DAI aims to bring together international
researchers and practitioners in related areas, including general
AI, multiagent systems, distributed learning, and computational
game theory, to provide a single, high-profile, internationally
renowned forum for research in the theory and practice of
distributed AI.
An introduction to decision making under uncertainty from a
computational perspective, covering both theory and applications
ranging from speech recognition to airborne collision avoidance.
Many important problems involve decision making under
uncertainty: choosing actions based on often imperfect
observations, with unknown outcomes. Designers of automated
decision support systems must take into account the various sources
of uncertainty while balancing the multiple objectives of the
system. This book provides an introduction to the challenges of
decision making under uncertainty from a computational perspective.
It presents both the theory behind decision making models and
algorithms and a collection of example applications that range from
speech recognition to aircraft collision avoidance. Focusing on two
methods for designing decision agents, planning and reinforcement
learning, the book covers probabilistic models, introducing
Bayesian networks as a graphical model that captures probabilistic
relationships between variables; utility theory as a framework for
understanding optimal decision making under uncertainty; Markov
decision processes as a method for modeling sequential problems;
model uncertainty; state uncertainty; and cooperative decision
making involving multiple interacting agents. A series of
applications shows how the theoretical concepts can be applied to
systems for attribute-based person search, speech applications,
collision avoidance, and unmanned aircraft persistent surveillance.
Decision Making Under Uncertainty unifies research from different
communities using consistent notation, and is accessible to
students and researchers across engineering disciplines who have
some prior exposure to probability theory and calculus. It can be
used as a text for advanced undergraduate and graduate students in
fields including computer science, aerospace and electrical
engineering, and management science. It will also be a valuable
professional reference for researchers in a variety of disciplines.