The Handbook of Multimodal-Multisensor Interfaces, Volume 3 - Language Processing, Software, Commercialization, and Emerging Directions (Paperback)
Sharon Oviatt, Bjoern Schuller, Philip Cohen, Daniel Sonntag, Gerasimos Potamianos, …
R3,173
Ships in 10 - 15 working days
The Handbook of Multimodal-Multisensor Interfaces provides the
first authoritative resource on what has become the dominant
paradigm for new computer interfaces: user input involving new
media (speech, multi-touch, hand and body gestures, facial
expressions, writing) embedded in multimodal-multisensor interfaces. This
three-volume handbook is written by international experts and
pioneers in the field. It provides a textbook, reference, and
technology roadmap for professionals working in this and related
areas. This third volume focuses on state-of-the-art multimodal
language and dialogue processing, including semantic integration of
modalities. The development of increasingly expressive embodied
agents and robots has become an active test bed for coordinating
multimodal dialogue input and output, including processing of
language and nonverbal communication. In addition, major
application areas are featured for commercializing
multimodal-multisensor systems, including automotive, robotic,
manufacturing, machine translation, banking, communications, and
others. These systems rely heavily on software tools, data
resources, and international standards to facilitate their
development. For insights into the future, emerging
multimodal-multisensor technology trends are highlighted in
medicine, robotics, interaction with smart spaces, and similar
areas. Finally, this volume discusses the societal impact of more
widespread adoption of these systems, such as privacy risks and how
to mitigate them. The handbook chapters provide a number of
walk-through examples of system design and processing, information
on practical resources for developing and evaluating new systems,
and terminology and tutorial support for mastering this emerging
field. In the final section of this volume, experts exchange views
on a timely and controversial challenge topic, and how they believe
multimodal-multisensor interfaces need to be equipped to most
effectively advance human performance during the next decade.
The Handbook of Multimodal-Multisensor Interfaces provides the
first authoritative resource on what has become the dominant
paradigm for new computer interfaces: user input involving new
media (speech, multi-touch, hand and body gestures, facial
expressions, writing) embedded in multimodal-multisensor interfaces
that often include biosignals. This edited collection is written by
international experts and pioneers in the field. It provides a
textbook, reference, and technology roadmap for professionals
working in this and related areas. This second volume of the
handbook begins with multimodal signal processing, architectures,
and machine learning. It includes recent deep learning approaches
for processing multisensorial and multimodal user data and
interaction, as well as context-sensitivity. A further highlight is
processing of information about users' states and traits, an
exciting emerging capability in next-generation user interfaces.
These chapters discuss real-time multimodal analysis of emotion and
social signals from various modalities, and perception of affective
expression by users. Further chapters discuss multimodal processing
of cognitive state using behavioral and physiological signals to
detect cognitive load, domain expertise, deception, and depression.
This collection of chapters provides walk-through examples of
system design and processing, information on tools and practical
resources for developing and evaluating new systems, and
terminology and tutorial support for mastering this rapidly
expanding field. In the final section of this volume, experts
exchange views on the timely and controversial challenge topic of
multimodal deep learning. The discussion focuses on how
multimodal-multisensor interfaces are most likely to advance human
performance during the next decade.
The Handbook of Multimodal-Multisensor Interfaces provides the
first authoritative resource on what has become the dominant
paradigm for new computer interfaces: user input involving new
media (speech, multi-touch, gestures, writing) embedded in
multimodal-multisensor interfaces. These interfaces support
smartphones, wearables, in-vehicle and robotic applications, and many
other areas that are now highly competitive commercially. This
edited collection is written by international experts and pioneers
in the field. It provides a textbook, reference, and technology
roadmap for professionals working in this and related areas. This
first volume of the handbook presents relevant theory and
neuroscience foundations for guiding the development of
high-performance systems. Additional chapters discuss approaches to
user modeling and interface designs that support user choice, that
synergistically combine modalities with sensors, and that blend
multimodal input and output. This volume also provides an
in-depth look at the most common multimodal-multisensor
combinations: for example, touch and pen input, haptic and
non-speech audio output, and speech-centric systems that co-process
either gestures, pen input, gaze, or visible lip movements. A
common theme throughout these chapters is supporting mobility and
individual differences among users. These handbook chapters provide
walk-through examples of system design and processing, information
on tools and practical resources for developing and evaluating new
systems, and terminology and tutorial support for mastering this
emerging field. In the final section of this volume, experts
exchange views on a timely and controversial challenge topic, and
how they believe multimodal-multisensor interfaces should be
designed in the future to most effectively advance human
performance.