|
This book illustrates the rapid pace of development in intelligent
assistive technology in recent years, and highlights some salient
examples of using modern IT&C technologies to provide devices,
systems and application software for persons with certain motor or
cognitive disabilities. The book proposes both theoretical and
practical approaches to intelligent assistive and emergent
technologies used in healthcare for the elderly and patients with
chronic diseases. Intelligent assistive technology (IAT) is
currently being introduced and developed worldwide as an important
tool for maintaining independence and high quality of life among
community-living people with certain disabilities, and as a key
enabler for the aging population. The book offers a valuable
resource not only for students at technical, medical and general
universities, but also for specialists working in various fields in
which emergent technologies are being used to help people enjoy
optimal quality of life.
|
Affective Computing and Intelligent Interaction - Fourth International Conference, ACII 2011, Memphis, TN, USA, October 9-12, 2011, Proceedings, Part II (Paperback)
Sidney D'Mello, Arthur Graesser, Bjoern Schuller, Jean Claude Martin
|
R1,591
Discovery Miles 15 910
|
Ships in 10 - 15 working days
|
The two-volume set LNCS 6974 and LNCS 6975 constitutes the refereed
proceedings of the Fourth International Conference on Affective
Computing and Intelligent Interaction, ACII 2011, held in Memphis,
TN, USA, in October 2011. The 135 papers in this two-volume set,
presented together with 3 invited talks, were carefully reviewed and
selected from 196 submissions. The papers are organized in topical
sections on recognition and synthesis of human affect,
affect-sensitive applications, methodological issues in affective
computing, affective and social robotics, affective and behavioral
interfaces, relevant insights from psychology, affective databases,
and evaluation and annotation tools.
|
Affective Computing and Intelligent Interaction - Fourth International Conference, ACII 2011, Memphis, TN, USA, October 9-12, 2011, Proceedings, Part I (Paperback)
Sidney D'Mello, Arthur Graesser, Bjoern Schuller, Jean Claude Martin
|
R1,594
Discovery Miles 15 940
|
Ships in 10 - 15 working days
|
The two-volume set LNCS 6974 and LNCS 6975 constitutes the refereed
proceedings of the Fourth International Conference on Affective
Computing and Intelligent Interaction, ACII 2011, held in Memphis,
TN, USA, in October 2011. The 135 papers in this two-volume set,
presented together with 3 invited talks, were carefully reviewed and
selected from 196 submissions. The papers are organized in topical
sections on recognition and synthesis of human affect,
affect-sensitive applications, methodological issues in affective
computing, affective and social robotics, affective and behavioral
interfaces, relevant insights from psychology, affective databases,
and evaluation and annotation tools.
|
The Handbook of Multimodal-Multisensor Interfaces, Volume 3 - Language Processing, Software, Commercialization, and Emerging Directions (Paperback)
Sharon Oviatt, Bjoern Schuller, Philip Cohen, Daniel Sonntag, Gerasimos Potamianos, …
|
R3,173
Discovery Miles 31 730
|
Ships in 10 - 15 working days
|
The Handbook of Multimodal-Multisensor Interfaces provides the
first authoritative resource on what has become the dominant
paradigm for new computer interfaces: user input involving new media
(speech, multi-touch, hand and body gestures, facial expressions,
writing) embedded in multimodal-multisensor interfaces. This
three-volume handbook is written by international experts and
pioneers in the field. It provides a textbook, reference, and
technology roadmap for professionals working in this and related
areas. This third volume focuses on state-of-the-art multimodal
language and dialogue processing, including semantic integration of
modalities. The development of increasingly expressive embodied
agents and robots has become an active test bed for coordinating
multimodal dialogue input and output, including processing of
language and nonverbal communication. In addition, major
application areas are featured for commercializing
multimodal-multisensor systems, including automotive, robotic,
manufacturing, machine translation, banking, communications, and
others. These systems rely heavily on software tools, data
resources, and international standards to facilitate their
development. For insights into the future, emerging
multimodal-multisensor technology trends are highlighted in
medicine, robotics, interaction with smart spaces, and similar
areas. Finally, this volume discusses the societal impact of more
widespread adoption of these systems, such as privacy risks and how
to mitigate them. The handbook chapters provide a number of
walk-through examples of system design and processing, information
on practical resources for developing and evaluating new systems,
and terminology and tutorial support for mastering this emerging
field. In the final section of this volume, experts exchange views
on a timely and controversial challenge topic, and how they believe
multimodal-multisensor interfaces need to be equipped to most
effectively advance human performance during the next decade.
The Handbook of Multimodal-Multisensor Interfaces provides the
first authoritative resource on what has become the dominant
paradigm for new computer interfaces: user input involving new
media (speech, multi-touch, hand and body gestures, facial
expressions, writing) embedded in multimodal-multisensor interfaces
that often include biosignals. This edited collection is written by
international experts and pioneers in the field. It provides a
textbook, reference, and technology roadmap for professionals
working in this and related areas. This second volume of the
handbook begins with multimodal signal processing, architectures,
and machine learning. It includes recent deep learning approaches
for processing multisensorial and multimodal user data and
interaction, as well as context-sensitivity. A further highlight is
processing of information about users' states and traits, an
exciting emerging capability in next-generation user interfaces.
These chapters discuss real-time multimodal analysis of emotion and
social signals from various modalities, and perception of affective
expression by users. Further chapters discuss multimodal processing
of cognitive state using behavioral and physiological signals to
detect cognitive load, domain expertise, deception, and depression.
This collection of chapters provides walk-through examples of
system design and processing, information on tools and practical
resources for developing and evaluating new systems, and
terminology and tutorial support for mastering this rapidly
expanding field. In the final section of this volume, experts
exchange views on the timely and controversial challenge topic of
multimodal deep learning. The discussion focuses on how
multimodal-multisensor interfaces are most likely to advance human
performance during the next decade.
The Handbook of Multimodal-Multisensor Interfaces provides the
first authoritative resource on what has become the dominant
paradigm for new computer interfaces: user input involving new
media (speech, multi-touch, gestures, writing) embedded in
multimodal-multisensor interfaces. These interfaces support smart
phones, wearables, in-vehicle and robotic applications, and many
other areas that are now highly competitive commercially. This
edited collection is written by international experts and pioneers
in the field. It provides a textbook, reference, and technology
roadmap for professionals working in this and related areas. This
first volume of the handbook presents relevant theory and
neuroscience foundations for guiding the development of
high-performance systems. Additional chapters discuss approaches to
user modeling and interface designs that support user choice, that
synergistically combine modalities with sensors, and that blend
multimodal input and output. This volume also offers an
in-depth look at the most common multimodal-multisensor
combinations: for example, touch and pen input, haptic and
non-speech audio output, and speech-centric systems that co-process
either gestures, pen input, gaze, or visible lip movements. A
common theme throughout these chapters is supporting mobility and
individual differences among users. These handbook chapters provide
walk-through examples of system design and processing, information
on tools and practical resources for developing and evaluating new
systems, and terminology and tutorial support for mastering this
emerging field. In the final section of this volume, experts
exchange views on a timely and controversial challenge topic, and
how they believe multimodal-multisensor interfaces should be
designed in the future to most effectively advance human
performance.
|