This preface tells the story of how Multimodal Usability responds to a special challenge. Chapter 1 describes the goals and structure of this book. The idea of describing how to make multimodal computer systems usable arose in the European Network of Excellence SIMILAR - "Taskforce for creating human-machine interfaces SIMILAR to human-human communication," 2003-2007, www.similar.cc. SIMILAR brought together people from multimodal signal processing and usability with the aim of creating enabling technologies for new kinds of multimodal systems and demonstrating results in research prototypes. Most of our colleagues in the network were, in fact, busy extracting features and figuring out how to demonstrate progress in working interactive systems, while claiming not to have too much of a notion of usability in system development and evaluation. It was proposed that the authors support the usability of the many multimodal prototypes underway by researching and presenting a methodology for building usable multimodal systems. We accepted the challenge, first and foremost, no doubt, because the formidable team spirit in SIMILAR could make people accept outrageous things. Second, having worked for nearly two decades on making multimodal systems usable, we were curious - curious at the opportunity to try to understand what happens to traditional usability work, that is, work in human-computer interaction centred around traditional graphical user interfaces (GUIs), when systems become as multimodal and as advanced in other ways as those we build in research today.
This book is a collection of eleven chapters which together represent an original contribution to the field of (multimodal) spoken dialogue systems. The chapters include highly relevant topics, such as dialogue modeling in research systems versus industrial systems, evaluation, miscommunication and error handling, grounding, statistical and corpus-based approaches to discourse and dialogue modeling, data analysis, and corpus annotation and annotation tools. The book contains several detailed application studies, including, e.g., speech-controlled MP3 players in a car environment, negotiation training with a virtual human in a military context, application of spoken dialogue to question-answering systems, and cognitive aspects in tutoring systems. The chapters vary considerably with respect to the level of expertise required in advance to benefit from them. However, most chapters start with a state-of-the-art description from which all readers from the spoken dialogue community may benefit. Overview chapters and state-of-the-art descriptions may also be of interest to people from the human-computer interaction community.
In its nine chapters, this book provides an overview of the state-of-the-art and best practice in several sub-fields of evaluation of text and speech systems and components. The evaluation aspects covered include speech and speaker recognition, speech synthesis, animated talking agents, part-of-speech tagging, parsing, and natural language software like machine translation, information retrieval, question answering, spoken dialogue systems, data resources, and annotation schemes. With its broad coverage and original contributions this book is unique in the field of evaluation of speech and language technology. This book is of particular relevance to advanced undergraduate students, PhD students, academic and industrial researchers, and practitioners.
The eleven chapters of this book represent an original contribution to the field of multimodal spoken dialogue systems. The material includes highly relevant topics, such as dialogue modeling in research systems versus industrial systems. The book contains detailed application studies, including speech-controlled MP3 players in a car environment, negotiation training with a virtual human in a military context and the application of spoken dialogue to question-answering systems.
The IEEE Tutorial and Research Workshop on Perception and Interactive Technologies for Multimodal Dialogue Systems (PIT 2008) is the continuation of a successful series of workshops that started with an ISCA Tutorial and Research Workshop on Multimodal Dialogue Systems in 1999. This workshop was followed by a second one focusing on mobile dialogue systems (IDS 2002), a third one exploring the role of affect in dialogue (ADS 2004), and a fourth one focusing on perceptive interfaces (PIT 2006). Like its predecessors, PIT 2008 took place at Kloster Irsee in Bavaria. Due to the increasing interest in perceptive interfaces, we decided to hold a follow-up workshop on the themes discussed at PIT 2006, but encouraged above all papers with a focus on perception in multimodal dialogue systems. PIT 2008 received 37 papers covering the following topics: (1) multimodal and spoken dialogue systems, (2) classification of dialogue acts and sound, (3) recognition of eye gaze, head poses, mimics and speech as well as combinations of modalities, (4) vocal emotion recognition, (5) human-like and social dialogue systems, and (6) evaluation methods for multimodal dialogue systems. Noteworthy was the strong participation from industry at PIT 2008. Indeed, 17 of the accepted 37 papers come from industrial organizations or were written in collaboration with them. We would like to thank all authors for the effort they made with their submissions, and the Program Committee - nearly 50 distinguished researchers from industry and academia - who worked very hard to meet tight deadlines and selected the best contributions for the final program. Special thanks goes to our invited speaker, Anton Batliner from Friedrich-Alexander-Universität Erlangen-Nürnberg.
This book constitutes the refereed proceedings of the International Tutorial and Research Workshop on Perception and Interactive Technologies, PIT 2006, held at Kloster Irsee, Germany, June 2006. The book presents 16 revised full papers together with 4 revised poster papers and 6 system demonstration papers, organized in topical sections on head pose and eye gaze tracking, modeling and simulation of perception, integrating information from multiple channels, and more.
The ongoing migration of computing and information access from stationary environments to mobile computing devices for eventual use in mobile environments, such as Personal Digital Assistants (PDAs), tablet PCs, next-generation mobile phones, and in-car driver assistance systems, poses critical challenges for natural human-computer interaction. Spoken dialogue is a key factor in ensuring natural and user-friendly interaction with such devices, which are meant not only for computer specialists, but also for everyday users. Speech supports hands-free and eyes-free operation, and becomes a key alternative interaction mode in mobile environments, e.g. in cars where driver distraction by manually operated devices may be a significant problem. On the other hand, the use of mobile devices in public places may make alternative modalities, possibly in combination with speech, such as graphics output and gesture input, preferable due to e.g. privacy issues. Researchers' interest is progressively turning to the integration of speech with other modalities such as gesture input and graphics output, partly to accommodate more efficient interaction and partly to accommodate different user preferences. This book combines overview chapters of key areas in spoken multimodal dialogue (systems and components, architectures, and evaluation) with chapters focused on particular applications or problems in the field, and focuses on the influence of the environment when building and evaluating an application. Audience: computer scientists, engineers, and others who work in the area of spoken multimodal dialogue systems in academia and in industry; graduate students and Ph.D. students specialising in spoken multimodal dialogue systems in general, or focusing on issues in these systems in mobile environments in particular.
Human conversational partners are able, at least to a certain extent, to detect the speaker's or listener's emotional state and may attempt to respond to it accordingly. When instead one of the interlocutors is a computer a number of questions arise, such as the following: To what extent are dialogue systems able to simulate such behaviors? Can we learn the mechanisms of emotional behaviors from observing and analyzing the behavior of human speakers? How can emotions be automatically recognized from a user's mimics, gestures and speech? What possibilities does a dialogue system have to express emotions itself? And, very importantly, would emotional system behavior be desirable at all? Given the state of ongoing research into incorporating emotions in dialogue systems we found it timely to organize a Tutorial and Research Workshop on Affective Dialogue Systems (ADS 2004) at Kloster Irsee in Germany during June 14-16, 2004. After two successful ISCA Tutorial and Research Workshops on Multimodal Dialogue Systems at the same location in 1999 and 2002, we felt that a workshop focusing on the role of affect in dialogue would be a valuable continuation of the workshop series. Due to its interdisciplinary nature, the workshop attracted submissions from researchers with very different backgrounds and from many different research areas, working on, for example, dialogue processing, speech recognition, speech synthesis, embodied conversational agents, computer graphics, animation, user modelling, tutoring systems, cognitive systems, and human-computer interaction.
Designing Interactive Speech Systems describes the design and implementation of spoken language dialogue within the context of SLDS (spoken language dialogue systems) development. Using an applications-oriented SLDS developed through the Danish Dialogue project, the authors describe the complete process involved in designing such a system, and in doing so present several innovative practical tools, such as dialogue design guidelines, in-depth evaluation methodologies, and speech functionality analysis. The approach taken is firmly applications-oriented, describing the results of research applicable to industry and showing how the development of advanced applications drives research rather than the other way around. All those working on the research and development of spoken language services, especially in the area of telecommunications, will benefit from reading this book.
The main topic of this volume is natural multimodal interaction. The book is unique in that it brings together a great many contributions regarding aspects of natural and multimodal interaction written by many of the important actors in the field. It is a timely update of Multimodality in Language and Speech Systems by Bjorn Granstrom, David House and Inger Karlsson and, at the same time, it presents a much broader overview of the field. Its 17 chapters provide a broad and detailed impression of where the fairly new field of natural and multimodal interactivity engineering stands today. Topics addressed include talking heads, conversational agents, tutoring systems, multimodal communication, machine learning, architectures for multimodal dialogue systems, systems evaluation, and data annotation.