Machine Conversations is a collection of some of the best research available in the practical arts of machine conversation. The book describes various attempts to create practical and flexible machine conversation - ways of talking to computers in an unrestricted version of English or some other language. While this book employs and advances the theory of dialogue and its linguistic underpinnings, the emphasis is on practice, both in university research laboratories and in company research and development. Since the focus is on the task and on performance, this book presents some of the first-rate work taking place in industry, quite apart from the academic tradition. It also reveals striking and relevant facts about the tone of machine conversations and closely evaluates what users require. Machine Conversations is an excellent reference for researchers interested in computational linguistics, cognitive science, natural language processing, artificial intelligence, human-computer interfaces and machine learning.
People engage in discourse every day - from writing letters and presenting papers to simple discussions. Yet discourse is a complex and fascinating phenomenon that is not well understood. This volume stems from a multidisciplinary workshop in which eminent scholars in linguistics, sociology and computational linguistics presented various aspects of discourse. The topics treated range from multi-party conversational interactions to deconstructing text from various perspectives, considering topic-focus development and discourse structure, and an empirical study of discourse segmentation. The chapters not only describe each author's favorite burning issue in discourse but also provide a fascinating view of the research methodology and style of argumentation in each field.
Reasoning for Information Seeking and Planning Dialogues provides a logic-based reasoning component for spoken language dialogue systems. This component, called the Problem Assistant, is responsible for processing constraints on a possible solution obtained from various sources, namely the user and the system's domain-specific information. The authors also present findings on the implementation of a dialogue management interface to the Problem Assistant. The dialogue system supports simple mixed-initiative planning interactions in the TRAINS domain, which is nonetheless a relatively complex domain involving a number of logical constraints and relations that form the basis for the collaborative problem-solving behavior driving the dialogue.
Contents:
Authors and Participants
I. Pragmatic Aspects
1. Some pragmatic decision criteria in generation (Eduard H. Hovy)
2. How to appear to be conforming to the 'maxims' even if you prefer to violate them (Anthony Jameson)
3. Contextual effects on responses to misconceptions (Kathleen F. McCoy)
4. Generating understandable explanatory sentences (Domenico Parisi & Donatella Ferrante)
5. Toward a plan-based theory of referring actions (Douglas E. Appelt)
6. Generating referring expressions and pointing gestures (Norbert Reithinger)
II. Generation of Connected Discourse
7. Rhetorical Structure Theory: description and construction of text structures (William C. Mann & Sandra A. Thompson)
8. Discourse strategies for describing complex physical objects (Cecile L. Paris & Kathleen R. McKeown)
9. Strategies for generating coherent descriptions of object movements in street scenes (Hans-Joachim Novak)
10. The automated news agency: SEMTEX - a text generator for German (Dietmar Rösner)
11. A connectionist approach to the generation of abstracts (Koiti Hasida, Shun Ishizaki & Hitoshi Isahara)
III. Generator Design
12. Factors contributing to efficiency in natural language generation (David D. McDonald, Marie M. Vaughan & James D. Pustejovsky)
13. Reviewing as a component of the text generation process (Masoud Yazdani)
14. A French and English syntactic component for generation (Laurence Danlos)
15. KING: a knowledge-intensive natural language generator (Paul S. Jacobs)
IV. Grammars and Grammatical Formalisms
16. The relevance of Tree Adjoining Grammar to generation (Aravind K. Joshi)
This Pivot reconsiders the controversial literary figure of Lin Shu and the debate surrounding his place in the history of Modern Chinese Literature. Although recent Chinese mainland research has recognized some of the innovations introduced by Lin Shu, he has often been labeled a 'rightist reformer' in contrast to 'leftist reformers' such as Chen Duxiu and the new wave scholars of the May Fourth Movement. This book provides a well-documented account of his place in the polemics between these two circles ('conservatives' and 'reformers') and offers a more nuanced account of the different literary movements of the time. Notably, it argues that the differences between the two camps lay neither in content nor in politics, but in their methodological approach. Examining how Lin Shu and the 'conservatives' advocated the coexistence of traditional and modern thought, the book provides background to the major changes occurring in the intellectual landscape of Modern China.
In this book, hierarchical structures based on neural networks are investigated for automatic speech recognition. These structures are mainly evaluated on the phoneme recognition task under the hybrid Hidden Markov Model/Artificial Neural Network (HMM/ANN) paradigm. The baseline hierarchical scheme consists of two levels, each of which is based on a Multilayer Perceptron (MLP); the output of the first level is used as an input to the second level. This system can be substantially sped up by removing the redundant information contained in the output of the first level.
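For readers who want a concrete picture of such a scheme, the following is a minimal sketch, not the book's implementation, of a two-level hierarchy in which the first MLP's phoneme posteriors are concatenated with the acoustic features fed to the second MLP; the layer sizes, feature dimensionality and softmax coupling are illustrative assumptions.

```python
import torch
import torch.nn as nn

class HierarchicalMLP(nn.Module):
    """Two-level hierarchy: level 2 sees the acoustic features plus
    the phoneme posteriors estimated by level 1 (illustrative sizes)."""
    def __init__(self, n_features=39, n_phonemes=40, hidden=500):
        super().__init__()
        # First level: acoustic features -> phoneme scores.
        self.level1 = nn.Sequential(
            nn.Linear(n_features, hidden), nn.Sigmoid(),
            nn.Linear(hidden, n_phonemes),
        )
        # Second level: features + level-1 posteriors -> refined scores.
        self.level2 = nn.Sequential(
            nn.Linear(n_features + n_phonemes, hidden), nn.Sigmoid(),
            nn.Linear(hidden, n_phonemes),
        )

    def forward(self, x):
        post1 = torch.softmax(self.level1(x), dim=-1)   # level-1 phoneme posteriors
        # Output of the first level is used as part of the input to the second.
        return self.level2(torch.cat([x, post1], dim=-1))

model = HierarchicalMLP()
frames = torch.randn(8, 39)          # a toy batch of acoustic feature frames
print(model(frames).shape)           # torch.Size([8, 40])
```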
Natural language dialogue is a continuous, unified phenomenon. Speakers use their conversational context to simplify individual utterances through a number of linguistic devices, including ellipsis and definite references. Yet most computational systems for using natural language treat individual utterances as separate entities, and have distinctly separate processes for handling ellipsis, definite references, and other dialogue phenomena. This book, a slightly revised version of the Ph.D. dissertation that I completed in December 1986, describes a different approach. It presents a computational system, Psli3, that uses the uniform framework of a production system architecture to carry out natural language understanding and generation in a well-integrated way. This is demonstrated primarily through intersentential ellipsis resolution, in addition to examples of definite reference resolution and interactive error correction. The system's conversational context arises naturally as the result of the persistence of the internal representations of previous utterances in working memory. Natural language input is interpreted within this framework using a modification of the syntactic technique of chart parsing, extended to include semantics and adapted to the production system architecture. This technique, called semantic chart parsing, provides a graceful way of handling ambiguity within this architecture, and allows separate knowledge sources to interact smoothly across different utterances in a highly integrated fashion. The design of this system demonstrates how flexible and natural user interactions can be carried out using a system with a naturally flexible control structure.
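As background for the parsing technique named above, here is a minimal, purely syntactic chart-parsing sketch (a CKY-style recognizer over a toy grammar in Chomsky normal form). The grammar, lexicon and sentence are illustrative assumptions; the book's semantic chart parsing extends this basic idea with semantics and a production-system control structure.

```python
from itertools import product

grammar = {            # right-hand side -> set of left-hand sides
    ("Det", "N"): {"NP"},
    ("V", "NP"): {"VP"},
    ("NP", "VP"): {"S"},
}
lexicon = {"the": {"Det"}, "dog": {"N"}, "ball": {"N"}, "chased": {"V"}}

def cky(words):
    """Fill a triangular chart with the categories spanning each substring."""
    n = len(words)
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):                       # lexical entries
        chart[i][i + 1] = set(lexicon.get(w, set()))
    for span in range(2, n + 1):                        # longer spans
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):                   # every split point
                for a, b in product(chart[i][k], chart[k][j]):
                    chart[i][j] |= grammar.get((a, b), set())
    return chart[0][n]                                  # categories over the whole input

print(cky("the dog chased the ball".split()))           # -> {'S'}
```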
Speech and Human-Machine Dialog focuses on the dialog management component of a spoken language dialog system. Spoken language dialog systems provide a natural interface between humans and computers. These systems are of special interest for interactive applications, and they integrate several technologies including speech recognition, natural language understanding, dialog management and speech synthesis. Due to the conjunction of several factors over the past few years, humans are significantly changing their behavior vis-a-vis machines. In particular, the use of speech technologies will become normal in the professional domain and in everyday life. The performance of speech recognition components has also significantly improved. This book includes various examples that illustrate the different functionalities of the dialog model in a representative application for train travel information retrieval (train timetables, prices and ticket reservation). Speech and Human-Machine Dialog is designed for a professional audience composed of researchers and practitioners in industry. This book is also suitable as a secondary text for graduate-level students in computer science and engineering.
Although there has been much progress in developing theories, models and systems in the areas of Natural Language Processing (NLP) and Vision Processing (VP), there has heretofore been little progress on integrating these two subareas of Artificial Intelligence (AI). This book contains a set of edited papers addressing theoretical issues and the grounding of representations in NLP and VP from philosophical and psychological points of view. The papers cover site descriptions, such as the reasoning work on space at Leeds, UK, the systems work of the ILS (Illinois, U.S.A.) and the philosophical work on grounding at Torino, Italy; Schank's earlier work on pragmatics and meaning incorporated into hypermedia teaching systems; Wilks' visions on metaphor; experimental data on how people fuse language and vision; and theories and computational models, mainly connectionist, for tackling Searle's Chinese Room Problem and Harnad's Symbol Grounding Problem. The Irish Room is introduced as a mechanism through which integration solves the Chinese Room. The U.S.A., China and the EU are well represented, showing that integration is a truly international issue. There is no doubt that all of this will be necessary for the SuperInformationHighways of the future.
Most books about computational (lexical) semantic lexicons deal with the depth (or content) aspect of lexicons, ignoring the breadth (or coverage) aspect. This book presents a first attempt in the community to address both issues, content and coverage of computational semantic lexicons, in a thorough manner. Moreover, it addresses issues which have not yet been tackled in implemented systems, such as the application time of lexical rules. Lexical rules and lexical underspecification are also contrasted in implemented systems. The main approaches in the field of computational (lexical) semantics are represented in the present book (including WordNet, Cyc, Mikrokosmos and the Generative Lexicon). The book embraces several fields (and subfields) as different as linguistics (theoretical, computational, semantics, pragmatics), psycholinguistics, cognitive science, computer science, artificial intelligence, knowledge representation, statistics and natural language processing. The book also constitutes a very good introduction to the state of the art in computational semantic lexicons of the late 1990s.
This book is the result of a group of researchers from different disciplines asking themselves one question: what does it take to develop a computer interface that listens, talks, and can answer questions in a domain? First, obviously, it takes specialized modules for speech recognition and synthesis, human interaction management (dialogue, input fusion, and multimodal output fusion), basic question understanding, and answer finding. While all these modules are researched as independent subfields, this book describes the development of state-of-the-art modules and their integration into a single, working application capable of answering medical (encyclopedic) questions such as "How long is a person with measles contagious?" or "How can I prevent RSI?" The contributions in this book, which grew out of the IMIX project funded by the Netherlands Organisation for Scientific Research, document the development of this system, but also address more general issues in natural language processing, such as the development of multidimensional dialogue systems, the acquisition of taxonomic knowledge from text, answer fusion, sequence processing for domain-specific entity recognition, and syntactic parsing for question answering. Together, they offer an overview of the most important findings and lessons learned in the scope of the IMIX project, making the book of interest to both academic and commercial developers of human-machine interaction systems in Dutch or any other language. Highlights include: integrating multi-modal input fusion in dialogue management (Van Schooten and Op den Akker), state-of-the-art approaches to the extraction of term variants (Van der Plas, Tiedemann, and Fahmi; Tjong Kim Sang, Hofmann, and De Rijke), and multi-modal answer fusion (two chapters by Van Hooijdonk, Bosma, Krahmer, Maes, Theune, and Marsi). Watch the IMIX movie at www.nwo.nl/imix-film. Like IBM's Watson, the IMIX system described in the book gives naturally phrased responses to naturally posed questions. Where Watson can only generate synthetic speech, the IMIX system also recognizes speech. On the other hand, Watson is able to win a television quiz, while the IMIX system is domain-specific, answering only medical questions. "The Netherlands has always been one of the leaders in the general field of Human Language Technology, and IMIX is no exception. It was a very ambitious program, with a remarkably successful performance leading to interesting results. The teams covered a remarkable amount of territory in the general sphere of multimodal question answering and information delivery, question answering, information extraction and component technologies." Eduard Hovy, USC, USA; Jon Oberlander, University of Edinburgh, Scotland; and Norbert Reithinger, DFKI, Germany
This book provides a novel method for topic detection and classification in social networks. The book addresses several research and technical challenges that are currently being investigated by the research community, ranging from the analysis of relations and communication between members of a community, to the quality, authority, relevance and timeliness of content, traffic prediction based on media consumption and spam detection, to security, privacy and the protection of personal information. Furthermore, the book discusses innovative techniques to address those challenges and provides novel solutions based on information theory, sequence analysis and combinatorics, which are applied to real data obtained from Twitter.
This book introduces a novel approach to intelligent visualizations that adapts the visual variables and data processing to human behavior and given tasks. To this end, a number of new algorithms and methods are introduced to satisfy the human need for information and knowledge and to enable a usable and attractive way of acquiring information. Each method and algorithm is illustrated in a replicable way to enable the reproduction of the entire "SemaVis" system or parts of it. The evaluation described is scientifically designed and was performed with enough participants to validate the benefits of the methods. Besides the new approaches and algorithms introduced, readers will find a comprehensive literature review of Information Visualization and Visual Analytics, semantics and information extraction, and intelligent and adaptive systems. This book is based on an awarded and distinguished doctoral thesis in computer science.
With the advent and increasing popularity of Computer Supported Collaborative Learning (CSCL) and e-learning technologies, the need for automatic assessment and for teacher/tutor support for the two tightly intertwined activities of comprehension of reading materials and collaboration among peers has grown significantly. In this context, a polyphonic model of discourse derived from Bakhtin's work is used as a paradigm for analyzing both general texts and CSCL conversations in a unique framework focused on different facets of textual cohesion. As the specificity of our analysis, the individual learning perspective focuses on the identification of reading strategies and on providing a multi-dimensional textual complexity model, whereas the collaborative learning dimension is centered on the evaluation of participants' involvement, as well as on collaboration assessment. Our approach, based on advanced Natural Language Processing techniques, provides a qualitative estimation of the learning process and enhances understanding as a mediator of learning by providing automated feedback to both learners and teachers or tutors. Its main benefits are its flexibility, extensibility and, at the same time, its specificity for covering multiple stages, starting from reading classroom materials, to discussing specific topics in a collaborative manner, and finishing the feedback loop by verbalizing metacognitive thoughts.
Accompanying continued industrial production and sales of artificial intelligence and expert systems is the risk that difficult and resistant theoretical problems and issues will be ignored. The participants at the Third Tinlap Workshop, whose contributions are contained in Theoretical Issues in Natural Language Processing, remove that risk. They discuss and promote theoretical research on natural language processing, examinations of solutions to current problems, development of new theories, and representations of published literature on the subject. Discussions among these theoreticians in artificial intelligence, logic, psychology, philosophy, and linguistics draw a comprehensive, up-to-date picture of the natural language processing field.
Ever since Chomsky laid the framework for a mathematically formal theory of syntax, two classes of formal models have held wide appeal. The finite state model offered simplicity. At the opposite extreme, numerous very powerful models, most notably transformational grammar, offered generality. As soon as this mathematical framework was laid, devastating arguments were given by Chomsky and others indicating that the finite state model was woefully inadequate for the syntax of natural language. In response, the completely general transformational grammar model was advanced as a suitable vehicle for capturing the description of natural language syntax. While transformational grammar seems likely to be adequate to the task, many researchers have advanced the argument that it is "too adequate." A now classic result of Peters and Ritchie shows that the model of transformational grammar given in Chomsky's Aspects is powerful indeed: so powerful as to allow it to describe any recursively enumerable set. In other words, it can describe the syntax of any language that is describable by any algorithmic process whatsoever. This situation led many researchers to reassess the claim that natural languages are included in the class of transformational grammar languages. The conclusion that many reached is that the claim is void of content, since, in their view, it says little more than that natural language syntax is doable algorithmically and, in the framework of modern linguistics, psychology or neuroscience, that is axiomatic.
This book assesses the place of logic, mathematics, and computer science in the present-day, interdisciplinary area of computational linguistics. Computational linguistics studies natural language in its various manifestations from a computational point of view, both on the theoretical level (modeling grammar modules dealing with natural language form and meaning, and the relation between the two) and on the practical level (developing applications for language and speech technology). The book is a collection of chapters presenting new and future research. It focuses mainly on logical approaches to the computational processing of natural language and on the applicability of methods and techniques from the study of formal languages, programming, and other specification languages. It also presents work from other approaches to linguistics, especially where they inspire new work and approaches.
This book focuses on speech signal phenomena, presenting a robustification of the usual speech generation models with regard to the presumed types of excitation signals, which is equivalent to the introduction of a class of nonlinear models and the corresponding criterion functions for parameter estimation. Compared to the general class of nonlinear models, such as various neural networks, these models possess good properties of controlled complexity, the option of working in "online" mode, as well as a low information volume for efficient speech encoding and transmission. Providing comprehensive insights, the book is based on the authors' research, which has already been published, supplemented by additional texts discussing general considerations of speech modeling, linear predictive analysis and robust parameter estimation.
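As a reference point for the linear predictive analysis mentioned above, here is a minimal sketch of ordinary (least-squares) linear prediction via the autocorrelation method, the classical baseline that robust parameter estimation generalizes. The frame construction, model order and use of numpy are illustrative assumptions, not the book's implementation.

```python
import numpy as np

def lpc(frame, order=10):
    """Estimate linear prediction coefficients a[1..order] for one speech frame."""
    # Autocorrelation of the (windowed) frame; r[0] is the zero-lag term.
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    # Solve the Toeplitz normal (Yule-Walker) equations R a = r.
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])
    return a  # prediction model: s[n] ~= sum_k a[k] * s[n - k]

# Usage on a synthetic, Hamming-windowed frame.
rng = np.random.default_rng(0)
frame = np.hamming(240) * rng.standard_normal(240)
print(lpc(frame, order=10))
```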
Transfer learning is one of the most important technologies in the era of artificial intelligence and deep learning. It seeks to leverage existing knowledge by transferring it to another, new domain. Over the years, a number of relevant topics have attracted the interest of the research and application community: transfer learning, pre-training and fine-tuning, domain adaptation, domain generalization, and meta-learning. This book offers a comprehensive tutorial overview of transfer learning, introducing new researchers in this area to both classic and more recent algorithms. Most importantly, it takes a "student's" perspective in introducing all the concepts, theories, algorithms, and applications, allowing readers to quickly and easily enter this area. Accompanying the book, detailed code implementations are provided to better illustrate the core ideas of several important algorithms and to present good examples for practice.
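To make the pre-train-then-fine-tune recipe mentioned above concrete, the following is a minimal sketch (not the book's code) that reuses a torchvision ResNet-18 pre-trained on ImageNet as the source of "existing knowledge", freezes it, and fine-tunes a new classification head for a 10-class target task; the data, layer choices and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# 1. Start from a model pre-trained on a source domain
#    (downloads ImageNet weights on first use).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# 2. Freeze the transferred parameters ...
for p in backbone.parameters():
    p.requires_grad = False

# 3. ... and replace the task head for the new target domain (10 classes here).
backbone.fc = nn.Linear(backbone.fc.in_features, 10)

# 4. Fine-tune only the new head on target-domain data.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(4, 3, 224, 224)       # a toy batch of target-domain images
y = torch.randint(0, 10, (4,))
loss = criterion(backbone(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```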
This book contains a comprehensive treatment of advanced LaTeX features. The focus is on the development of high-quality documents and presentations by revealing powerful insights into the LaTeX language. The well-established advantages of the LaTeX typesetting system are the preparation and publication of platform-independent, high-quality documents and the automatic numbering and cross-referencing of illustrations and references. These capabilities can be extended beyond the typical applications by creating highly dynamic electronic documents. This is commonly done in connection with the Portable Document Format (PDF), as well as other programming tools which allow the development of extremely flexible electronic documents.
A reconsideration of the semantics of a lexical category, prepositions, that has recently witnessed a plethora of investigations. The volume approaches the issue first from a more general perspective, namely the extent to which insights into the meaning of prepositions give clues to the semantic struc
A selection of papers presented at the international conference 'Applied Logic: Logic at Work', held in Amsterdam in December 1992. Nowadays, the term 'applied logic' has a very wide meaning, as numerous applications of logical methods in computer science, formal linguistics and other fields testify. Such applications are by no means restricted to the use of known logical techniques: at its best, applied logic involves a back-and-forth dialogue between logical theory and the problem domain. The papers focus on the application of logic to the study of natural language, in syntax, semantics and pragmatics, and the effect of these studies on the development of logic. In the last decade, the dynamic nature of natural language has been the most interesting challenge for logicians. Dynamic semantics is here applied to new topics, the dynamic approach is extended to syntax, and several methodological issues in dynamic semantics are systematically investigated. Other methodological issues in the formal studies of natural language are discussed, such as the need for types, modal operators and other logical operators in the formal framework. Further articles address the scope of these methodological issues from other perspectives ranging from cognition to computation. The volume presents papers that are interesting for graduate students and researchers in the field of logic, philosophy of language, formal semantics and pragmatics, and computational linguistics.
You may like...
Proceedings of the Fourth International… by Mohan S., S. Sureshkumar (Hardcover, R4,481)
Handbook of Research on Recent… by Siddhartha Bhattacharyya, Nibaran Das, … (Hardcover, R9,795)
Foundation Models for Natural Language… by Gerhard Paaß, Sven Giesselbach (Hardcover)
Python Programming for Computations… by Computer Language (Hardcover)
Eyetracking and Applied Linguistics by Silvia Hansen-Schirra, Sambor Grucza (Hardcover, R900)
Annotation, Exploitation and Evaluation… by Silvia Hansen-Schirra, Sambor Grucza (Hardcover, R991)