This volume is the report of a conference on natural language processing by computer. It contains long and short contributions from leading researchers in the German-speaking countries as well as the USA. All subfields of language processing, such as morphology, parsing, semantic analysis and the processing of spoken language, are covered. The aim of the conference was a presentation of the subject that places the processing of the German language at its centre. The contributions therefore discuss systems developed specifically for German, as well as adaptations for German of formalisms that have proven themselves for English. The book thus provides, for the first time, a compact compilation of the latest research results from this particular perspective.
This series will include monographs and collections of studies devoted to the investigation and exploration of knowledge, information and data-processing systems of all kinds, no matter whether human, (other) animal or machine. Its scope is intended to span the full range of interests from classical problems in the philosophy of mind and philosophical psychology through issues in cognitive psychology and sociobiology (concerning the mental capabilities of other species) to ideas related to artificial intelligence and computer science. While primary emphasis will be placed upon theoretical, conceptual and epistemological aspects of these problems and domains, empirical, experimental and methodological studies will also appear from time to time. Among the most challenging and difficult projects within the scope of artificial intelligence is the development and implementation of computer programs suitable for processing natural language. Our purpose in compiling the present volume has been to contribute to the foundations of this enterprise by bringing together classic papers devoted to crucial problems involved in understanding natural language, which range from issues of formal syntax and logical form to those of possible-worlds and situation semantics. The book begins with a comprehensive introduction composed by Jack Kulas, the senior editor of this work, which provides a systematic orientation to this complex field, and ends with a selected bibliography intended to promote further research. If our efforts assist others in dealing with these problems, they will have been worthwhile. J. H. F.
This book introduces a theory, Naive Semantics (NS), a theory of the knowledge underlying natural language understanding. The basic assumption of NS is that knowing what a word means is not very different from knowing anything else, so that there is no difference in form of cognitive representation between lexical semantics and encyclopedic knowledge. NS represents word meanings as commonsense knowledge, and builds no special representation language (other than elements of first-order logic). The idea of teaching computers commonsense knowledge originated with McCarthy and Hayes (1969), and has been extended by a number of researchers (Hobbs and Moore, 1985; Lenat et al., 1986). Commonsense knowledge is a set of naive beliefs, at times vague and inaccurate, about the way the world is structured. Traditionally, word meanings have been viewed as criterial, as giving truth conditions for membership in the classes words name. The theory of NS, in identifying word meanings with commonsense knowledge, sees word meanings as typical descriptions of classes of objects, rather than as criterial descriptions. Therefore, reasoning with NS representations is probabilistic rather than monotonic. This book is divided into two parts. Part I elaborates the theory of Naive Semantics. Chapter 1 illustrates and justifies the theory. Chapter 2 details the representation of nouns in the theory, and Chapter 4 that of verbs, originally published as "Commonsense Reasoning with Verbs" (McDowell and Dahlgren, 1987). Chapter 3 describes kind types, which are naive constraints on noun representations.
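The contrast the blurb draws between criterial and typical descriptions can be made concrete with a small sketch. The Python fragment below is purely illustrative and not taken from the book: the class, the features and the weights are invented, and it only shows why scoring against typical (default) features behaves probabilistically rather than as a yes/no membership test.

```python
# Purely illustrative sketch; the class "bird", its features and weights are
# invented for this example and are not the book's representations.

# A criterial view: every listed feature must hold for class membership.
CRITERIAL_BIRD = {"has_feathers", "lays_eggs"}

def is_bird_criterial(features: set) -> bool:
    return CRITERIAL_BIRD.issubset(features)

# A typical (commonsense) description: features hold by default with some
# strength, so reasoning over them is defeasible rather than monotonic.
TYPICAL_BIRD = {"has_feathers": 0.99, "flies": 0.9, "sings": 0.6}

def bird_typicality(features: set) -> float:
    """Score how typical an individual is of the class instead of a yes/no test."""
    return sum(w for f, w in TYPICAL_BIRD.items() if f in features) / sum(TYPICAL_BIRD.values())

# An individual that does not fly still scores as a fairly typical bird.
print(is_bird_criterial({"has_feathers", "sings"}))          # False
print(round(bird_typicality({"has_feathers", "sings"}), 2))  # 0.64
```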
Ever since Chomsky laid the framework for a mathematically formal theory of syntax, two classes of formal models have held wide appeal. The finite state model offered simplicity. At the opposite extreme, numerous very powerful models, most notably transformational grammar, offered generality. As soon as this mathematical framework was laid, devastating arguments were given by Chomsky and others indicating that the finite state model was woefully inadequate for the syntax of natural language. In response, the completely general transformational grammar model was advanced as a suitable vehicle for capturing the description of natural language syntax. While transformational grammar seems likely to be adequate to the task, many researchers have advanced the argument that it is "too adequate." A now classic result of Peters and Ritchie shows that the model of transformational grammar given in Chomsky's Aspects is powerful indeed: so powerful as to allow it to describe any recursively enumerable set. In other words, it can describe the syntax of any language that is describable by any algorithmic process whatsoever. This situation led many researchers to reassess the claim that natural languages are included in the class of transformational grammar languages. The conclusion that many reached is that the claim is void of content, since, in their view, it says little more than that natural language syntax is doable algorithmically and, in the framework of modern linguistics, psychology or neuroscience, that is axiomatic.
Natural language dialogue is a continuous, unified phenomenon. Speakers use their conversational context to simplify individual utterances through a number of linguistic devices, including ellipsis and definite references. Yet most computational systems for using natural language treat individual utterances as separate entities, and have distinctly separate processes for handling ellipsis, definite references, and other dialogue phenomena. This book, a slightly revised version of the Ph.D. dissertation that I completed in December 1986, describes a different approach. It presents a computational system, Psli3, that uses the uniform framework of a production system architecture to carry out natural language understanding and generation in a well-integrated way. This is demonstrated primarily through intersentential ellipsis resolution, in addition to examples of definite reference resolution and interactive error correction. The system's conversational context arises naturally as the result of the persistence of the internal representations of previous utterances in working memory. Natural language input is interpreted within this framework using a modification of the syntactic technique of chart parsing, extended to include semantics, and adapted to the production system architecture. This technique, called semantic chart parsing, provides a graceful way of handling ambiguity within this architecture, and allows separate knowledge sources to interact smoothly across different utterances in a highly integrated fashion. The design of this system demonstrates how flexible and natural user interactions can be carried out using a system with a naturally flexible control structure.
Authors and Participants
I Pragmatic Aspects
1. Some pragmatic decision criteria in generation (Eduard H. Hovy)
2. How to appear to be conforming to the 'maxims' even if you prefer to violate them (Anthony Jameson)
3. Contextual effects on responses to misconceptions (Kathleen F. McCoy)
4. Generating understandable explanatory sentences (Domenico Parisi & Donatella Ferrante)
5. Toward a plan-based theory of referring actions (Douglas E. Appelt)
6. Generating referring expressions and pointing gestures (Norbert Reithinger)
II Generation of Connected Discourse
7. Rhetorical Structure Theory: description and construction of text structures (William C. Mann & Sandra A. Thompson)
8. Discourse strategies for describing complex physical objects (Cecile L. Paris & Kathleen R. McKeown)
9. Strategies for generating coherent descriptions of object movements in street scenes (Hans-Joachim Novak)
10. The automated news agency: SEMTEX - a text generator for German (Dietmar Rösner)
11. A connectionist approach to the generation of abstracts (Koiti Hasida, Shun Ishizaki & Hitoshi Isahara)
III Generator Design
12. Factors contributing to efficiency in natural language generation (David D. McDonald, Marie M. Vaughan & James D. Pustejovsky)
13. Reviewing as a component of the text generation process (Masoud Yazdani)
14. A French and English syntactic component for generation (Laurence Danlos)
15. KING: a knowledge-intensive natural language generator (Paul S. Jacobs)
IV Grammars and Grammatical Formalisms
16. The relevance of Tree Adjoining Grammar to generation (Aravind K. Joshi)
The advent of computer aided design and the proliferation of computer aided design tools have been instrumental in furthering the state of the art in integrated circuitry. Continuing this progress, however, demands an emphasis on creating user-friendly environments that facilitate the interaction between the designer and the CAD tool. The realization of this fact has prompted investigations into the appropriateness for CAD of a number of user-interface technologies. One type of interface that has hitherto not been considered is the natural language interface. It is our contention that natural language interfaces could solve many of the problems posed by the increasing number and sophistication of CAD tools. This thesis represents the first step in a research effort directed towards the eventual development of a natural language interface for the domain of computer aided design. The breadth and complexity of the CAD domain renders the task of developing a natural language interface for the complete domain beyond the scope of a single doctoral thesis. Hence, we have initially focused on a sub-domain of CAD. Specifically, we have developed a natural language interface, named Cleopatra, for circuit-simulation post-processing. In other words, with Cleopatra a circuit designer can extract and manipulate, in English, values from the output of a circuit simulator (currently SPICE) without manually having to go through the output files produced by the simulator.
Originally published in 1992, when connectionist natural language processing (CNLP) was a new and burgeoning research area, this book represented a timely assessment of the state of the art in the field. It includes contributions from some of the best known researchers in CNLP and covers a wide range of topics. The book comprises four main sections dealing with connectionist approaches to semantics, syntax, the debate on representational adequacy, and connectionist models of psycholinguistic processes. The semantics and syntax sections deal with a variety of approaches to issues in these traditional linguistic domains, covering the spectrum from pure connectionist approaches to hybrid models employing a mixture of connectionist and classical AI techniques. The debate on the fundamental suitability of connectionist architectures for dealing with natural language processing is the focus of the section on representational adequacy. The chapters in this section represent a range of positions on the issue, from the view that connectionist models are intrinsically unsuitable for all but the associationistic aspects of natural language, to the other extreme which holds that the classical conception of representation can be dispensed with altogether. The final section of the book focuses on the application of connectionist models to the study of psycholinguistic processes. This section is perhaps the most varied, covering topics from speech perception and speech production, to attentional deficits in reading. An introduction is provided at the beginning of each section which highlights the main issues relating to the section topic and puts the constituent chapters into a wider context.
This book is for developers who are looking for an overview of basic concepts in Natural Language Processing. It casts a wide net of techniques to help developers who have a range of technical backgrounds. Numerous code samples and listings are included to support myriad topics. The first chapter shows you various details of managing data that are relevant for NLP. The next pair of chapters contain NLP concepts, followed by another pair of chapters with Python code samples to illustrate those NLP concepts. Chapter 6 explores applications, e.g., sentiment analysis, recommender systems, COVID-19 analysis, spam detection, and a short discussion regarding chatbots. The final chapter presents the Transformer architecture, BERT-based models, and the GPT family of models, all of which were developed during the past three years and are considered SOTA ("state of the art"). The appendices contain introductory material (including Python code samples) on regular expressions and probability/statistical concepts. Companion files with source code and figures are included. FEATURES: covers extensive topics related to natural language processing; includes separate appendices on regular expressions and probability/statistics; features companion files with source code and figures from the book.
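As a flavour of the kind of snippet the blurb describes (Python plus regular expressions applied to an NLP task), here is a minimal sketch; it is invented for this page and is not drawn from the book's companion files.

```python
# Minimal illustrative sketch, not from the book's companion files:
# regex-based tokenization followed by a word-frequency count, a common
# first step for tasks such as sentiment analysis or spam detection.
import re
from collections import Counter

text = "NLP helps software read text. Reading text well is hard."

# Lowercase the text and pull out word-like tokens with a regular expression.
tokens = re.findall(r"[a-z']+", text.lower())

# Count token frequencies.
freq = Counter(tokens)
print(freq.most_common(3))  # [('text', 2), ('nlp', 1), ('helps', 1)]
```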
Parsing efficiency is crucial when building practical natural language systems. This is especially the case for interactive systems such as natural language database access, interfaces to expert systems and interactive machine translation. Despite its importance, parsing efficiency has received little attention in the area of natural language processing. In the areas of compiler design and theoretical computer science, on the other hand, parsing algorithms have been evaluated primarily in terms of theoretical worst-case analysis (e.g., O(n³)), and very few practical comparisons have been made. This book introduces a context-free parsing algorithm that parses natural language more efficiently than any other existing parsing algorithm in practice. Its feasibility for use in practical systems is being proven in its application to a Japanese language interface at Carnegie Group Inc., and to the continuous speech recognition project at Carnegie-Mellon University. This work was done while I was pursuing a Ph.D. degree at Carnegie-Mellon University. My advisers, Herb Simon and Jaime Carbonell, deserve many thanks for their unfailing support, advice and encouragement during my graduate studies. I would like to thank Phil Hayes and Ralph Grishman for their helpful comments and criticism that in many ways improved the quality of this book. I wish also to thank Steven Brooks for insightful comments on theoretical aspects of the book (chapter 4, appendices A, B and C), and Rich Thomason for improving the linguistic part of the book (the very beginning of section 1.1).
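For readers who want to see what a baseline context-free parser looks like, here is a small chart-parsing (CKY) recognizer. It is emphatically not the more efficient algorithm this book introduces; it is only a textbook-style sketch, with an invented toy grammar, that makes the O(n³) worst-case behaviour mentioned above concrete (three nested loops over the input length).

```python
# Illustrative CKY recognizer for a grammar in Chomsky normal form.
# Not the book's algorithm; grammar and sentence are invented for the example.
from itertools import product

# Binary rules and lexical (unary) rules in Chomsky normal form.
BINARY = {("NP", "VP"): "S", ("Det", "N"): "NP", ("V", "NP"): "VP"}
LEXICAL = {"the": {"Det"}, "dog": {"N"}, "cat": {"N"}, "saw": {"V"}}

def cky_recognize(words):
    n = len(words)
    # chart[i][j] holds the nonterminals that span words[i:j].
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] = set(LEXICAL.get(w, set()))
    for span in range(2, n + 1):          # span widths 2..n
        for i in range(0, n - span + 1):
            j = i + span
            for k in range(i + 1, j):     # split point
                for b, c in product(chart[i][k], chart[k][j]):
                    if (b, c) in BINARY:
                        chart[i][j].add(BINARY[(b, c)])
    return "S" in chart[0][n]

print(cky_recognize("the dog saw the cat".split()))  # True
```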
'A must-read' New Scientist 'Fascinating' Greta Thunberg 'Enthralling' George Monbiot 'Brilliant' Philip Hoare A thrilling investigation into the pioneering world of animal communication, where big data and artificial intelligence are changing our relationship with animals forever In 2015, wildlife filmmaker Tom Mustill was whale watching when a humpback breached onto his kayak and nearly killed him. After a video clip of the event went viral, Tom found himself inundated with theories about what happened. He became obsessed with trying to find out what the whale had been thinking and sometimes wished he could just ask it. In the process of making a film about his experience, he discovered that might not be such a crazy idea. This is a story about the pioneers in a new age of discovery, whose cutting-edge developments in natural science and technology are taking us to the brink of decoding animal communication - and whales, with their giant mammalian brains and sophisticated vocalisations, offer one of the most realistic opportunities for us to do so. Using 'underwater ears,' robotic fish, big data and machine intelligence, leading scientists and tech-entrepreneurs across the world are working to turn the fantasy of Dr Dolittle into a reality, upending much of what we know about these mysterious creatures. But what would it mean if we were to make contact? And with climate change threatening ever more species with extinction, would doing so alter our approach to the natural world? Enormously original and hugely entertaining, How to Speak Whale is an unforgettable look at how close we truly are to communicating with another species - and how doing so might change our world beyond recognition.
Description: Modern NLP techniques based on machine learning radically improve the ability of software to recognize patterns, use context to infer meaning, and accurately discern intent from poorly-structured text. In Natural Language Processing in Action, readers explore carefully chosen examples and expand their machine's knowledge which they can then apply to a range of challenges. Key Features: easy to follow; clear examples; hands-on guide. Audience: A basic understanding of machine learning and some experience with a modern programming language such as Python, Java, C++, or JavaScript will be helpful. About the technology: Natural Language Processing (NLP) is the discipline of teaching computers to read more like people, and readers can see examples of it in everything from chatbots to the speech-recognition software on their phone. About the authors: Hobson Lane has more than 15 years of experience building autonomous systems that make important decisions on behalf of humans. Hannes Hapke is an Electrical Engineer turned Data Scientist with experience in deep learning. Cole Howard is a carpenter and writer turned Deep Learning expert.
This book contains a comprehensive treatment of advanced LaTeX features. The focus is on the development of high quality documents and presentations, by revealing powerful insights into the LaTeX language. The well-established advantages of the typesetting system LaTeX are the preparation and publication of platform-independent high-quality documents and automatic numbering and cross-referencing of illustrations or references. These can be extended beyond the typical applications, by creating highly dynamic electronic documents. This is commonly performed in connection with the portable document format (PDF), as well as other programming tools which allow the development of extremely flexible electronic documents.
This volume presents papers in English and German looking at the area of language processing and speech technology. The following subjects were discussed: modelling, cognition, perception and behaviour; language and speech systems; multilingual research and developments; prosody; syntax, morphology, lexicon; semantics; formalisms and parsing; and tools for development and teaching.
Edited in collaboration with FoLLI, the Association of Logic, Language and Information, this book constitutes the refereed proceedings of the 28th Workshop on Logic, Language, Information and Computation, WoLLIC 2022, held in Iasi, Romania, in September 2022. The 25 full papers presented, together with 8 abstracts, 5 invited talks and 3 tutorials, were carefully reviewed and selected from 46 submissions. The conference aims to foster interdisciplinary research in pure and applied logic.
This book constitutes the proceedings of the 16th International Conference on Theoretical Aspects of Software Engineering, TASE 2022, held in Cluj-Napoca, Romania, in July 2022. The 21 full regular papers presented together with 5 short papers in this book were carefully reviewed and selected from 71 submissions. The papers cover various fields in software engineering and the latest developments in formal and theoretical software engineering methods and techniques.
This open access book constitutes the proceedings of the 25th International Conference on Fundamental Approaches to Software Engineering, FASE 2022, which was held during April 4-5, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 17 regular papers presented in this volume were carefully reviewed and selected from 64 submissions. The proceedings also contain 3 contributions from the Test-Comp Competition. The papers deal with the foundations on which software engineering is built, including topics like software engineering as an engineering discipline, requirements engineering, software architectures, software quality, model-driven development, software processes, software evolution, AI-based software engineering, and the specification, design, and implementation of particular classes of systems, such as (self-)adaptive, collaborative, AI, embedded, distributed, mobile, pervasive, cyber-physical, or service-oriented applications.
This book covers deep-learning-based approaches for sentiment analysis, a relatively new, but fast-growing research area, which has significantly changed in the past few years. The book presents a collection of state-of-the-art approaches, focusing on the best-performing, cutting-edge solutions for the most common and difficult challenges faced in sentiment analysis research. Providing detailed explanations of the methodologies, the book is a valuable resource for researchers as well as newcomers to the field.
From tech giants to plucky startups, the world is full of companies boasting that they are on their way to replacing human interpreters, but are they right? Interpreters vs Machines offers a solid introduction to recent theory and research on human and machine interpreting, and then invites the reader to explore the future of interpreting. With a foreword by Dr Henry Liu, the 13th International Federation of Translators (FIT) President, and written by consultant interpreter and researcher Jonathan Downie, this book offers a unique combination of research and practical insight into the field of interpreting. Written in an innovative, accessible style with humorous touches and real-life case studies, this book is structured around the metaphor of playing and winning a computer game. It takes interpreters of all experience levels on a journey to better understand their own work, learn how computers attempt to interpret and explore possible futures for human interpreters. With five levels and split into 14 chapters, Interpreters vs Machines is key reading for all professional interpreters as well as students and researchers of Interpreting and Translation Studies, and those with an interest in machine interpreting.