The lexicon is now a major focus of research in computational linguistics and natural language processing (NLP), as more linguistic theories concentrate on the lexicon and as the acquisition of an adequate vocabulary has become the chief bottleneck in developing practical NLP systems. This collection describes techniques of lexical representation within a unification-based framework and their linguistic application, concentrating on the issue of structuring the lexicon using inheritance and defaults. Topics covered include typed feature structures, default unification, lexical rules, multiple inheritance and non-monotonic reasoning. The contributions describe both theoretical results and implemented languages and systems, including DATR, the Stuttgart TFS and ISSCO's ELU. This book arose out of a workshop on default inheritance in the lexicon organized as a part of the Esprit ACQUILEX project on computational lexicography. Besides the contributed papers mentioned above, it contains a detailed description of the ACQUILEX lexical knowledge base (LKB) system and its use in the representation of lexicons extracted semi-automatically from machine-readable dictionaries.
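To make the central idea concrete, here is a minimal Python sketch of default inheritance in a lexical hierarchy. The class names and features are invented for illustration; real systems such as DATR or the ACQUILEX LKB use far richer typed feature structures and default unification.

```python
# A minimal sketch of default inheritance in a lexicon (illustrative only).

def inherit(*layers):
    """Merge feature dictionaries; later (more specific) layers
    override the defaults supplied by earlier (more general) ones."""
    result = {}
    for layer in layers:
        result.update(layer)
    return result

VERB = {"cat": "verb", "past": "+ed"}            # general defaults
TRANS_VERB = inherit(VERB, {"subcat": "np_np"})  # adds a default frame

# An irregular entry overrides only the default it disagrees with:
take = inherit(TRANS_VERB, {"stem": "take", "past": "took"})

print(take)  # {'cat': 'verb', 'past': 'took', 'subcat': 'np_np', 'stem': 'take'}
```

The point of such designs is that irregular entries state only their exceptions; everything else follows from the hierarchy.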
This book develops a formal computational theory of writing systems. It offers specific proposals about the linguistic objects that are represented by orthographic elements; what levels of linguistic representation are involved and how they may differ across writing systems; and what formal constraints hold of the mapping relation between linguistic and orthographic elements. Based on the insights gained, Sproat then proposes a taxonomy of writing systems. The treatment of theoretical linguistic issues and their computational implementation is complemented with discussion of empirical psycholinguistic work on reading and its relevance for the computational model developed here. Throughout, the model is illustrated with a number of detailed case studies of writing systems around the world. This book will be of interest to students and researchers in a variety of fields, including theoretical and computational linguistics, the psycholinguistics of reading and writing, and speech technology.
A primary problem in the area of natural language processing has been that of semantic analysis. This book aims to look at the semantics of natural languages in context. It presents an approach to the computational processing of English text that combines current theories of knowledge representation and reasoning in Artificial Intelligence with the latest linguistic views of lexical semantics. This results in distinct advantages for relating the semantic analysis of a sentence to its context. A key feature is the clear separation of the lexical entries that represent the domain-specific linguistic information from the semantic interpreter that performs the analysis. The criteria for defining the lexical entries are firmly grounded in current linguistic theories, facilitating integration with existing parsers. This approach has been implemented and tested in Prolog on a domain for physics word problems and full details of the algorithms and code are presented. Semantic Processing for Finite Domains will appeal to postgraduates and researchers in computational linguistics, and to industrial groups specializing in natural language processing.
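The book's implementation is in Prolog; purely as an illustration of the separation it describes, here is a minimal Python sketch in which the domain-specific lexical entries live in a data table while the interpreter itself stays generic. The entries and predicate names are invented, not taken from the book.

```python
# Illustrative sketch: domain knowledge in the lexicon, logic in the interpreter.

LEXICON = {
    "ball":  {"type": "object", "pred": "ball"},
    "falls": {"type": "event", "pred": "fall", "roles": ["theme"]},
}

def interpret(subject, verb):
    """Build a predicate-argument structure from lexical entries alone;
    swapping in a new domain means changing LEXICON, not this function."""
    s, v = LEXICON[subject], LEXICON[verb]
    return (v["pred"], {v["roles"][0]: s["pred"]})

print(interpret("ball", "falls"))  # ('fall', {'theme': 'ball'})
```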
This book explains how to build Natural Language Generation (NLG) systems - computer software systems which use techniques from artificial intelligence and computational linguistics to automatically generate understandable texts in English or other human languages, either in isolation or as part of multimedia documents, Web pages, and speech output systems. Typically starting from some non-linguistic representation of information as input, NLG systems use knowledge about language and the application domain to automatically produce documents, reports, explanations, help messages, and other kinds of texts. The book covers the algorithms and representations needed to perform the core tasks of document planning, microplanning, and surface realization, using a case study to show how these components fit together. It also discusses engineering issues such as system architecture, requirements analysis, and the integration of text generation into multimedia and speech output systems.
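To illustrate how those three stages fit together, here is a toy Python sketch of document planning, microplanning, and surface realization; the weather data and templates are invented, not taken from the book's case study.

```python
# A toy three-stage NLG pipeline (illustrative assumptions throughout).

def document_plan(data):
    # Document planning: select and order the messages to convey.
    return [("temperature", data["temp"]), ("sky", data["sky"])]

def microplan(messages):
    # Microplanning: choose lexical items and phrase each message.
    specs = {"temperature": "the temperature is {} degrees",
             "sky": "the sky is {}"}
    return [specs[kind].format(value) for kind, value in messages]

def realize(sentences):
    # Surface realization: linearize into punctuated, capitalized text.
    return ". ".join(s.capitalize() for s in sentences) + "."

print(realize(microplan(document_plan({"temp": 21, "sky": "clear"}))))
# -> "The temperature is 21 degrees. The sky is clear."
```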
This is a collection of new papers by leading researchers on natural language parsing. In the past, the problem of how people parse the sentences they hear - determine the identity of the words in these sentences and group these words into larger units - has been addressed in very different ways by experimental psychologists, by theoretical linguists, and by researchers in artificial intelligence, with little apparent relationship among the solutions proposed by each group. However, because of important advances in all these disciplines, research on parsing in each of these fields now seems to have something significant to contribute to the others, as this volume demonstrates. The volume includes some papers applying the results of experimental psychological studies of parsing to linguistic theory, others which present computational models of parsing, and a mathematical linguistics paper on tree-adjoining grammars and parsing.
This book investigates the nature of generalization in language and examines how language is known by adults and acquired by children. It looks at how and why constructions are learned, the relation between their forms and functions, and how cross-linguistic and language-internal generalizations about them can be explained.
People often mean more than they say. Grammar on its own is typically insufficient for determining the full meaning of an utterance; the assumption that the discourse is coherent or 'makes sense' has an important role to play in determining meaning as well. Logics of Conversation presents a dynamic semantic framework called Segmented Discourse Representation Theory, or SDRT, where this interaction between discourse coherence and discourse interpretation is explored in a logically precise manner. Combining ideas from dynamic semantics, commonsense reasoning and speech act theory, SDRT uses its analysis of rhetorical relations to capture intuitively compelling implicatures. It provides a computable method for constructing these logical forms and is one of the most formally precise and linguistically grounded accounts of discourse interpretation currently available. The book will be of interest to researchers and students in linguistics and in philosophy of language.
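As a rough illustration of the kind of object SDRT manipulates, the following Python sketch builds a segmented discourse representation: labelled segments linked by rhetorical relations. The relation inventory and the attachment step are drastically simplified assumptions, not the theory's actual logic.

```python
# A minimal sketch of a segmented discourse representation (illustrative).

from dataclasses import dataclass, field

@dataclass
class SDRS:
    segments: dict = field(default_factory=dict)   # label -> clause
    relations: list = field(default_factory=list)  # (relation, arg1, arg2)

    def attach(self, label, clause, relation=None, to=None):
        self.segments[label] = clause
        if relation:
            self.relations.append((relation, to, label))

d = SDRS()
d.attach("pi1", "Max fell.")
# Coherence demands a relation; Explanation licenses the causal
# implicature that the pushing caused the falling.
d.attach("pi2", "John pushed him.", relation="Explanation", to="pi1")
print(d.relations)  # [('Explanation', 'pi1', 'pi2')]
```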
This book presents recent advances by leading researchers in computational modelling of language acquisition. The contributors, from departments of linguistics, cognitive science, psychology, and computer science, combine powerful computational techniques with real data and in doing so throw new light on the operations of the brain and the mind. They explore the extent to which linguistic structure is innate and/or available in a child's environment, and the degree to which language learning is inductive or deductive. They assess the explanatory power of different models. The book will appeal to all those working in language acquisition.
This volume is a collection of original contributions that address the problem of words and their meaning, which remains a difficult and controversial area within several disciplines: linguistics, philosophy, and artificial intelligence. Although all of these disciplines must tackle the issue, there is as yet no overarching methodology agreed upon by researchers. The aim of the volume is to provide answers, based on methods from empirical linguistics, that are relevant across all the disciplines, and to provide a bridge among researchers looking at word meaning from different angles.
Anaphora is a central topic in syntax, semantics, and pragmatics, and in the interfaces between them. It is the subject of advanced undergraduate and graduate courses in linguistics and computational linguistics. In this book, Yan Huang provides an extensive and accessible overview of the major contemporary issues surrounding anaphora and gives a critical survey of the many and diverse contemporary approaches to it. He also provides by far the fullest cross-linguistic account of anaphora yet published.
Understanding any communication depends on the listener or reader recognizing that some words refer to what has already been said or written (his, its, he, there, etc.). This mode of reference, anaphora, involves complicated cognitive and syntactic processes, which people usually perform unerringly, but which present formidable problems for the linguist and cognitive scientist trying to explain precisely how comprehension is achieved. Anaphora is thus a central research focus in syntactic and semantic theory, while understanding and modelling its operation in discourse are important targets in computational linguistics and cognitive science. Yan Huang provides an extensive and accessible overview of the major contemporary issues surrounding anaphora and gives a critical survey of the many and diverse contemporary approaches to it. He provides by far the fullest cross-linguistic account yet published: Dr Huang's survey and analysis are based on a rich collection of data drawn from around 450 of the world's languages.
One of the most hotly debated phenomena in natural language is that of leftward argument scrambling. This book investigates the properties of Hindi-Urdu scrambling to show that it must be analyzed as uniformly a focality-driven XP-adjunction operation. It proposes a novel theory of binding and coreference that not only derives the coreference effects in scrambled constructions, but has important consequences for the proper formulation of binding, crossover, reconstruction, and representational economy in the minimalist program. The book will be of interest not only to specialists in Hindi-Urdu syntax and/or scrambling, but to all students of generative syntax.
Originally published in 1997, this book is concerned with human language technology, which provides computers with the capability to handle spoken and written language. One major goal is to improve communication between humans and machines: if people can use their own language to access information, work with software applications and control machinery, the greatest obstacle to the acceptance of new information technology is overcome. Another important goal is to facilitate communication among people. Machines can help to translate texts or spoken input from one human language to another, and programs that assist people in writing by checking orthography, grammar and style are constantly improving. This book was sponsored by the Directorate General XIII of the European Union and the Information Science and Engineering Directorate of the National Science Foundation, USA.
This 1992 collection takes the exciting step of examining natural language phenomena from the perspective of both computational linguistics and formal semantics. Computational linguistics has until now been primarily concerned with the construction of computational models for handling the complexities of linguistic form, but has not tackled the questions of representing or computing meaning. Formal semantics, on the other hand, has attempted to account for the relations between forms and meanings, without necessarily attending to computational concerns. The book introduces the reader to the two disciplines and considers the prospects for the more unified and comprehensive computational theory of language which might obtain from their amalgamation. Of great interest to those working in the fields of computation, logic, semantics, artificial intelligence and linguistics generally.
Semantic Interpretation and the Resolution of Ambiguity presents an important advance in computer understanding of natural language. While parsing techniques have been greatly improved in recent years, the approach to semantics has generally been ad hoc and has had little theoretical basis. Graeme Hirst offers a new, theoretically motivated foundation for conceptual analysis by computer, and shows how this framework facilitates the resolution of lexical and syntactic ambiguities. His approach is interdisciplinary, drawing on research in computational linguistics, artificial intelligence, Montague semantics, and cognitive psychology.
This book deals with a major problem in the study of language: the problem of reference. The ease with which we refer to things in conversation is deceptive. Upon closer scrutiny, it turns out that we hardly ever tell each other explicitly what object we mean, although we expect our interlocutor to discern it. Amichai Kronfeld provides an answer to two questions associated with this: how do we successfully refer, and how can a computer be programmed to achieve this? Beginning with the major theories of reference, Dr Kronfeld provides a consistent philosophical view which is a synthesis of Frege's and Russell's semantic insights with Grice's and Searle's pragmatic theories. This leads to a set of guiding principles, which are then applied to a computational model of referring. The discussion is made accessible to readers from a number of backgrounds: in particular, students and researchers in the areas of computational linguistics, artificial intelligence and the philosophy of language will want to read this book.
This book provides a computational re-evaluation of the genealogical relations between the early Germanic families and of their diversification from their most recent common ancestor, Proto-Germanic. It also proposes a novel computational approach to the problem of linguistic diversification more broadly, using agent-based simulation of speech communities over time. This new method is presented alongside more traditional phylogenetic inference, and the respective results are compared and evaluated. Frederik Hartmann demonstrates that the traditional and novel methods each capture different aspects of this highly complex real-world process; crucially, the new computational approach proposed here offers a new way of investigating the wave-like properties of language relatedness that were previously less accessible. As well as validating the findings of earlier research, the results of this study also generate new insights and shed light on much-debated issues in the field. The conclusion is that the break-up of Germanic should be understood as a gradual disintegration process in which tree-like branching effects are rare.
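The following toy Python sketch illustrates the agent-based idea in the simplest possible terms: communities hold binary feature values, innovate at random, and borrow from neighbours, so that similarity acquires wave-like as well as tree-like structure. All parameters are invented for illustration and bear no relation to Hartmann's actual model.

```python
# A toy agent-based simulation of linguistic diversification (illustrative).

import random

random.seed(0)
communities = {name: [0] * 10 for name in ["A", "B", "C"]}  # 10 binary features
neighbours = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}

for step in range(200):
    speaker = random.choice(list(communities))
    feature = random.randrange(10)
    if random.random() < 0.05:               # internal innovation
        communities[speaker][feature] ^= 1
    elif random.random() < 0.3:              # contact-driven spread (wave-like)
        source = random.choice(neighbours[speaker])
        communities[speaker][feature] = communities[source][feature]

for name, feats in communities.items():
    print(name, feats)
```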
The topic of this book is the theoretical foundations of Lexical Semantic Language Theory (LSLT) and its implementation in GETARUN, a system for text analysis and understanding developed at the University of Venice's Laboratory of Computational Linguistics, Department of Language Sciences. LSLT encompasses a psycholinguistic theory of the way the language faculty works; a grammatical theory of the way in which sentences are analysed and generated (here, Lexical-Functional Grammar); a semantic theory of the way in which meaning is encoded and expressed in utterances (here, Situation Semantics); and a parsing theory of the way in which the components of the theory interact in a common architecture to produce the language representation that is eventually spoken aloud or interpreted by the phonetic/acoustic language interface. LSLT is then put to use to show how discourse relations are mapped automatically from text using the tools of the four sub-theories, with particular focus on causal relations, showing how the various sub-theories contribute to addressing different types of causality.
This book is an advanced introduction to semantics that presents this crucial component of human language through the lens of the 'Meaning-Text' theory - an approach that treats linguistic knowledge as a huge inventory of correspondences between thought and speech. Formally, semantics is viewed as an organized set of rules that connect a representation of meaning (Semantic Representation) to a representation of the sentence (Deep-Syntactic Representation). The approach is particularly interesting for computer assisted language learning, natural language processing and computational lexicography, as our linguistic rules easily lend themselves to formalization and computer applications. The model combines abstract theoretical constructions with numerous linguistic descriptions, as well as multiple practice exercises that provide a solid hands-on approach to learning how to describe natural language semantics.
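As a rough illustration of the rule-based correspondence the blurb describes, here is a minimal Python sketch mapping one Semantic Representation fragment onto a Deep-Syntactic Representation. The notation is an invented simplification, though Magn is one of the theory's genuine lexical functions.

```python
# A minimal Meaning-Text-style correspondence rule (illustrative notation).

SEM_R = {"pred": "intense", "arg": {"pred": "rain"}}

def semr_to_dsyntr(semr):
    """Realize 'intense(rain)' as a verb plus a magnitude modifier."""
    if semr["pred"] == "intense" and semr["arg"]["pred"] == "rain":
        # The lexical function Magn picks the intensifier a given
        # lexeme selects (for 'rain' as a verb: 'heavily', 'hard').
        return {"verb": "rain", "modifier": "Magn"}
    raise ValueError("no rule matches")

print(semr_to_dsyntr(SEM_R))  # {'verb': 'rain', 'modifier': 'Magn'}
```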
In this brief, the authors discuss recently explored spectral (sub-segmental and pitch-synchronous) and prosodic (global and local features at word and syllable levels in different parts of the utterance) features for discerning emotions in a robust manner. The authors also delve into the complementary evidence obtained from excitation source, vocal tract system and prosodic features for the purpose of enhancing emotion recognition performance. Features based on speaking-rate characteristics are explored with the help of multi-stage and hybrid models for further improving emotion recognition performance. The proposed spectral and prosodic features are evaluated on a real-life emotional speech corpus.
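As a rough illustration of the frame-level feature extraction such work builds on, here is a minimal NumPy sketch computing short-time energy and zero-crossing rate. The frame sizes and the synthetic signal are placeholder assumptions, not the authors' settings or features.

```python
# A minimal sketch of frame-level acoustic feature extraction (illustrative).

import numpy as np

def frame_features(signal, frame_len=400, hop=160):
    """Return per-frame (energy, zero-crossing rate) pairs."""
    feats = []
    for start in range(0, len(signal) - frame_len, hop):
        frame = signal[start:start + frame_len]
        energy = float(np.mean(frame ** 2))
        zcr = float(np.mean(np.abs(np.diff(np.sign(frame)))) / 2)
        feats.append((energy, zcr))
    return np.array(feats)

rng = np.random.default_rng(0)
fake_speech = rng.standard_normal(16000)  # one second at 16 kHz
print(frame_features(fake_speech).shape)  # (number_of_frames, 2)
```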
In everyday communication, Europe's citizens, business partners and politicians are inevitably confronted with language barriers. Language technology has the potential to overcome these barriers and to provide innovative interfaces to technologies and knowledge. This document presents a Strategic Research Agenda for Multilingual Europe 2020. The agenda was prepared by META-NET, a European Network of Excellence. META-NET consists of 60 research centres in 34 countries, who cooperate with stakeholders from economy, government agencies, research organisations, non-governmental organisations, language communities and European universities. META-NET's vision is high-quality language technology for all European languages.

"The research carried out in the area of language technology is of utmost importance for the consolidation of Portuguese as a language of global communication in the information society." - Dr. Pedro Passos Coelho (Prime Minister of Portugal)

"It is imperative that language technologies for Slovene are developed systematically if we want Slovene to flourish also in the future digital world." - Dr. Danilo Türk (President of the Republic of Slovenia)

"For such small languages like Latvian keeping up with the ever increasing pace of time and technological development is crucial. The only way to ensure future existence of our language is to provide its users with equal opportunities as the users of larger languages enjoy. Therefore being on the forefront of modern technologies is our opportunity." - Valdis Dombrovskis (Prime Minister of Latvia)

"Europe's inherent multilingualism and our scientific expertise are the perfect prerequisites for significantly advancing the challenge that language technology poses. META-NET opens up new opportunities for the development of ubiquitous multilingual technologies." - Prof. Dr. Annette Schavan (German Minister of Education and Research)
The development and spread of machine translation systems is driving massive transformation processes in the language services industry. The 'machinization' of translation is not only upending the translation market, but also confronts us with a fundamental question: what is 'translation' when a machine translates human language? This work addresses the problem from the perspectives of translation studies and the sociology of technology. Its focus is on concepts of translation in computational linguistics, which result from an interplay between social construction and technical conditions. Computational linguists' notion of translation is oriented towards the mechanics of the machine, creating a tension with the paradigms of human translation.
This open-access book uses a large-scale online experiment to investigate how displaying citation or download counts affects relevance judgements in academic search systems. When searching for information, people use a variety of criteria to assess the relevance of search results. The book presents the first systematic overview of the influences at work in the process of judging the relevance of search results in academic search systems. It also introduces a sophisticated and complex methodological framework for the experimental study of relevance criteria, suitable for further research on relevance criteria in information science.