The editors of the Applied Logic Series are happy to present to the reader the fifth volume in the series, a collection of papers on Logic, Language and Computation. One very striking feature of the application of logic to language and to computation is that it requires the combination, the integration and the use of many diverse systems and methodologies - all in the same single application. The papers in this volume will give the reader a glimpse into the problems of this active frontier of logic. The Editors
Contents: Preface; 1. S. Akama, Recent Issues in Logic, Language and Computation; 2. M. J. Cresswell, Restricted Quantification; 3. B. H. Slater, The Epsilon Calculus' Problematic; 4. K. von Heusinger, Definite Descriptions and Choice Functions; 5. N. Asher, Spatio-Temporal Structure in Text; 6. Y. Nakayama, DRT and Many-Valued Logics; 7. S. Akama, On Constructive Modality; 8. H. Wansing, Displaying as Temporalizing: Sequent Systems for Subintuitionistic Logics; 9. L. Farinas del Cerro and V. Lugardon, Quantification and Dependence Logics; 10. R. Sylvan, Relevant Conditionals, and Relevant Application Thereof; Index.
Preface: This is a collection of papers by distinguished researchers on Logic, Linguistics, Philosophy and Computer Science. The aim of this book is to present a broad picture of recent research in related areas. In particular, the contributions focus on natural language semantics and non-classical logics from different viewpoints.
Trajectories through Knowledge Space: A Dynamic Framework for Machine Comprehension provides an overview of many of the main ideas of connectionism (neural networks) and probabilistic natural language processing. Several areas of common overlap between these fields are described in which each community can benefit from the ideas and techniques of the other. The author's perspective on comprehension pulls together the most significant research of the last ten years and illustrates how we can move forward to the next level of intelligent text processing systems. A central focus of the book is the development of a framework for comprehension connecting research themes from cognitive psychology, cognitive science, corpus linguistics and artificial intelligence. The book proposes a new architecture for semantic memory, providing a framework for addressing the problem of how to represent background knowledge in a machine. This architectural framework supports a computational model of comprehension. Trajectories through Knowledge Space: A Dynamic Framework for Machine Comprehension is an excellent reference for researchers and professionals, and may be used as an advanced text for courses on the topic.
Speech-to-Speech Translation: A Massively Parallel Memory-Based Approach describes one of the world's first successful speech-to-speech machine translation systems. This system accepts speaker-independent continuous speech, and produces translations as audio output. Subsequent versions of this machine translation system have been implemented on several massively parallel computers, and these systems have attained translation performance in the milliseconds range. The success of this project triggered several massively parallel projects, as well as other massively parallel artificial intelligence projects throughout the world. Dr. Hiroaki Kitano received the distinguished 'Computers and Thought Award' from the International Joint Conferences on Artificial Intelligence in 1993 for his work in this area, and that work is reported in this book.
1. Structuralist Versus Analogical Descriptions. One important purpose of this book is to compare two completely different approaches to describing language. The first of these approaches, commonly called structuralist, is the traditional method for describing behavior. Its methods are found in many diverse fields - from biological taxonomy to literary criticism. A structuralist description can be broadly characterized as a system of classification. The fundamental question that a structuralist description attempts to answer is how a general contextual space should be partitioned. For each context in the partition, a rule is defined. The rule either specifies the behavior of that context or (as in a taxonomy) assigns a name to that context. Structuralists have implicitly assumed that descriptions of behavior should not only be correct, but should also minimize the number of rules and permit only the simplest possible contextual specifications. It turns out that these intuitive notions can actually be derived from more fundamental statements about the uncertainty of rule systems. Traditionally, linguistic analyses have been based on the idea that a language is a system of rules. Saussure, of course, is well known as an early proponent of linguistic structuralism, as exemplified by his characterization of language as "a self-contained whole and principle of classification" (Saussure 1966:9). Yet linguistic structuralism did not originate with Saussure - nor did it end with "American structuralism."
Connection science is a new information-processing paradigm which attempts to imitate the architecture and processes of the brain, and brings together researchers from disciplines as diverse as computer science, physics, psychology, philosophy, linguistics, biology, engineering, neuroscience and AI. Work in Connectionist Natural Language Processing (CNLP) is now expanding rapidly, yet much of the work is still only available in journals, some of them quite obscure. To make this research more accessible, this book brings together an important and comprehensive set of articles from the journal Connection Science which represent the state of the art in connectionist natural language processing, from speech recognition to discourse comprehension. While it is quintessentially connectionist, it also deals with hybrid systems, and will be of interest to theoreticians as well as computer modellers. Range of topics covered: Connectionism and Cognitive Linguistics; Motion, Chomsky's Government-binding Theory; Syntactic Transformations on Distributed Representations; Syntactic Neural Networks; A Hybrid Symbolic/Connectionist Model for Understanding of Nouns; Connectionism and Determinism in a Syntactic Parser; Context-Free Grammar Recognition; Script Recognition with Hierarchical Feature Maps; Attention Mechanisms in Language; Script-Based Story Processing; A Connectionist Account of Similarity in Vowel Harmony; Learning Distributed Representations; Connectionist Language Users; Representation and Recognition of Temporal Patterns; A Hybrid Model of Script Generation; Networks that Learn about Phonological Features; Pronunciation in Text-to-Speech Systems.
The Generalized LR parsing algorithm (some call it "Tomita's algorithm") was originally developed in 1985 as a part of my Ph.D. thesis at Carnegie Mellon University. When I was a graduate student at CMU, I tried to build a couple of natural language systems based on existing parsing methods. Their parsing speed, however, always bothered me. I sometimes wondered whether it was ever possible to build a natural language parser that could parse reasonably long sentences in a reasonable time without help from large mainframe machines. At the same time, I was always amazed by the speed of programming language compilers, because they can parse very long sentences (i.e., programs) very quickly even on workstations. There are two reasons. First, programming languages are considerably simpler than natural languages. And secondly, they have very efficient parsing methods, most notably LR. The LR parsing algorithm first precompiles a grammar into an LR parsing table, and at actual parsing time it performs shift-reduce parsing guided deterministically by the parsing table. So the key to LR's efficiency is the grammar precompilation, something that had never been tried for natural languages in 1985. Of course, there was a good reason why LR had never been applied to natural languages: it was simply impossible. If your context-free grammar is sufficiently more complex than those of programming languages, its LR parsing table will have multiple actions, and deterministic parsing will no longer be possible.
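To make the mechanism concrete, here is a minimal sketch (not taken from the book) of the table-driven shift-reduce loop that standard LR parsing performs once a grammar has been precompiled. The toy grammar E -> E + n | n, its hand-written ACTION/GOTO tables and the state numbering are illustrative assumptions.

```python
# Minimal table-driven LR driver for the toy grammar
#   0: S -> E     1: E -> E + n     2: E -> n
# The tables are written by hand here for illustration; a real system
# precompiles them from the grammar.
PRODS = [('S', 1), ('E', 3), ('E', 1)]          # (lhs, length of rhs)

ACTION = {                                      # ACTION[state][lookahead]
    0: {'n': ('shift', 2)},
    1: {'+': ('shift', 3), '$': ('accept', None)},
    2: {'+': ('reduce', 2), '$': ('reduce', 2)},
    3: {'n': ('shift', 4)},
    4: {'+': ('reduce', 1), '$': ('reduce', 1)},
}
GOTO = {0: {'E': 1}}                            # GOTO[state][nonterminal]

def lr_parse(tokens):
    """Deterministic shift-reduce parsing guided by the precompiled table."""
    tokens = list(tokens) + ['$']
    stack, pos = [0], 0                         # stack of LR states
    while True:
        action = ACTION[stack[-1]].get(tokens[pos])
        if action is None:
            raise SyntaxError(f"unexpected token {tokens[pos]!r}")
        kind, arg = action
        if kind == 'shift':                     # consume a token, push a state
            stack.append(arg)
            pos += 1
        elif kind == 'reduce':                  # pop |rhs| states, follow GOTO
            lhs, rhs_len = PRODS[arg]
            del stack[len(stack) - rhs_len:]
            stack.append(GOTO[stack[-1]][lhs])
        else:                                   # accept
            return True

print(lr_parse("n + n + n".split()))            # True
```

For a natural-language grammar, some ACTION cells end up holding more than one entry (for example both a shift and a reduce); Generalized LR deals with this by pursuing the alternatives in parallel over a graph-structured stack rather than giving up on table-driven parsing.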
Aiming to exemplify the methodology of learner corpus profiling, this book describes salient features of Romanian Learner English. As a starting point, the volume offers a comprehensive presentation of Romanian-English contrastive studies. Another innovative aspect of the book is its use of the first Romanian Corpus of Learner English, whose compilation is the subject of a methodological discussion. In one of the main chapters, the book introduces the methodology of learner corpus profiling and compares it with existing approaches. The profiling approach is exemplified by corpus-based quantitative and qualitative investigations of Romanian Learner English. Part of the investigation is dedicated to the lexico-grammatical profiles of articles, prepositions and genitives. The frequency-based collocation analyses are integrated with error analyses and extended into error pattern samples. Furthermore, contrasting typical Romanian Learner English constructions with examples from the German and Italian learner corpora opens the path to new contrastive interlanguage analyses.
This book grew out of the Fourth Conference on Computers and the Writing Process, held at the University of Sussex in March 1991. The conference brought together a wide variety of people interested in most aspects of computers and the writing process, including computers and writing education, computer-supported fiction, computers and technical writing, evaluation of computer-based writing, and hypertext. Fifteen papers were selected from the twenty-five delivered at the conference. The authors were asked to develop them into articles, incorporating any insights they had gained from their conference presentations. This book offers a survey of the wide area of Computers and Writing, and describes current work in the design and use of computer-based tools for writing. M.S., University of Sussex, October 1991. Note from the Publisher: This collection of articles is being published simultaneously as a special issue, Volume 21(1-3), of Instructional Science - An International Journal of Learning and Cognition. Introduction by Mike Sharples, School of Cognitive and Computing Sciences, University of Sussex, Falmer, Brighton BN1 9QH, United Kingdom.
The information revolution is upon us. Whereas the industrial revolution heralded the systematic augmentation of human physical limitations by harnessing external energy sources, the information revolution strives to augment human memory and mental processing limitations by harnessing external computational resources. Computers can accumulate, transmit and output much more information, and in a more timely fashion, than more conventional printed or spoken media. Of greater interest, however, is the computer's ability to process, classify and retrieve information selectively in response to the needs of each human user. One cannot drink from the fire hydrant of information without being immediately flooded with irrelevant text. Recent technological advances such as optical character readers only exacerbate the problem by increasing the volume of electronic text. Just as steam and internal combustion engines brought powerful energy sources under control to yield useful work in the industrial revolution, so must we build computational engines that control and apply the vast information sources so that they may yield useful knowledge. Information science is the study of systematic means to control, classify, process and retrieve vast amounts of information in electronic form. In particular, several methodologies have been developed to classify texts manually by armies of human indexers, as illustrated quite clearly at the National Library of Medicine, and many computational techniques have been developed to search textual databases automatically, such as full-text keyword searches.
Methods for studying writing processes have developed significantly over the last two decades. The rapid development of software tools that support the collection, display and analysis of writing process data, together with new input from various neighboring disciplines, contributes to increasingly detailed knowledge about the complex cognitive processes of writing. This volume, which focuses on research methods, mixed-methods designs, conceptual considerations of writing process research, interdisciplinary research influences and the application of research methods in educational settings, provides an insight into the current status of the methodological development of writing process research in Europe.
Derivation or Representation? Hubert Haider & Klaus Netter. 1 The Issue. Derivation and Representation - these keywords refer both to a conceptual and to an empirical issue. Transformational grammar was at its outset (Chomsky 1957, 1975) a derivational theory which characterized a well-formed sentence by its derivation, i.e. a set of syntactic representations defined by a set of rules that map one representation into another. The set of mapping rules, the transformations, eventually became more and more abstract and were trivialized into a single one, namely "move α," a general movement rule. The constraints on movement were singled out in systems of principles that apply to the resulting representations, i.e. the configurations containing a moved element and its extraction site, the trace. The introduction of trace theory (cf. Chomsky 1977, ch. 3 17, ch. 4) in principle opened up the possibility of completely abandoning movement and generating the possible outputs of movement directly, i.e. as structures that contain gaps representing the extraction sites.
Parsing technology traditionally consists of two branches, which correspond to the two main application areas of context-free grammars and their generalizations. Efficient deterministic parsing algorithms have been developed for parsing programming languages, and quite different algorithms are employed for analyzing natural language. The Functional Treatment of Parsing provides a functional framework within which the different traditional techniques are restated and unified. The resulting theory provides new recursive implementations of parsers for context-free grammars. The new implementations, called recursive ascent parsers, avoid explicit manipulation of parse stacks and parse matrices, and are in many ways superior to conventional implementations. They are applicable to grammars for programming languages as well as natural languages. The book has been written primarily for students and practitioners of parsing technology. With its emphasis on modern functional methods, however, the book will also be of benefit to scientists interested in functional programming. The Functional Treatment of Parsing is an excellent reference and can be used as a text for a course on the subject.
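As a rough illustration of the recursive ascent idea, the sketch below (my own simplification, not code from the book) encodes each LR state of a toy left-recursive grammar E -> E + n | n as a Python function. The language's call stack stands in for the explicit parse stack, and a reduction returns a count of how many enclosing state activations still have to be unwound before the goto on the reduced nonterminal is taken; the grammar, state numbering and pop-count convention are assumptions made for the example.

```python
def ra_parse(tokens):
    """Recursive ascent recognizer for the toy grammar E -> E + n | n.

    Each LR state is a local function; instead of an explicit parse stack,
    reductions return (lhs, pops), where pops says how many more enclosing
    state activations must be unwound before the GOTO on lhs is taken.
    """
    tokens = list(tokens) + ['$']
    pos = 0

    def peek():
        return tokens[pos]

    def state2():                       # E -> n .          (reduce, |rhs| = 1)
        return 'E', 0

    def state4():                       # E -> E + n .      (reduce, |rhs| = 3)
        return 'E', 2

    def state3():                       # E -> E + . n
        nonlocal pos
        if peek() != 'n':
            raise SyntaxError("expected 'n' after '+'")
        pos += 1
        lhs, pops = state4()            # shift n, then unwind one level
        return lhs, pops - 1

    def state1():                       # S -> E . $   and   E -> E . + n
        nonlocal pos
        if peek() == '$':
            return 'ACCEPT', 0
        if peek() == '+':
            pos += 1
            lhs, pops = state3()        # shift +, then unwind one level
            return lhs, pops - 1
        raise SyntaxError("expected '+' or end of input")

    def state0():                       # start state
        nonlocal pos
        if peek() != 'n':
            raise SyntaxError("expected 'n'")
        pos += 1
        lhs, pops = state2()            # shift n
        while True:                     # left recursion becomes iteration here
            if pops > 0:
                return lhs, pops - 1
            if lhs == 'ACCEPT':
                return lhs, 0
            lhs, pops = state1()        # GOTO on E from state 0

    return state0()[0] == 'ACCEPT'

print(ra_parse("n + n".split()))        # True
```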
The volume brings together a selection of invited articles and papers presented at the 4th International CILC Conference held in Jaen, Spain, in March 2012. The chapters describe English using a range of corpora and other resources. There are two parts, one dealing with diachronic research and the other with synchronic research. Both parts investigate several aspects of the English language from various perspectives and illustrate the use of corpora in current research. The structure of the volume allows the same linguistic aspect to be discussed from both the diachronic and the synchronic point of view. The chapters also provide useful examples of the use of corpora, as well as of other resources, specifically dictionaries. They investigate a broad array of issues, mainly using corpora of English as a native language, with a focus on corpus tools and corpus description.
This white paper is part of a series that promotes knowledge about language technology and its potential. It addresses educators, journalists, politicians, language communities and others. The availability and use of language technology in Europe varies between languages. Consequently, the actions that are required to further support research and development of language technologies also differ for each language. The required actions depend on many factors, such as the complexity of a given language and the size of its community. META-NET, a Network of Excellence funded by the European Commission, has conducted an analysis of current language resources and technologies. This analysis focused on the 23 official European languages as well as other important national and regional languages in Europe. The results of this analysis suggest that there are many significant research gaps for each language. A more detailed expert analysis and assessment of the current situation will help maximise the impact of additional research and minimise any risks. META-NET consists of 54 research centres from 33 countries that are working with stakeholders from commercial businesses, government agencies, industry, research organisations, software companies, technology providers and European universities. Together, they are creating a common technology vision while developing a strategic research agenda that shows how language technology applications can address any research gaps by 2020.
"Cognitive and Computational Strategies for Word Sense
Disambiguation" examines cognitive strategies by humans and
computational strategies by machines, for WSD in parallel.
"Predicting Prosody from Text for Text-to-Speech Synthesis"covers thespecific aspects of prosody, mainly focusing on how to predict the prosodic information from linguistic text, and then how to exploit the predicted prosodic knowledge for various speech applications. Author K. Sreenivasa Rao discusses proposed methods along with state-of-the-art techniques for the acquisition and incorporation of prosodic knowledge for developing speech systems. Positional, contextual and phonological features are proposed for representing the linguistic and production constraints of the sound units present in the text. This book is intended for graduate students and researchers working in the area of speech processing."
Originally published in 1997, this book is concerned with human language technology. This technology provides computers with the capability to handle spoken and written language. One major goal is to improve communication between humans and machines. If people can use their own language to access information, work with software applications and control machinery, the greatest obstacle to the acceptance of new information technology is overcome. Another important goal is to facilitate communication among people. Machines can help to translate texts or spoken input from one human language to another. Programs that assist people in writing by checking orthography, grammar and style are constantly improving. This book was sponsored by the Directorate General XIII of the European Union and the Information Science and Engineering Directorate of the National Science Foundation, USA.
Edited in collaboration with FoLLI, the Association of Logic, Language and Information, this book constitutes the refereed proceedings of the 7th International Conference on Logical Aspects of Computational Linguistics, LACL 2012, held in Nantes, France, in July 2012. The 15 revised full papers presented together with 2 invited talks were carefully reviewed and selected from 24 submissions. The papers are organized in topical sections on logical foundation of syntactic formalisms, logics for semantics of lexical items, sentences, discourse and dialog, applications of these models to natural language processing, type theoretic, proof theoretic, model theoretic and other logically based formal methods for describing natural language syntax, semantics and pragmatics, as well as the implementation of natural language processing software relying on such methods.
"Phonetic Search Methods for Large Databases" focuses on Keyword Spotting (KWS) within large speech databases. The brief will begin by outlining the challenges associated with Keyword Spotting within large speech databases using dynamic keyword vocabularies. It will then continue by highlighting the various market segments in need of KWS solutions, as well as, the specific requirements of each market segment. The work also includes a detailed description of the complexity of the task and the different methods that are used, including the advantages and disadvantages of each method and an in-depth comparison. The main focus will be on the Phonetic Search method and its efficient implementation. This will include a literature review of the various methods used for the efficient implementation of Phonetic Search Keyword Spotting, with an emphasis on the authors' own research which entails a comparative analysis of the Phonetic Search method which includes algorithmic details. This brief is useful for researchers and developers in academia and industry from the fields of speech processing and speech recognition, specifically Keyword Spotting.
The contributions to this volume are drawn from the interdisciplinary research carried out within the Sonderforschungsbereich (SFB 378), a special long-term funding scheme of the German National Science Foundation (DFG). Sonderforschungsbereich 378 was situated at Saarland University, with colleagues from artificial intelligence, computational linguistics, computer science, philosophy, psychology - and in its final phases - cognitive neuroscience and psycholinguistics. The funding covered a period of 12 years, which was split into four phases of 3 years each, ending in December of 2007. Every sub-period culminated in an intensive reviewing process, comprising written reports as well as on-site presentations and demonstrations to the external reviewers. We are most grateful to these reviewers for their extensive support and critical feedback; they contributed their time and labor freely to the DFG, the independent and self-organized institution of German scientists. The final evaluation of the DFG reviewers judged the overall performance and the actual work with the highest possible mark, i.e. "excellent".
In opposition to the classical set theory of natural language, Novak's highly original monograph offers a theory based on alternative and fuzzy sets. This new approach is firmly grounded in semantics and pragmatics, and accounts for the vagueness inherent in natural language, filling a large gap in our current knowledge. The theory will foster fruitful debate among researchers in linguistics and artificial intelligence.
This two-volume set, consisting of LNCS 7181 and LNCS 7182, constitutes the thoroughly refereed proceedings of the 13th International Conference on Computational Linguistics and Intelligent Text Processing, held in New Delhi, India, in March 2012. The total of 92 full papers were carefully reviewed and selected for inclusion in the proceedings. The contents have been ordered according to the following topical sections: NLP system architecture; lexical resources; morphology and syntax; word sense disambiguation and named entity recognition; semantics and discourse; sentiment analysis, opinion mining, and emotions; natural language generation; machine translation and multilingualism; text categorization and clustering; information extraction and text mining; information retrieval and question answering; document summarization; and applications.