This book constitutes the refereed proceedings of the 17th and 18th International Conferences on Formal Grammar, held in 2012 and 2013 and collocated with the European Summer School in Logic, Language and Information in August 2012 and August 2013. The 18 revised full papers were carefully reviewed and selected from a total of 27 submissions. The papers focus on the following topics: formal and computational phonology, morphology, syntax, semantics and pragmatics; model-theoretic and proof-theoretic methods in linguistics; logical aspects of linguistic structure; constraint-based and resource-sensitive approaches to grammar; learnability of formal grammar; integration of stochastic and symbolic models of grammar; foundational, methodological and architectural issues in grammar and linguistics; and mathematical foundations of statistical approaches to linguistic analysis.
Parsing technology traditionally consists of two branches, which correspond to the two main application areas of context-free grammars and their generalizations. Efficient deterministic parsing algorithms have been developed for parsing programming languages, and quite different algorithms are employed for analyzing natural language. The Functional Treatment of Parsing provides a functional framework within which the different traditional techniques are restated and unified. The resulting theory provides new recursive implementations of parsers for context-free grammars. The new implementations, called recursive ascent parsers, avoid explicit manipulation of parse stacks and parse matrices, and are in many ways superior to conventional implementations. They are applicable to grammars for programming languages as well as natural languages. The book has been written primarily for students and practitioners of parsing technology. With its emphasis on modern functional methods, however, the book will also be of benefit to scientists interested in functional programming. The Functional Treatment of Parsing is an excellent reference and can be used as a text for a course on the subject.
The Generalized LR parsing algorithm (some call it "Tomita's algorithm") was originally developed in 1985 as a part of my Ph.D. thesis at Carnegie Mellon University. When I was a graduate student at CMU, I tried to build a couple of natural language systems based on existing parsing methods. Their parsing speed, however, always bothered me. I sometimes wondered whether it was ever possible to build a natural language parser that could parse reasonably long sentences in a reasonable time without help from large mainframe machines. At the same time, I was always amazed by the speed of programming language compilers, because they can parse very long sentences (i.e., programs) very quickly even on workstations. There are two reasons. First, programming languages are considerably simpler than natural languages. Secondly, they have very efficient parsing methods, most notably LR. The LR parsing algorithm first precompiles a grammar into an LR parsing table, and at actual parsing time it performs shift-reduce parsing guided deterministically by the parsing table. So the key to LR efficiency is the grammar precompilation: something that had never been tried for natural languages in 1985. Of course, there was a good reason why LR had never been applied to natural languages; it was simply impossible. If a context-free grammar is sufficiently more complex than those of programming languages, its LR parsing table will have multiple actions in some entries, and deterministic parsing will no longer be possible.
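The conflict Tomita describes can be illustrated with a toy nondeterministic shift-reduce recognizer. This is a hypothetical sketch, not Tomita's GLR algorithm (which shares stack prefixes in a graph-structured stack to stay efficient): for the ambiguous grammar S -> S S | 'a', both a shift and a reduce can apply at the same point, so the recognizer simply explores every applicable action.

```python
# Toy nondeterministic shift-reduce recognizer for the ambiguous grammar
#   S -> S S | 'a'
# Where a deterministic LR parser would need a single action per table
# entry, this sketch keeps an agenda of (stack, position) configurations
# and tries every applicable shift and reduce.
GRAMMAR = [("S", ("S", "S")), ("S", ("a",))]

def sr_recognizes(tokens):
    agenda = [((), 0)]                 # configurations still to explore
    seen = set()                       # avoid revisiting configurations
    while agenda:
        stack, pos = agenda.pop()
        if (stack, pos) in seen:
            continue
        seen.add((stack, pos))
        if stack == ("S",) and pos == len(tokens):
            return True                # all input consumed, reduced to S
        if pos < len(tokens):          # shift the next token
            agenda.append((stack + (tokens[pos],), pos + 1))
        for lhs, rhs in GRAMMAR:       # reduce if the stack top matches a RHS
            if stack[-len(rhs):] == rhs:
                agenda.append((stack[:-len(rhs)] + (lhs,), pos))
    return False
```

Exploring configurations independently like this duplicates work exponentially on shared subparses; merging those shared stack prefixes is precisely the efficiency insight behind the graph-structured stack of GLR.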
In opposition to the classical set theory of natural language, Novak's highly original monograph offers a theory based on alternative and fuzzy sets. This new approach is firmly grounded in semantics and pragmatics, and accounts for the vagueness inherent in natural language, filling a large gap in our current knowledge. The theory will foster fruitful debate among researchers in linguistics and artificial intelligence.
"Cognitive and Computational Strategies for Word Sense Disambiguation" examines, in parallel, the cognitive strategies used by humans and the computational strategies used by machines for word sense disambiguation (WSD).
This white paper is part of a series that promotes knowledge about language technology and its potential. It addresses educators, journalists, politicians, language communities and others. The availability and use of language technology in Europe varies between languages. Consequently, the actions that are required to further support research and development of language technologies also differ for each language. The required actions depend on many factors, such as the complexity of a given language and the size of its community. META-NET, a Network of Excellence funded by the European Commission, has conducted an analysis of current language resources and technologies. This analysis focused on the 23 official European languages as well as other important national and regional languages in Europe. The results of this analysis suggest that there are many significant research gaps for each language. A more detailed expert analysis and assessment of the current situation will help maximize the impact of additional research and minimize any risks. META-NET consists of 54 research centres from 33 countries that are working with stakeholders from commercial businesses, government agencies, industry, research organisations, software companies, technology providers and European universities. Together, they are creating a common technology vision while developing a strategic research agenda that shows how language technology applications can address any research gaps by 2020.
This book introduces an approach that can be used to ground a variety of intelligent systems, ranging from simple fact-based systems to highly sophisticated reasoning systems. As the popularity of AI-related fields has grown over the last decade, the number of people interested in building intelligent systems has increased exponentially. Some of these people are highly skilled and experienced in the use of AI techniques, but many lack that kind of expertise. Much of the literature that might otherwise interest those in the latter category is not appreciated by them because the material is too technical, often needlessly so. The so-called logicists see logic as a primary tool and favor a formal approach to AI, whereas others are more content to rely on informal methods. This polarity has resulted in different styles of writing and reporting, and people entering the field from other disciplines often find themselves hard pressed to keep abreast of current differences in style. This book attempts to strike a balance between these approaches by covering points from both technical and nontechnical perspectives, and by doing so in a way that is designed to hold the interest of readers of each persuasion. In recent years, a somewhat overwhelming number of books presenting general overviews of AI-related subjects have been placed on the market. These books serve an important function by providing researchers and others entering the field with progress reports and new developments.
This book constitutes the proceedings of the Third International Conference of the CLEF Initiative, CLEF 2012, held in Rome, Italy, in September 2012. The 14 papers and 3 poster abstracts presented were carefully reviewed and selected for inclusion in this volume. Furthermore, the book contains 2 keynote papers. The papers are organized in topical sections named: benchmarking and evaluation initiatives; information access; and evaluation methodologies and infrastructure.
Audio Signal Processing for Next-Generation Multimedia Communication Systems presents cutting-edge digital signal processing theory and implementation techniques for problems including speech acquisition and enhancement using microphone arrays, new adaptive filtering algorithms, multichannel acoustic echo cancellation, sound source tracking and separation, audio coding, and realistic sound stage reproduction. The book focuses almost exclusively on the processing, transmission, and presentation of audio and acoustic signals in multimedia communications for telecollaboration, where immersive acoustics will play a major role in the near future.
"Predicting Prosody from Text for Text-to-Speech Synthesis" covers the specific aspects of prosody, focusing on how to predict prosodic information from linguistic text and how to exploit the predicted prosodic knowledge in various speech applications. Author K. Sreenivasa Rao discusses the proposed methods alongside state-of-the-art techniques for acquiring and incorporating prosodic knowledge in speech systems. Positional, contextual and phonological features are proposed for representing the linguistic and production constraints of the sound units present in the text. This book is intended for graduate students and researchers working in the area of speech processing.
Edited in collaboration with FoLLI, the Association of Logic, Language and Information, this book constitutes the refereed proceedings of the 7th International Conference on Logical Aspects of Computational Linguistics, LACL 2012, held in Nantes, France, in July 2012. The 15 revised full papers presented together with 2 invited talks were carefully reviewed and selected from 24 submissions. The papers are organized in topical sections on logical foundation of syntactic formalisms, logics for semantics of lexical items, sentences, discourse and dialog, applications of these models to natural language processing, type theoretic, proof theoretic, model theoretic and other logically based formal methods for describing natural language syntax, semantics and pragmatics, as well as the implementation of natural language processing software relying on such methods.
"Phonetic Search Methods for Large Databases" focuses on keyword spotting (KWS) within large speech databases. The brief begins by outlining the challenges of keyword spotting in large speech databases using dynamic keyword vocabularies. It then highlights the various market segments in need of KWS solutions, as well as the specific requirements of each market segment. The work also includes a detailed description of the complexity of the task and the different methods that are used, including the advantages and disadvantages of each method and an in-depth comparison. The main focus is on the phonetic search method and its efficient implementation. This includes a literature review of the various methods used for the efficient implementation of phonetic search keyword spotting, with an emphasis on the authors' own research, which entails a comparative analysis of the phonetic search method, including algorithmic details. This brief is useful for researchers and developers in academia and industry from the fields of speech processing and speech recognition, specifically keyword spotting.
The contributions to this volume are drawn from the interdisciplinary research carried out within the Sonderforschungsbereich (SFB 378), a special long-term funding scheme of the German National Science Foundation (DFG). Sonderforschungsbereich 378 was situated at Saarland University, with colleagues from artificial intelligence, computational linguistics, computer science, philosophy, psychology and, in its final phases, cognitive neuroscience and psycholinguistics. The funding covered a period of 12 years, split into four phases of 3 years each, ending in December 2007. Every sub-period culminated in an intensive reviewing process, comprising written reports as well as on-site presentations and demonstrations to the external reviewers. We are most grateful to these reviewers for their extensive support and critical feedback; they contributed their time and labor freely to the DFG, the independent and self-organized institution of German scientists. The final evaluation of the DFG reviewers judged the overall performance and the actual work with the highest possible mark, i.e. "excellent".
This book grew out of the Fourth Conference on Computers and the Writing Process, held at the University of Sussex in March 1991. The conference brought together a wide variety of people interested in most aspects of computers and the writing process, including computers and writing education, computer-supported fiction, computers and technical writing, evaluation of computer-based writing, and hypertext. Fifteen papers were selected from the twenty-five delivered at the conference. The authors were asked to develop them into articles, incorporating any insights they had gained from their conference presentations. This book offers a survey of the wide area of computers and writing, and describes current work in the design and use of computer-based tools for writing. The collection was published simultaneously as a special issue, Volume 21(1-3), of Instructional Science - An International Journal of Learning and Cognition (Kluwer Academic Publishers, Dordrecht, 1992), with an introduction by Mike Sharples, School of Cognitive and Computing Sciences, University of Sussex, Falmer, Brighton BN1 9QH, United Kingdom.
Entropy Guided Transformation Learning: Algorithms and Applications presents entropy guided transformation learning (ETL), a machine learning algorithm for classification tasks. ETL generalizes transformation-based learning (TBL) by solving the TBL bottleneck: the construction of good template sets. ETL generates templates automatically using decision tree decomposition. The authors also describe ETL Committee, an ensemble method that uses ETL as the base learner; experimental results show that ETL Committee improves the effectiveness of ETL classifiers. ETL is applied to four natural language processing (NLP) tasks: part-of-speech tagging, phrase chunking, named entity recognition and semantic role labeling. Extensive experimental results demonstrate that ETL is an effective way to learn accurate transformation rules, and that it outperforms TBL with handcrafted templates on all four tasks. By avoiding handcrafted templates, ETL extends the use of transformation rules to a greater range of tasks. Suitable for both advanced undergraduate and graduate courses, Entropy Guided Transformation Learning: Algorithms and Applications provides a comprehensive introduction to ETL and its NLP applications.
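For readers unfamiliar with the transformation rules that TBL and ETL learn, here is a minimal sketch of applying such rules to a tag sequence. The rule format, function names, and example rule are illustrative assumptions, not the book's code; in ETL the rule templates are induced automatically rather than written by hand as below.

```python
# Minimal sketch of TBL-style transformation rules: each rule rewrites a
# tag when a contextual test fires, and rules are applied in learned order
# so later rules see the corrections made by earlier ones.
def apply_rules(words, tags, rules):
    tags = list(tags)                         # work on a copy
    for from_tag, to_tag, test in rules:      # rules in learned order
        for i in range(len(tags)):
            if tags[i] == from_tag and test(words, tags, i):
                tags[i] = to_tag
    return tags

# Illustrative rule: retag a noun as a verb when the previous tag is the
# infinitive marker "TO" (e.g. "to race").
rules = [("NN", "VB", lambda ws, ts, i: i > 0 and ts[i - 1] == "TO")]
```

For example, `apply_rules(["to", "race"], ["TO", "NN"], rules)` corrects the tagging to `["TO", "VB"]`, while `["DT", "NN"]` for "the race" is left unchanged. ETL's contribution is learning which contextual tests (templates) are worth considering, via entropy-guided decision tree decomposition.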
This manual contains an up-to-date description of the existing anthologies (with a linguistic focus) and corpora that have so far been compiled for the different Romance languages. This description takes into account both the standard languages and a selection of well-attested diatopic and diastratic varieties as well as Romance-based Creoles. Representative texts and detailed commentaries are provided for all the languages and varieties discussed.
In knowledge-based natural language generation, issues of formal knowledge representation meet with the linguistic problems of choosing the most appropriate verbalization in a particular situation of utterance. Lexical Semantics and Knowledge Representation in Multilingual Text Generation presents a new approach to systematically linking the realms of lexical semantics and knowledge represented in a description logic. For language generation from such abstract representations, lexicalization is taken as the central step: when choosing words that cover the various parts of the content representation, the principal decisions on conveying the intended meaning are made. A preference mechanism is used to construct the utterance that is best tailored to parameters representing the context. Lexical Semantics and Knowledge Representation in Multilingual Text Generation develops the means for systematically deriving a set of paraphrases from the same underlying representation with the emphasis on events and verb meaning. Furthermore, the same mapping mechanism is used to achieve multilingual generation: English and German output are produced in parallel, on the basis of an adequate division between language-neutral and language-specific (lexical and grammatical) knowledge. Lexical Semantics and Knowledge Representation in Multilingual Text Generation provides detailed insights into designing the representations and organizing the generation process. Readers with a background in artificial intelligence, cognitive science, knowledge representation, linguistics, or natural language processing will find a model of language production that can be adapted to a variety of purposes.