This white paper is part of a series that promotes knowledge about language technology and its potential. It addresses educators, journalists, politicians, language communities and others. The availability and use of language technology in Europe varies between languages. Consequently, the actions required to further support research and development of language technologies also differ for each language, depending on many factors, such as the complexity of a given language and the size of its community. META-NET, a Network of Excellence funded by the European Commission, has conducted an analysis of current language resources and technologies, focusing on the 23 official European languages as well as other important national and regional languages in Europe. The results of this analysis suggest that there are many significant research gaps for each language. A more detailed expert analysis and assessment of the current situation will help maximise the impact of additional research and minimise any risks. META-NET consists of 54 research centres from 33 countries that are working with stakeholders from commercial businesses, government agencies, industry, research organisations, software companies, technology providers and European universities. Together, they are creating a common technology vision and developing a strategic research agenda that shows how language technology applications can address the research gaps by 2020.
It is well known that phonemes have different acoustic realizations depending on the context. Thus, for example, the phoneme /t/ is typically realized with a heavily aspirated strong burst at the beginning of a syllable, as in the word Tom, but without a burst at the end of a syllable in a word like cat. Variation such as this is often considered to be problematic for speech recognition: (1) "In most systems for sentence recognition, such modifications must be viewed as a kind of 'noise' that makes it more difficult to hypothesize lexical candidates given an input phonetic transcription. To see that this must be the case, we note that each phonological rule [in a certain example] results in irreversible ambiguity: the phonological rule does not have a unique inverse that could be used to recover the underlying phonemic representation for a lexical item. For example, . . . schwa vowels could be the first vowel in a word like 'about' or the surface realization of almost any English vowel appearing in a sufficiently destressed word. The tongue flap [ɾ] could have come from a /t/ or a /d/." [65, pp. 548-549] This view of allophonic variation is representative of much of the speech recognition literature, especially during the late 1970s. One can find similar statements by Cole and Jakimik [22] and by Jelinek [50].
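To make the quoted point concrete, here is a minimal sketch of why inverting such phonological rules yields multiple lexical hypotheses; the phone inventory and inverse rules are hypothetical simplifications, not taken from the text:

```python
# Minimal sketch: inverting allophonic rules is one-to-many.
# The flap [ɾ] may realize underlying /t/ or /d/, so one surface
# transcription yields several lexical hypotheses.

# Hypothetical surface-to-underlying mapping (illustrative only).
INVERSE_RULES = {
    "ɾ": ["t", "d"],                  # flapping neutralizes /t/ and /d/
    "ə": ["ə", "æ", "ɛ", "ɪ", "ʌ"],   # schwa from vowel reduction
}

def underlying_candidates(surface_phones):
    """Enumerate all underlying phoneme strings consistent with the surface form."""
    candidates = [""]
    for phone in surface_phones:
        options = INVERSE_RULES.get(phone, [phone])
        candidates = [c + p for c in candidates for p in options]
    return candidates

# A flapped pronunciation shared by 'writer' and 'rider':
print(underlying_candidates(["r", "aɪ", "ɾ", "ər"]))
# ['raɪtər', 'raɪdər'] -- two lexical hypotheses from one transcription
```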
This book provides a precise and thorough description of the meaning and use of spatial expressions, from both a linguistic and an artificial intelligence perspective, together with an enlightening discussion of computer models of comprehension and production in the spatial domain. The author proposes a theoretical framework that explains many previously overlooked or misunderstood irregularities. The use of prepositions reveals underlying schematisations and idealisations of the spatial world which, for the most part, echo representational structures necessary for human action (movement and manipulation). Because spatial cognition seems to provide a key to understanding much of the cognitive system, including language, the book addresses one of the most basic questions confronting cognitive science and artificial intelligence, and brings fresh and original insights to it.
Contents:
Authors and Participants
I. Pragmatic Aspects
1. Some pragmatic decision criteria in generation (Eduard H. Hovy)
2. How to appear to be conforming to the 'maxims' even if you prefer to violate them (Anthony Jameson)
3. Contextual effects on responses to misconceptions (Kathleen F. McCoy)
4. Generating understandable explanatory sentences (Domenico Parisi & Donatella Ferrante)
5. Toward a plan-based theory of referring actions (Douglas E. Appelt)
6. Generating referring expressions and pointing gestures (Norbert Reithinger)
II. Generation of Connected Discourse
7. Rhetorical Structure Theory: description and construction of text structures (William C. Mann & Sandra A. Thompson)
8. Discourse strategies for describing complex physical objects (Cecile L. Paris & Kathleen R. McKeown)
9. Strategies for generating coherent descriptions of object movements in street scenes (Hans-Joachim Novak)
10. The automated news agency: SEMTEX - a text generator for German (Dietmar Rösner)
11. A connectionist approach to the generation of abstracts (Kôiti Hasida, Shun Ishizaki & Hitoshi Isahara)
III. Generator Design
12. Factors contributing to efficiency in natural language generation (David D. McDonald, Marie M. Vaughan & James D. Pustejovsky)
13. Reviewing as a component of the text generation process (Masoud Yazdani)
14. A French and English syntactic component for generation (Laurence Danlos)
15. KING: a knowledge-intensive natural language generator (Paul S. Jacobs)
IV. Grammars and Grammatical Formalisms
16. The relevance of Tree Adjoining Grammar to generation (Aravind K. Joshi)
"Emotion Recognition Using Speech Features" provides coverage of emotion-specific features present in speech. The author also discusses suitable models for capturing emotion-specific information for distinguishing different emotions. The content of this book is important for designing and developing natural and sophisticated speech systems. In this Brief, Drs. Rao and Koolagudi lead a discussion of how emotion-specific information is embedded in speech and how to acquire emotion-specific knowledge using appropriate statistical models. Additionally, the authors provide information about exploiting multiple evidences derived from various features and models. The acquired emotion-specific knowledge is useful for synthesizing emotions. Features includes discussion of: * Global and local prosodic features at syllable, word and phrase levels, helpful for capturing emotion-discriminative information; * Exploiting complementary evidences obtained from excitation sources, vocal tract systems and prosodic features in order to enhance the emotion recognition performance; * Proposed multi-stage and hybrid models for improving the emotion recognition performance. This brief is for researchers working in areas related to speech-based products such as mobile phone manufacturing companies, automobile companies, and entertainment products as well as researchers involved in basic and applied speech processing research.
Parsing with Principles and Classes of Information presents a parser based on current principle-based linguistic theories for English. It argues that differences in the kind of information being computed, whether lexical, structural or syntactic, play a crucial role in the mapping from grammatical theory to parsing algorithms. The direct encoding of homogeneous classes of information has computational and cognitive advantages, which are discussed in detail. Phrase structure is built by using a fast algorithm and compact reference tables. A quantified comparison of different compilation methods shows that lexical and structural information are most compactly represented by separate tables. This finding is reconciled with evidence on the resolution of lexical ambiguity, as an approach to the modularization of information. The same design is applied to the efficient computation of long-distance dependencies. Incremental parsing using bottom-up tabular algorithms is discussed in detail. Finally, locality restrictions are calculated by a parametric algorithm. Students of linguistics, parsing and psycholinguistics will find this book a useful resource on issues related to the implementation of current linguistic theories, using computationally and cognitively plausible algorithms.
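The bottom-up tabular parsing mentioned here can be illustrated with a minimal CKY-style recognizer; the toy grammar and lexicon below are hypothetical stand-ins, not the book's principle-based system:

```python
from collections import defaultdict
from itertools import product

GRAMMAR = {            # CNF rules: (left child, right child) -> parents
    ("NP", "VP"): {"S"},
    ("Det", "N"): {"NP"},
    ("V", "NP"): {"VP"},
}
LEXICON = {"the": {"Det"}, "dog": {"N"}, "cat": {"N"}, "saw": {"V"}}

def cky_recognize(words):
    """Bottom-up tabular recognition: fill a chart of labeled spans."""
    n = len(words)
    chart = defaultdict(set)
    for i, w in enumerate(words):                 # lexical entries
        chart[(i, i + 1)] = set(LEXICON.get(w, ()))
    for span in range(2, n + 1):                  # longer spans, bottom-up
        for i in range(n - span + 1):
            k = i + span
            for j in range(i + 1, k):             # every split point
                for left, right in product(chart[(i, j)], chart[(j, k)]):
                    chart[(i, k)] |= GRAMMAR.get((left, right), set())
    return "S" in chart[(0, n)]

print(cky_recognize("the dog saw the cat".split()))  # True
```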
This book constitutes the proceedings of the First International Conference on Knowledge - Ontology - Theory (KONT 2007) held in Novosibirsk, Russia, in September 2007 and the First International Conference on Knowledge Processing in Practice (KPP 2007) held in Darmstadt, Germany, in September 2007. The 21 revised full papers were carefully reviewed and selected from numerous submissions and cover four main focus areas: applications of conceptual structures; concept based software; ontologies as conceptual structures; and data analysis.
This book presents computational mechanisms for solving common language interpretation problems, including many cases of reference resolution, word sense disambiguation, and the interpretation of relationships implicit in modifiers. The proposed memory and context mechanisms provide the means for representing and applying information about the semantic relationships between entities imposed by the cultural context. The effects of different 'context factors', derived from multiple sources, are combined for disambiguation and for limiting memory search; these factors are created and adjusted gradually during discourse processing.
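A minimal sketch of how weighted context factors might be combined to rank candidate senses; the factors, weights and sense labels are hypothetical illustrations, not the book's actual mechanism:

```python
def combine_factors(candidates, factors):
    """Score each candidate sense by summing weighted factor scores;
    factors is a list of (weight, scoring_function) pairs."""
    scores = {sense: sum(w * f(sense) for w, f in factors)
              for sense in candidates}
    return max(scores, key=scores.get), scores

# Toy example: disambiguating 'bank' with two context factors.
topic_factor = lambda s: 1.0 if s == "bank/finance" else 0.1
recency_factor = lambda s: 0.8 if s == "bank/river" else 0.3
best, scores = combine_factors(
    ["bank/finance", "bank/river"],
    [(0.7, topic_factor), (0.3, recency_factor)],
)
print(best, scores)
```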
That philosophical themes could be studied in an exact manner by logical means was a delightful discovery to make. Until then, the only outlet for a philosophical interest known to me was the production of poetry or essays. These means of expression remain inconclusive, however, with a tendency towards profuseness. The logical discipline provides some intellectual backbone, without excluding the literary modes. A master's thesis by Erik Krabbe introduced me to the subject of tense logic. The doctoral dissertation of Paul Needham awakened me (as so many others) from my dogmatic slumbers concerning the latter's monopoly on the logical study of Time. Finally, a set of lecture notes by Frank Veltman showed me how classical model theory is just as relevant to that study as more exotic intensional techniques. Of the authors whose work inspired me most, I would mention Arthur Prior, for his irresistible blend of logic and philosophy, Krister Segerberg, for his technical opening up of a systematic theory, and Hans Kamp, for his mastery of all these things at once. Many colleagues have made helpful comments on the two previous versions of this text. I would like to thank especially my students Ed Brinksma, Jan van Eyck and Wilfried Meyer-Viol for their logical and cultural criticism. The drawings were contributed by the versatile Bauke Mulder. Finally, Professor Hintikka's kind appreciation provided the stimulus to write this book.
The general markup language XML has played an outstanding role in the multiple ways of processing electronic documents, XML being used either in the design of interface structures or as a formal framework for the representation of structure- or content-related properties of documents. This book in its 13 chapters discusses aspects of XML-based linguistic information modeling, combining methodological issues, especially with respect to text-related information modeling, application-oriented research, and issues of formal foundations. The contributions in this book are based on current research in Text Technology, Computational Linguistics and the international domain of evolving standards for language resources. Recurrent themes in this book are markup languages, explored from different points of view, and topics of text-related information modeling. These topics have been core areas of the Research Unit "Text-technological Information Modeling" (www.text-technology.de), funded from 2002 to 2009 by the German Research Foundation (DFG). Positions developed in this book also benefited from the presentations and discussions at the conference "Modelling Linguistic Information Resources" at the Center for Interdisciplinary Research (Zentrum für interdisziplinäre Forschung, ZiF) in Bielefeld, a center for advanced studies known for its international and interdisciplinary meetings and research. The editors would like to thank the DFG and ZiF for their financial support; the publisher, the series editors, the reviewers and those who helped to prepare the manuscript, especially Carolin Kram, Nils Diewald, Jens Stegmann and Peter M. Fischer; and, last but not least, all of the authors.
This book constitutes the proceedings of the Third International Conference of the CLEF Initiative, CLEF 2012, held in Rome, Italy, in September 2012. The 14 papers and 3 poster abstracts presented were carefully reviewed and selected for inclusion in this volume. Furthermore, the book contains two keynote papers. The papers are organized in topical sections named: benchmarking and evaluation initiatives; information access; and evaluation methodologies and infrastructure.
This book presents the state of the art in the areas of ontology evolution and knowledge-driven multimedia information extraction, placing an emphasis on how the two can be combined to bridge the semantic gap. This was also the goal of the EC-sponsored BOEMIE (Bootstrapping Ontology Evolution with Multimedia Information Extraction) project, to which the authors of this book have all contributed. The book addresses researchers and practitioners in the field of computer science and more specifically in knowledge representation and management, ontology evolution, and information extraction from multimedia data. It may also constitute an excellent guide to students attending courses within a computer science study program, addressing information processing and extraction from any type of media (text, images, and video). Among other things, the book gives concrete examples of how several of the methods discussed can be applied to athletics (track and field) events.
This book is a collection of papers using samples of real language data (corpora) to explore variation in the use of English. This collection celebrates the achievements of Toshio Saito, a pioneer in corpus linguistics within Japan and founder of the Japan Association for English Corpus Studies (JAECS). The main aims throughout the collection are to present practical solutions for methodological and interpretational problems common in such research, and to make the research methods and issues as accessible as possible, to educate and inspire future researchers. Together, the papers represent many different dimensions of variation, including: differences in (frequency of) use under different linguistic conditions; differences between styles or registers of use; change over time; differences between regional varieties; differences between social groups; and differences in use by one individual on different occasions. The papers are grouped into four sections: studies considering methodological problems in the use of real language samples; studies describing features of language usage in different linguistic environments in modern English; studies following change over time; and case studies illustrating variation in usage for different purposes, or by different groups or individuals, in society.
This volume is a collection of original contributions from outstanding scholars in linguistics, philosophy and computational linguistics exploring the relation between word meaning and human linguistic creativity. The papers present different aspects of the question of what word meaning is, a problem that has been at the centre of heated debate in all those disciplines directly or indirectly concerned with the study of language and of human cognition. The discussions are centred on a view of the mental lexicon as outlined in the Generative Lexicon theory (Pustejovsky, 1995), which proposes a unified model for defining word meaning. The individual contributors present their evidence for a generative approach as well as critical perspectives, so that word meaning is viewed not from a single angle or concern but through a wide variety of topics, each introduced and explained by the editors.
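As a small illustration of the kind of lexical representation at issue, here is a sketch of a qualia structure in the spirit of the Generative Lexicon; the fields follow Pustejovsky's four qualia roles, but the encoding and example entries are simplified illustrations, not the theory's formal notation:

```python
from dataclasses import dataclass

@dataclass
class Qualia:
    formal: str        # what kind of thing it is
    constitutive: str  # what it is made of / its parts
    telic: str         # its purpose or function
    agentive: str      # how it comes into being

book = Qualia(
    formal="physical object / information",
    constitutive="chapters, pages",
    telic="read",
    agentive="write",
)
# 'begin a book' can be enriched via the telic role: begin *reading* it.
print(f"begin a book -> begin to {book.telic} it")
```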
Text classification is becoming a crucial task to analysts in different areas. In the last few decades, the production of textual documents in digital form has increased exponentially. Their applications range from web pages to scientific documents, including emails, news and books. Despite the widespread use of digital texts, handling them is inherently difficult - the large amount of data necessary to represent them and the subjectivity of classification complicate matters. This book gives a concise view on how to use kernel approaches for inductive inference in large scale text classification; it presents a series of new techniques to enhance, scale and distribute text classification tasks. It is not intended to be a comprehensive survey of the state-of-the-art of the whole field of text classification. Its purpose is less ambitious and more practical: to explain and illustrate some of the important methods used in this field, in particular kernel approaches and techniques.
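A minimal sketch of a kernel approach to text classification using scikit-learn (an assumed dependency); the tiny corpus is illustrative, and the book's scaling and distribution techniques go well beyond this baseline:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

docs = [
    "stocks fell sharply on the exchange",
    "the central bank raised interest rates",
    "the team won the championship final",
    "the striker scored twice in the match",
]
labels = ["finance", "finance", "sport", "sport"]

# TF-IDF features feed a support vector machine; for large-scale text
# a linear kernel is the usual choice, since documents live in a
# high-dimensional, sparse feature space.
model = make_pipeline(TfidfVectorizer(), SVC(kernel="linear"))
model.fit(docs, labels)
print(model.predict(["rates and stocks moved today"]))  # likely ['finance']
```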
This book constitutes the refereed proceedings of the 15th and 16th International Conference on Formal Grammar 2010 and 2011, collocated with the European Summer School in Logic, Language and Information in July 2010/2011. The 19 revised full papers were carefully reviewed and selected from a total of 50 submissions. The papers deal with the following topics: formal and computational phonology, morphology, syntax, semantics and pragmatics; model-theoretic and proof-theoretic methods in linguistics; logical aspects of linguistic structure; constraint-based and resource-sensitive approaches to grammar; learnability of formal grammar; integration of stochastic and symbolic models of grammar; foundational, methodological and architectural issues in grammar; and mathematical foundations of statistical approaches to linguistic analysis.
This book constitutes the refereed proceedings of the 4th Language and Technology Conference: Challenges for Computer Science and Linguistics, LTC 2009, held in Poznan, Poland, in November 2009. The 52 revised and in many cases substantially extended papers presented in this volume were carefully reviewed and selected from 103 submissions. The contributions are organized in topical sections on speech processing, computational morphology/lexicography, parsing, computational semantics, dialogue modeling and processing, digital language resources, WordNet, document processing, information processing, and machine translation.
This two-volume set, consisting of LNCS 6608 and LNCS 6609, constitutes the thoroughly refereed proceedings of the 12th International Conference on Computational Linguistics and Intelligent Text Processing, held in Tokyo, Japan, in February 2011. The 74 full papers, presented together with 4 invited papers, were carefully reviewed and selected from 298 submissions. The contents have been ordered according to the following topical sections: lexical resources; syntax and parsing; part-of-speech tagging and morphology; word sense disambiguation; semantics and discourse; opinion mining and sentiment detection; text generation; machine translation and multilingualism; information extraction and information retrieval; text categorization and classification; summarization and recognizing textual entailment; authoring aid, error correction, and style analysis; and speech recognition and generation.
For some time now, discourse within the field of Translation Studies has increasingly focused on the translator and on the mental processes at work during translation. Recent advances in technology have opened up many possibilities for gaining deeper insight into these processes. This publication presents the theoretical foundations, the results of scientific experiments, and a broad range of questions to be asked and answered by eye-tracking-supported translation studies. The texts are arranged in two thematic parts: the first consists of texts dedicated to the theoretical foundations of Translation Studies-oriented eye-tracking research; the second includes texts discussing the results of the experiments that were carried out.
Geometric Data Analysis (GDA) is the name suggested by P. Suppes (Stanford University) for the approach to Multivariate Statistics initiated by Benzécri as Correspondence Analysis, an approach that has become more and more used and appreciated over the years. This book presents the full formalization of GDA in terms of linear algebra, the most original and far-reaching feature of the approach, and also shows how to integrate standard statistical tools such as Analysis of Variance, including Bayesian methods. Chapter 9, Research Case Studies, is nearly a book in itself; it presents the methodology in action on three extensive applications, one from medicine, one from political science, and one from education (data borrowed from the Stanford computer-based Educational Program for Gifted Youth). The book will thus appeal both to mathematicians interested in the applications of mathematics and to researchers wishing to master an exceptionally powerful approach to statistical data analysis.
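To make the linear-algebra core concrete, here is a minimal sketch of simple Correspondence Analysis via an SVD of the standardized residuals of a contingency table; the toy table is illustrative, and this is a bare-bones version, not the book's full GDA formalization:

```python
import numpy as np

def correspondence_analysis(table):
    """Simple CA: SVD of standardized residuals of a contingency table."""
    P = table / table.sum()                    # correspondence matrix
    r, c = P.sum(axis=1), P.sum(axis=0)        # row and column masses
    expected = np.outer(r, c)                  # independence model
    S = (P - expected) / np.sqrt(expected)     # standardized residuals
    U, sing, Vt = np.linalg.svd(S, full_matrices=False)
    row_coords = (U * sing) / np.sqrt(r)[:, None]    # principal coordinates
    col_coords = (Vt.T * sing) / np.sqrt(c)[:, None]
    return row_coords, col_coords, sing**2     # coords and principal inertias

# Toy contingency table: rows = groups, columns = response categories.
table = np.array([[20.0, 5.0, 2.0],
                  [10.0, 15.0, 5.0],
                  [3.0, 8.0, 22.0]])
rows, cols, inertia = correspondence_analysis(table)
print(np.round(inertia, 3))   # principal inertias (eigenvalues)
```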
Entropy Guided Transformation Learning: Algorithms and Applications presents ETL, a machine learning algorithm for classification tasks. ETL generalizes Transformation Based Learning (TBL) by solving the TBL bottleneck: the construction of good template sets. ETL automatically generates templates using Decision Tree decomposition. The authors also describe ETL Committee, an ensemble method that uses ETL as the base learner; experimental results show that ETL Committee improves the effectiveness of ETL classifiers. The application of ETL to four Natural Language Processing (NLP) tasks is presented: part-of-speech tagging, phrase chunking, named entity recognition and semantic role labeling. Extensive experimental results demonstrate that ETL is an effective way to learn accurate transformation rules, showing better results than TBL with handcrafted templates on all four tasks. By avoiding handcrafted templates, ETL makes transformation rules applicable to a greater range of tasks. Suitable for both advanced undergraduate and graduate courses, Entropy Guided Transformation Learning: Algorithms and Applications provides a comprehensive introduction to ETL and its NLP applications.
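For readers unfamiliar with transformation rules, here is a minimal sketch of how learned rules correct a baseline POS tagging; the rules are hypothetical instantiations of a simple template, and ETL's entropy-guided template generation is not reproduced here:

```python
# Each rule is an instantiation of the template
# (current_tag, previous_tag) -> new_tag.
RULES = [
    ("VB", "DT", "NN"),   # verb after determiner -> likely a noun
    ("NN", "TO", "VB"),   # noun after 'to' -> likely a verb
]

def apply_rules(tags):
    """Apply each transformation rule left-to-right over the sequence."""
    tags = list(tags)
    for cur, prev, new in RULES:
        for i in range(1, len(tags)):
            if tags[i] == cur and tags[i - 1] == prev:
                tags[i] = new
    return tags

# 'the run' initially mistagged as DT VB by a baseline tagger:
print(apply_rules(["DT", "VB"]))   # ['DT', 'NN']
```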
The subject of Time has a wide intellectual appeal across different disciplines. This is shown by the variety of reactions received from readers of the first edition of the present book. Many have reacted to issues raised in its philosophical discussions, while some have even solved a number of the open technical questions raised in the logical elaboration of the latter. These results will be recorded below, at a more convenient place. In the seven years since the first publication, there have been some noticeable newer developments in the logical study of Time and temporal expressions. As far as Temporal Logic proper is concerned, it seems fair to say that these amount to an increase in coverage and sophistication, rather than further breakthrough innovation. In fact, perhaps the most significant sources of new activity have been the applied areas of Linguistics and Computer Science (including Artificial Intelligence), where many intriguing new ideas have appeared, presenting further challenges to temporal logic. Now, since this book has a rather tight composition, it would have been difficult to interpolate this new material without endangering intelligibility.
You may like...
Spelling and Writing Words - Theoretical… (Cyril Perret, Thierry Olive) Hardcover, R3,400 (Discovery Miles 34 000)
Trends in E-Tools and Resources for… (Gloria Corpas Pastor, Isabel Duran Munoz) Hardcover, R3,694 (Discovery Miles 36 940)
Foundation Models for Natural Language… (Gerhard Paaß, Sven Giesselbach) Hardcover, R884 (Discovery Miles 8 840)
The Art and Science of Machine… (Walker H. Land Jr., J. David Schaffer) Hardcover, R4,039 (Discovery Miles 40 390)
Linguistic Inquiries into Donald… (Ulrike Schneider, Matthias Eitelmann) Hardcover, R4,314 (Discovery Miles 43 140)
Corpus Stylistics in Heart of Darkness… (Lorenzo Mastropierro) Hardcover, R4,635 (Discovery Miles 46 350)
From Data to Evidence in English… (Carla Suhr, Terttu Nevalainen, …) Hardcover, R4,797 (Discovery Miles 47 970)
Artificial Intelligence for Healthcare… (Boris Galitsky, Saveli Goldberg) Paperback, R2,991 (Discovery Miles 29 910)
The Oxford Handbook of Information… (Caroline Fery, Shinichiro Ishihara) Hardcover, R4,569 (Discovery Miles 45 690)