Welcome to Loot.co.za!
Books > Language & Literature > Language & linguistics > Computational linguistics
This white paper is part of a series that promotes knowledge about language technology and its potential. It addresses educators, journalists, politicians, language communities and others. The availability and use of language technology in Europe varies between languages. Consequently, the actions that are required to further support research and development of language technologies also differ for each language. The required actions depend on many factors, such as the complexity of a given language and the size of its community. META-NET, a Network of Excellence funded by the European Commission, has conducted an analysis of current language resources and technologies. This analysis focused on the 23 official European languages as well as other important national and regional languages in Europe. The results of this analysis suggest that there are many significant research gaps for each language. A more detailed expert analysis and assessment of the current situation will help maximise the impact of additional research and minimise any risks. META-NET consists of 54 research centres from 33 countries that are working with stakeholders from commercial businesses, government agencies, industry, research organisations, software companies, technology providers and European universities. Together, they are creating a common technology vision while developing a strategic research agenda that shows how language technology applications can address any research gaps by 2020.
"Predicting Prosody from Text for Text-to-Speech Synthesis" covers the specific aspects of prosody, mainly focusing on how to predict prosodic information from linguistic text, and then how to exploit the predicted prosodic knowledge in various speech applications. Author K. Sreenivasa Rao discusses proposed methods along with state-of-the-art techniques for the acquisition and incorporation of prosodic knowledge for developing speech systems. Positional, contextual and phonological features are proposed for representing the linguistic and production constraints of the sound units present in the text. This book is intended for graduate students and researchers working in the area of speech processing.
This two-volume set, consisting of LNCS 6608 and LNCS 6609, constitutes the thoroughly refereed proceedings of the 12th International Conference on Computational Linguistics and Intelligent Text Processing, CICLing 2011, held in Tokyo, Japan, in February 2011. The 74 full papers, presented together with 4 invited papers, were carefully reviewed and selected from 298 submissions. The contents have been ordered according to the following topical sections: lexical resources; syntax and parsing; part-of-speech tagging and morphology; word sense disambiguation; semantics and discourse; opinion mining and sentiment detection; text generation; machine translation and multilingualism; information extraction and information retrieval; text categorization and classification; summarization and recognizing textual entailment; authoring aid, error correction, and style analysis; and speech recognition and generation.
Geometric Data Analysis (GDA) is the name suggested by P. Suppes (Stanford University) to designate the approach to Multivariate Statistics initiated by Benzécri as Correspondence Analysis, an approach that has become more and more used and appreciated over the years. This book presents the full formalization of GDA in terms of linear algebra - the most original and far-reaching consequential feature of the approach - and also shows how to integrate standard statistical tools such as Analysis of Variance, including Bayesian methods. Chapter 9, Research Case Studies, is nearly a book in itself; it presents the methodology in action on three extensive applications, one from medicine, one from political science, and one from education (data borrowed from the Stanford computer-based Educational Program for Gifted Youth). The book is thus addressed both to mathematicians interested in the applications of mathematics and to researchers wishing to master an exceptionally powerful approach to statistical data analysis.
This book is a revised version of my doctoral thesis which was submitted in April 1993. The main extension is a chapter on evaluation of the system described in Chapter 8, as this is clearly an issue which was not treated in the original version. This required the collection of data, the development of a concept for diagnostic evaluation of linguistic word recognition systems and, of course, the actual evaluation of the system itself. The revisions made primarily concern the presentation of the latest version of the SILPA system described in an additional Subsection 8.3, the development environment for SILPA in Section 8.4, and the diagnostic evaluation of the system as an additional Chapter 9. Some updates are included in the discussion of phonology and computation in Chapter 2 and finite state techniques in computational phonology in Chapter 3. The thesis was designed primarily as a contribution to the area of computational phonology. However, it addresses issues which are relevant within the disciplines of general linguistics, computational linguistics and, in particular, speech technology, in providing a detailed declarative, computationally interpreted linguistic model for application in spoken language processing. Time Map Phonology is a novel, constraint-based approach based on a two-stage temporal interpretation of phonological categories as events.
The subject of Time has a wide intellectual appeal across different disciplines. This shows in the variety of reactions received from readers of the first edition of the present book. Many have reacted to issues raised in its philosophical discussions, while some have even solved a number of the open technical questions raised in the logical elaboration of the latter. These results will be recorded below, at a more convenient place. In the seven years since the first publication, there have been some noticeable newer developments in the logical study of Time and temporal expressions. As far as Temporal Logic proper is concerned, it seems fair to say that these amount to an increase in coverage and sophistication, rather than further breakthrough innovation. In fact, perhaps the most significant sources of new activity have been the applied areas of Linguistics and Computer Science (including Artificial Intelligence), where many intriguing new ideas have appeared presenting further challenges to temporal logic. Now, since this book has a rather tight composition, it would have been difficult to interpolate this new material without endangering intelligibility.
Table of Contents: 1 Introduction (Franciska de Jong and Jan Landsbergen) / 2 A compositional definition of the translation relation (Jan Landsbergen) / 3 M-grammars (Jan Odijk) / 4 The translation process (Jan Landsbergen and Franciska de Jong) / 5 The Rosetta characteristics (Lisette Appelo) / 6 Morphology (Joep Rous and Harm Smit) / 7 Dictionaries (Jan Odijk, Harm Smit and Petra de Wit) / 8 Syntactic rules (Jan Odijk) / 9 Modular and controlled M-grammars (Lisette Appelo) / 10 Compositionality and syntactic generalisations (Jan Odijk) / 11 Incorporating theoretical linguistic insights (Jan Odijk and Elena Pinillos Bartolome) / 12 Divergences between languages (Lisette Appelo) / 13 Categorial divergences (Lisette Appelo) / 14 Translation of temporal expressions (Lisette Appelo) / 15 Idioms and complex predicates (Andre Schenk) / 16 Scope and negation (Lisette Appelo and Elly van Munster) / 17 The formal definition of M-grammars (Rene Leermakers and Jan Landsbergen) / 18 An attribute grammar view (Rene Leermakers and Joep Rous) / 19 An algebraic view (Theo Janssen) / 20 Software engineering aspects (Rene Leermakers) / 21 Conclusion (Jan Landsbergen). Chapter 1, Introduction, covers: knowledge needed for translation (knowledge of language and world knowledge; formalisation; the underestimation of linguistic problems; the notion of possible translation); applications; a linguistic perspective on MT (scope of the project; scope of the book); and the organisation of the book.
Computational Models of Mixed-Initiative Interaction brings together research that spans several disciplines related to artificial intelligence, including natural language processing, information retrieval, machine learning, planning, and computer-aided instruction, to account for the role that mixed initiative plays in the design of intelligent systems. The ten contributions address the single issue of how control of an interaction should be managed when abilities needed to solve a problem are distributed among collaborating agents. Managing control of an interaction among humans and computers to gather and assemble knowledge and expertise is a major challenge that must be met to develop machines that effectively collaborate with humans. This is the first collection to specifically address this issue.
1. Metaphors and Logic. Metaphors are among the most vigorous offspring of the creative mind; but their vitality springs from the fact that they are logical organisms in the ecology of language. I aim to use logical techniques to analyze the meanings of metaphors. My goal here is to show how contemporary formal semantics can be extended to handle metaphorical utterances. What distinguishes this work is that it focuses intensely on the logical aspects of metaphors. I stress the role of logic in the generation and interpretation of metaphors. While I don't presuppose any formal training in logic, some familiarity with philosophical logic (the propositional calculus and the predicate calculus) is helpful. Since my theory makes great use of the notion of structure, I refer to it as the structural theory of metaphor (STM). STM is a semantic theory of metaphor: if STM is correct, then metaphors are cognitively meaningful and are non-trivially logically linked with truth. I aim to extend possible worlds semantics to handle metaphors. I'll argue that some sentences in natural languages like English have multiple meanings: "Juliet is the sun" has (at least) two meanings: the literal meaning "(Juliet is the sun)LIT" and the metaphorical meaning "(Juliet is the sun)MET". Each meaning is a function from (possible) worlds to truth-values. I deny that these functions are identical; I deny that the metaphorical function is necessarily false or necessarily true.
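The blurb's central claim - that each reading of a sentence is a distinct intension, i.e. a function from possible worlds to truth values - can be sketched in a few lines of Python. The worlds, facts and predicate names below are invented purely for illustration and are not taken from the book.

```python
# Toy possible-worlds semantics: a meaning (intension) is a function
# from worlds to truth values. The literal and metaphorical readings
# of "Juliet is the sun" are modelled as two distinct such functions.
# All worlds and facts here are invented for illustration.

worlds = ["w1", "w2"]

# Invented facts per world.
facts = {
    "w1": {"juliet_is_a_star": False, "juliet_is_radiant": True},
    "w2": {"juliet_is_a_star": False, "juliet_is_radiant": False},
}

def literal(world):
    """LIT reading: Juliet is literally the sun (a star)."""
    return facts[world]["juliet_is_a_star"]

def metaphorical(world):
    """MET reading: Juliet stands to Romeo as the sun stands to the earth."""
    return facts[world]["juliet_is_radiant"]

# The two intensions are distinct functions: they disagree on w1, and
# the metaphorical reading is neither necessarily false nor necessarily true.
assert literal("w1") != metaphorical("w1")
assert any(metaphorical(w) for w in worlds)      # not necessarily false
assert not all(metaphorical(w) for w in worlds)  # not necessarily true
```

The assertions mirror the last two sentences of the blurb: the literal and metaphorical functions are not identical, and the metaphorical one is contingent.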
Parsing efficiency is crucial when building practical natural language systems. This is especially the case for interactive systems such as natural language database access, interfaces to expert systems and interactive machine translation. Despite its importance, parsing efficiency has received little attention in the area of natural language processing. In the areas of compiler design and theoretical computer science, on the other hand, parsing algorithms have been evaluated primarily in terms of theoretical worst-case analysis (e.g. O(n³)), and very few practical comparisons have been made. This book introduces a context-free parsing algorithm that parses natural language more efficiently than any other existing parsing algorithm in practice. Its feasibility for use in practical systems is being proven in its application to a Japanese language interface at Carnegie Group Inc., and to the continuous speech recognition project at Carnegie-Mellon University. This work was done while I was pursuing a Ph.D. degree at Carnegie-Mellon University. My advisers, Herb Simon and Jaime Carbonell, deserve many thanks for their unfailing support, advice and encouragement during my graduate studies. I would like to thank Phil Hayes and Ralph Grishman for their helpful comments and criticism that in many ways improved the quality of this book. I wish also to thank Steven Brooks for insightful comments on theoretical aspects of the book (chapter 4, appendices A, B and C), and Rich Thomason for improving the linguistic part of the book (the very beginning of section 1.1).
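As a simple point of comparison for the preface above: the classic CKY recognizer for a grammar in Chomsky normal form has exactly the O(n³) worst case the preface mentions. The sketch below is NOT the book's parsing algorithm, only a standard textbook baseline, and its toy grammar is invented.

```python
# Classic CKY recognizer for a context-free grammar in Chomsky normal
# form. This is a standard textbook baseline, not the algorithm the
# book presents; the toy grammar is invented for illustration.

binary = {("S", ("NP", "VP")), ("VP", ("V", "NP"))}          # A -> B C
lexical = {("NP", "time"), ("V", "flies"), ("NP", "arrow")}  # A -> word

def cky_recognize(words):
    n = len(words)
    # chart[i][j] holds the nonterminals that derive words[i:j]
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] = {a for a, word in lexical if word == w}
    for span in range(2, n + 1):          # widen spans bottom-up
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):     # try every split point
                for a, (b, c) in binary:
                    if b in chart[i][k] and c in chart[k][j]:
                        chart[i][j].add(a)
    return "S" in chart[0][n]

print(cky_recognize(["time", "flies", "arrow"]))  # True: S -> NP (V NP)
```

The three nested loops over span, start position and split point are the source of the cubic worst case.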
This volume is a selection of papers presented at a workshop entitled Predicative Forms in Natural Language and in Lexical Knowledge Bases, organized in Toulouse in August 1996. A predicate is a named relation that exists among one or more arguments. In natural language, predicates are realized as verbs, prepositions, nouns and adjectives, to cite the most frequent ones. Research on the identification, organization, and semantic representation of predicates in artificial intelligence and in language processing is a very active research field. The emergence of new paradigms in theoretical language processing, the definition of new problems and the important evolution of applications have, in fact, stimulated much interest and debate on the role and nature of predicates in natural language. From a broad theoretical perspective, the notion of predicate is central to research on the syntax-semantics interface, the generative lexicon, the definition of ontology-based semantic representations, and the formation of verb semantic classes. From a computational perspective, the notion of predicate plays a central role in a number of applications including the design of lexical knowledge bases, the development of automatic indexing systems for the extraction of structured semantic representations, and the creation of interlingual forms in machine translation.
Most of the books about computational (lexical) semantic lexicons deal with the depth (or content) aspect of lexicons, ignoring the breadth (or coverage) aspect. This book presents a first attempt in the community to address both issues, content and coverage of computational semantic lexicons, in a thorough manner. Moreover, it addresses issues which have not yet been tackled in implemented systems, such as the application time of lexical rules. Lexical rules and lexical underspecification are also contrasted in implemented systems. The main approaches in the field of computational (lexical) semantics are represented in the present book (including WordNet, Cyc, Mikrokosmos, and the Generative Lexicon). This book embraces several fields (and subfields) as different as: linguistics (theoretical, computational, semantics, pragmatics), psycholinguistics, cognitive science, computer science, artificial intelligence, knowledge representation, statistics and natural language processing. The book also constitutes a very good introduction to the state of the art in computational semantic lexicons of the late 1990s.
One of the aims of Natural Language Processing is to facilitate the use of computers by allowing their users to communicate in natural language. There are two important aspects to person-machine communication: understanding and generating. While natural language understanding has been a major focus of research, natural language generation is a relatively new and increasingly active field of research. This book presents an overview of the state of the art in natural language generation, describing both new results and directions for new research. The principal emphasis of natural language generation is not only to facilitate the use of computers but also to develop a computational theory of human language ability. In doing so, it is a tool for extending, clarifying and verifying theories that have been put forth in linguistics, psychology and sociology about how people communicate. A natural language generator will typically have access to a large body of knowledge from which to select information to present to users, as well as numerous ways of expressing it. Generating a text can thus be seen as a problem of decision-making under multiple constraints: constraints from the propositional knowledge at hand, from the linguistic tools available, from the communicative goals and intentions to be achieved, from the audience the text is aimed at and from the situation and past discourse. Researchers in generation try to identify the factors involved in this process and determine how best to represent the factors and their dependencies.
It is with great pleasure that we are presenting to the community the second edition of this extraordinary handbook. It has been over 15 years since the publication of the first edition and there have been great changes in the landscape of philosophical logic since then. The first edition has proved invaluable to generations of students and researchers in formal philosophy and language, as well as to consumers of logic in many applied areas. The main logic article in the Encyclopaedia Britannica 1999 has described the first edition as 'the best starting point for exploring any of the topics in logic'. We are confident that the second edition will prove to be just as good! The first edition was the second handbook published for the logic community. It followed the North Holland one-volume Handbook of Mathematical Logic, published in 1977, edited by the late Jon Barwise. The four-volume Handbook of Philosophical Logic, published 1983-1989, came at a fortunate temporal junction in the evolution of logic. This was the time when logic was gaining ground in computer science and artificial intelligence circles. These areas were under increasing commercial pressure to provide devices which help and/or replace the human in his daily activity. This pressure required the use of logic in the modelling of human activity and organisation on the one hand and to provide the theoretical basis for the computer program constructs on the other.
This series will include monographs and collections of studies devoted to the investigation and exploration of knowledge, information, and data processing systems of all kinds, no matter whether human, (other) animal, or machine. Its scope is intended to span the full range of interests from classical problems in the philosophy of mind and philosophical psychology through issues in cognitive psychology and sociobiology (concerning the mental capabilities of other species) to ideas related to artificial intelligence and computer science. While primary emphasis will be placed upon theoretical, conceptual, and epistemological aspects of these problems and domains, empirical, experimental, and methodological studies will also appear from time to time. The problems posed by metaphor and analogy are among the most challenging that confront the field of knowledge representation. In this study, Eileen Way has drawn upon the combined resources of philosophy, psychology, and computer science in developing a systematic and illuminating theoretical framework for understanding metaphors and analogies. While her work provides solutions to difficult problems of knowledge representation, it goes much further by investigating some of the most important philosophical assumptions that prevail within artificial intelligence today. By exposing the limitations inherent in the assumption that languages are both literal and truth-functional, she has advanced our grasp of the nature of language itself. J.R.F.
Data-Driven Techniques in Speech Synthesis gives a first review of this new field. All areas of speech synthesis from text are covered, including text analysis, letter-to-sound conversion, prosodic marking and extraction of parameters to drive synthesis hardware. Fuelled by cheap computer processing and memory, the fields of machine learning in particular and artificial intelligence in general are increasingly exploiting approaches in which large databases act as implicit knowledge sources, rather than explicit rules manually written by experts. Speech synthesis is one application area where the new approach is proving powerfully effective, the reliance upon fragile specialist knowledge having hindered its development in the past. This book provides the first review of the new topic, with contributions from leading international experts. Data-Driven Techniques in Speech Synthesis is at the leading edge of current research, written by well respected experts in the field. The text is concise and accessible, and guides the reader through the new technology. The book will primarily appeal to research engineers and scientists working in the area of speech synthesis. However, it will also be of interest to speech scientists and phoneticians as well as managers and project leaders in the telecommunications industry who need an appreciation of the capabilities and potential of modern speech synthesis technology.
It is with great pleasure that we are presenting to the community the second edition of this extraordinary handbook. It has been over 15 years since the publication of the first edition and there have been great changes in the landscape of philosophical logic since then. The first edition has proved invaluable to generations of students and researchers in formal philosophy and language, as well as to consumers of logic in many applied areas. The main logic article in the Encyclopaedia Britannica 1999 has described the first edition as 'the best starting point for exploring any of the topics in logic'. We are confident that the second edition will prove to be just as good! The first edition was the second handbook published for the logic community. It followed the North Holland one-volume Handbook of Mathematical Logic, published in 1977, edited by the late Jon Barwise. The four-volume Handbook of Philosophical Logic, published 1983-1989, came at a fortunate temporal junction in the evolution of logic. This was the time when logic was gaining ground in computer science and artificial intelligence circles. These areas were under increasing commercial pressure to provide devices which help and/or replace the human in his daily activity. This pressure required the use of logic in the modelling of human activity and organisation on the one hand and to provide the theoretical basis for the computer program constructs on the other.
Intensional logic has emerged, since the 1960s, as a powerful theoretical and practical tool in such diverse disciplines as computer science, artificial intelligence, linguistics, philosophy and even the foundations of mathematics. The present volume is a collection of carefully chosen papers, giving the reader a taste of the front-line state of research in intensional logics today. Most papers are representative of new ideas and/or new research themes. The collection would benefit the researcher as well as the student. This book is a most welcome addition to our series. The Editors. Table of Contents: Preface / Johan van Benthem and Natasha Alechina, Modal Quantification over Structured Domains / Patrick Blackburn and Wilfried Meyer-Viol, Modal Logic and Model-Theoretic Syntax / Ruy J. G. B. de Queiroz and Dov M. Gabbay, The Functional Interpretation of Modal Necessity / Vladimir V. Rybakov, Logics of Schemes for First-Order Theories and Poly-Modal Propositional Logic / Jerry Seligman, The Logic of Correct Description / Dimiter Vakarelov, Modal Logics of Arrows / Heinrich Wansing, A Full-Circle Theorem for Simple Tense Logic / Michael Zakharyaschev, Canonical Formulas for Modal and Superintuitionistic Logics: A Short Outline / Edward N. Zalta, The Modal Object Calculus and its Interpretation / Name Index / Subject Index. Preface: Intensional logic has many faces. In this preface we identify some prominent ones without aiming at completeness.
This book celebrates the work of Yorick Wilks in the form of a selection of his papers which are intended to reflect the range and depth of his work. The volume accompanies a Festschrift which celebrates his contribution to the fields of Computational Linguistics and Artificial Intelligence. The selected papers reflect Yorick's contribution to both practical and theoretical aspects of automatic language processing.
Web Personalization can be defined as any set of actions that can tailor the Web experience to a particular user or set of users. To achieve effective personalization, organizations must rely on all available data, including the usage and click-stream data (reflecting user behaviour), the site content, the site structure, domain knowledge, as well as user demographics and profiles. In addition, efficient and intelligent techniques are needed to mine this data for actionable knowledge, and to effectively use the discovered knowledge to enhance the users' Web experience. These techniques must address important challenges emanating from the size and the heterogeneous nature of the data itself, as well as the dynamic nature of user interactions with the Web. These challenges include the scalability of the personalization solutions, data integration, and successful integration of techniques from machine learning, information retrieval and filtering, databases, agent architectures, knowledge representation, data mining, text mining, statistics, user modelling and human-computer interaction. The Semantic Web adds one more dimension to this. The workshop will focus on the semantic web approach to personalization and adaptation. The Web has become an integral part of numerous applications in which a user interacts with a service provider, product sellers, governmental organisations, friends and colleagues. Content and services are available at different sources and places. Hence, Web applications need to combine all available knowledge in order to form personalized, user-friendly, and business-optimal services.
In recent years, computer scientists have shown an increasing interest in the structure of biological molecules and the ways they can be manipulated in vitro in order to define theoretical models of computation based on genetic engineering tools. Along the same lines, a parallel interest is growing regarding the process of evolution of living organisms. Much of the current data for genomes is expressed in the form of maps which are now becoming available and permit the study of the evolution of organisms at the scale of the genome for the first time. On the other hand, there is an active trend nowadays throughout the field of computational biology toward abstracted, hierarchical views of biological sequences, which is very much in the spirit of computational linguistics. In recent decades, results and methods in the field of formal language theory that might be applied to the description of biological sequences have been pointed out.
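A small, well-known instance of the formal-language view described above: an RNA stem-loop ("hairpin") can be generated by a context-free grammar S -> aSu | uSa | gSc | cSg | L, where L is an unpaired loop. The recognizer below checks that shape directly; the grammar, the minimum loop length and the example sequence are illustrative assumptions, not taken from the book.

```python
# Toy formal-language view of a biological sequence: recognize RNA
# hairpins generated by S -> aSu | uSa | gSc | cSg | L (L = unpaired
# loop). Pairing rules and parameters are illustrative only.

PAIRS = {("a", "u"), ("u", "a"), ("g", "c"), ("c", "g")}

def is_hairpin(seq, min_loop=3):
    """True if seq is a complementary stem around a loop of >= min_loop bases."""
    stem = 0
    # Grow the stem inward while the outermost remaining bases pair up
    # and enough bases are left for the loop.
    while (2 * (stem + 1) + min_loop <= len(seq)
           and (seq[stem], seq[-(stem + 1)]) in PAIRS):
        stem += 1
    return stem >= 1 and len(seq) - 2 * stem >= min_loop

print(is_hairpin("gcgaaacgc"))  # True: stem g-c, c-g, g-c around loop aaa
```

The self-embedding rule S -> aSu is precisely what pushes such sequences beyond regular languages, which is why context-free (rather than finite-state) machinery is the natural fit here.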
In the fall of 1985 Carnegie Mellon University established a Department of Philosophy. The focus of the department is logic broadly conceived, philosophy of science, in particular of the social sciences, and linguistics. To mark the inauguration of the department, a daylong celebration was held on April 5, 1986. This celebration consisted of two keynote addresses by Patrick Suppes and Thomas Schwartz, seminars directed by members of the department, and a panel discussion on the computational model of mind moderated by Dana S. Scott. The various contributions, in modified and expanded form, are the core of this collection of essays, and they are, I believe, of more than parochial interest: they turn attention to substantive and reflective interdisciplinary work. The collection is divided into three parts. The first part gives perspectives (i) on general features of the interdisciplinary enterprise in philosophy (by Patrick Suppes, Thomas Schwartz, Herbert A. Simon, and Clark Glymour), and (ii) on a particular topic that invites such interaction, namely computational models of the mind (with contributions by Gilbert Harman, John Haugeland, Jay McClelland, and Allen Newell). The second part contains (mostly informal) reports on concrete research done within that enterprise; the research topics range from decision theory and the philosophy of economics through foundational problems in mathematics to issues in aesthetics and computational linguistics. The third part is a postscriptum by Isaac Levi, analyzing directions of (computational) work from his perspective.
Sentiment analysis and opinion mining is the field of study that analyzes people's opinions, sentiments, evaluations, attitudes, and emotions from written language. It is one of the most active research areas in natural language processing and is also widely studied in data mining, Web mining, and text mining. In fact, this research has spread outside of computer science to the management sciences and social sciences due to its importance to business and society as a whole. The growing importance of sentiment analysis coincides with the growth of social media such as reviews, forum discussions, blogs, micro-blogs, Twitter, and social networks. For the first time in human history, we now have a huge volume of opinionated data recorded in digital form for analysis. Sentiment analysis systems are being applied in almost every business and social domain because opinions are central to almost all human activities and are key influencers of our behaviors. Our beliefs and perceptions of reality, and the choices we make, are largely conditioned on how others see and evaluate the world. For this reason, when we need to make a decision we often seek out the opinions of others. This is true not only for individuals but also for organizations. This book is a comprehensive introductory and survey text. It covers all important topics and the latest developments in the field with over 400 references. It is suitable for students, researchers and practitioners who are interested in social media analysis in general and sentiment analysis in particular. Lecturers can readily use it in class for courses on natural language processing, social media analysis, text mining, and data mining. Lecture slides are also available online. 
Table of Contents: Preface / Sentiment Analysis: A Fascinating Problem / The Problem of Sentiment Analysis / Document Sentiment Classification / Sentence Subjectivity and Sentiment Classification / Aspect-Based Sentiment Analysis / Sentiment Lexicon Generation / Opinion Summarization / Analysis of Comparative Opinions / Opinion Search and Retrieval / Opinion Spam Detection / Quality of Reviews / Concluding Remarks / Bibliography / Author Biography
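As a concrete taste of the lexicon-based approach covered under the sentiment-lexicon and document-classification topics above, here is a minimal polarity scorer. The tiny lexicon, the negator set and the flip-the-next-sentiment-word rule are invented for illustration and are far cruder than the techniques the book surveys.

```python
# Minimal lexicon-based sentiment scorer. The lexicon, negators and
# negation rule are invented illustrations, not the book's methods.

LEXICON = {"good": 1, "great": 2, "excellent": 2,
           "bad": -1, "terrible": -2, "boring": -1}
NEGATORS = {"not", "never", "no"}

def sentiment_score(text):
    """Sum lexicon scores; a negator flips the next sentiment-bearing word."""
    score, negate = 0, False
    for token in text.lower().split():
        word = token.strip(".,!?")
        if word in NEGATORS:
            negate = True
        elif word in LEXICON:
            score += -LEXICON[word] if negate else LEXICON[word]
            negate = False
    return score

print(sentiment_score("The plot was not good, but the acting was excellent"))  # 1
```

Even this toy shows why the book devotes whole chapters to the hard parts: negation scope, comparative opinions and aspect-level sentiment all break a simple word-counting scheme.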