This white paper is part of a series that promotes knowledge about language technology and its potential. It addresses educators, journalists, politicians, language communities and others. The availability and use of language technology in Europe varies between languages. Consequently, the actions that are required to further support research and development of language technologies also differ for each language. The required actions depend on many factors, such as the complexity of a given language and the size of its community. META-NET, a Network of Excellence funded by the European Commission, has conducted an analysis of current language resources and technologies. This analysis focused on the 23 official European languages as well as other important national and regional languages in Europe. The results of this analysis suggest that there are many significant research gaps for each language. A more detailed expert analysis and assessment of the current situation will help maximise the impact of additional research and minimise any risks. META-NET consists of 54 research centres from 33 countries that are working with stakeholders from commercial businesses, government agencies, industry, research organisations, software companies, technology providers and European universities. Together, they are creating a common technology vision while developing a strategic research agenda that shows how language technology applications can address any research gaps by 2020.
This book constitutes the refereed proceedings of the 15th and 16th International Conference on Formal Grammar 2010 and 2011, collocated with the European Summer School in Logic, Language and Information in July 2010/2011. The 19 revised full papers were carefully reviewed and selected from a total of 50 submissions. The papers deal with the following topics: formal and computational phonology, morphology, syntax, semantics and pragmatics; model-theoretic and proof-theoretic methods in linguistics; logical aspects of linguistic structure; constraint-based and resource-sensitive approaches to grammar; learnability of formal grammar; integration of stochastic and symbolic models of grammar; foundational, methodological and architectural issues in grammar; mathematical foundations of statistical approaches to linguistic analysis.
This manual contains an up-to-date description of the existing anthologies (with a linguistic focus) and corpora that have so far been compiled for the different Romance languages. This description takes into account both the standard languages and a selection of well-attested diatopic and diastratic varieties as well as Romance-based Creoles. Representative texts and detailed commentaries are provided for all the languages and varieties discussed.
This book constitutes the refereed proceedings of the 4th Language and Technology Conference: Challenges for Computer Science and Linguistics, LTC 2009, held in Poznan, Poland, in November 2009. The 52 revised and in many cases substantially extended papers presented in this volume were carefully reviewed and selected from 103 submissions. The contributions are organized in topical sections on speech processing, computational morphology/lexicography, parsing, computational semantics, dialogue modeling and processing, digital language resources, WordNet, document processing, information processing, and machine translation.
This two-volume set, consisting of LNCS 6608 and LNCS 6609, constitutes the thoroughly refereed proceedings of the 12th International Conference on Computational Linguistics and Intelligent Text Processing, held in Tokyo, Japan, in February 2011. The 74 full papers, presented together with 4 invited papers, were carefully reviewed and selected from 298 submissions. The contents have been ordered according to the following topical sections: lexical resources; syntax and parsing; part-of-speech tagging and morphology; word sense disambiguation; semantics and discourse; opinion mining and sentiment detection; text generation; machine translation and multilingualism; information extraction and information retrieval; text categorization and classification; summarization and recognizing textual entailment; authoring aid, error correction, and style analysis; and speech recognition and generation.
Geometric Data Analysis (GDA) is the name suggested by P. Suppes (Stanford University) to designate the approach to Multivariate Statistics initiated by Benzecri as Correspondence Analysis, an approach that has become more and more widely used and appreciated over the years. This book presents the full formalization of GDA in terms of linear algebra - the most original and far-reaching feature of the approach - and also shows how to integrate standard statistical tools such as Analysis of Variance, including Bayesian methods. Chapter 9, Research Case Studies, is nearly a book in itself; it presents the methodology in action on three extensive applications, one from medicine, one from political science, and one from education (data borrowed from the Stanford computer-based Educational Program for Gifted Youth). The readership of the book thus comprises both mathematicians interested in the applications of mathematics and researchers wishing to master an exceptionally powerful approach to statistical data analysis.
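As a concrete, if simplified, illustration of the kind of linear-algebra formalization the book describes, the sketch below computes a simple correspondence analysis of a small contingency table via the singular value decomposition. The data and names are invented; this is only a minimal sketch of one GDA building block, not the book's own formalization.

```python
# Minimal sketch of simple correspondence analysis (CA), one building block of
# Geometric Data Analysis. Illustrative only; invented toy data.
import numpy as np

def correspondence_analysis(table):
    """Return row and column principal coordinates for a contingency table."""
    N = np.asarray(table, dtype=float)
    P = N / N.sum()                      # correspondence matrix
    r = P.sum(axis=1)                    # row masses
    c = P.sum(axis=0)                    # column masses
    # Matrix of standardized residuals.
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    # Principal coordinates: scale singular vectors by masses and singular values.
    rows = (U / np.sqrt(r)[:, None]) * sv
    cols = (Vt.T / np.sqrt(c)[:, None]) * sv
    return rows, cols, sv**2             # coordinates and principal inertias

# Toy contingency table: 3 respondent groups x 4 answer categories.
rows, cols, inertias = correspondence_analysis(
    [[20, 5, 10, 3], [8, 15, 6, 12], [4, 9, 18, 7]])
print(rows[:, :2])  # first two principal axes for the row profiles
```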
Entropy Guided Transformation Learning: Algorithms and Applications presents ETL, a machine learning algorithm for classification tasks. ETL generalizes Transformation Based Learning (TBL) by solving the TBL bottleneck: the construction of good template sets. ETL automatically generates templates using Decision Tree decomposition. The authors also describe ETL Committee, an ensemble method that uses ETL as the base learner. Experimental results show that ETL Committee improves the effectiveness of ETL classifiers. The application of ETL to four Natural Language Processing (NLP) tasks is presented: part-of-speech tagging, phrase chunking, named entity recognition and semantic role labeling. Extensive experimental results demonstrate that ETL is an effective way to learn accurate transformation rules, and that it outperforms TBL with handcrafted templates on all four tasks. By avoiding handcrafted templates, ETL extends the use of transformation rules to a greater range of tasks. Suitable for both advanced undergraduate and graduate courses, Entropy Guided Transformation Learning: Algorithms and Applications provides a comprehensive introduction to ETL and its NLP applications.
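For readers unfamiliar with transformation-based learning, the toy sketch below illustrates the core TBL loop that ETL builds on: instantiate candidate rules from a template, score each by the errors it removes, and greedily apply the best one. The data, template and rule format are invented for illustration; in the book, ETL's templates are derived automatically from a decision tree rather than written by hand.

```python
# Toy illustration of the transformation-based learning (TBL) loop that ETL
# builds on: generate candidate rules from a template, score each by net error
# reduction, and greedily apply the best one. Hypothetical data and template;
# ETL itself derives its templates automatically from decision trees.
from itertools import product

tokens = ["the", "can", "rusts", "the", "can", "opens"]
gold   = ["DET", "NOUN", "VERB", "DET", "NOUN", "VERB"]
tags   = ["DET", "VERB", "VERB", "DET", "VERB", "VERB"]   # initial (baseline) tags

def apply_rule(tags, rule):
    frm, to, prev = rule          # "change frm -> to when the previous tag is prev"
    out = list(tags)
    for i in range(1, len(tags)):
        if tags[i] == frm and tags[i - 1] == prev:
            out[i] = to
    return out

def errors(tags):
    return sum(t != g for t, g in zip(tags, gold))

# Template: (from_tag, to_tag, previous_tag) over the tags seen in the data.
tagset = sorted(set(tags) | set(gold))
candidates = [(f, t, p) for f, t, p in product(tagset, repeat=3) if f != t]

best = min(candidates, key=lambda r: errors(apply_rule(tags, r)))
print("best rule:", best, "errors:", errors(tags), "->", errors(apply_rule(tags, best)))
```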
The subject of Time has a wide intellectual appeal across different disciplines. This is shown in the variety of reactions received from readers of the first edition of the present book. Many have reacted to issues raised in its philosophical discussions, while some have even solved a number of the open technical questions raised in the logical elaboration of the latter. These results will be recorded below, at a more convenient place. In the seven years since the first publication, there have been some noticeable newer developments in the logical study of Time and temporal expressions. As far as Temporal Logic proper is concerned, it seems fair to say that these amount to an increase in coverage and sophistication, rather than further breakthrough innovation. In fact, perhaps the most significant sources of new activity have been the applied areas of Linguistics and Computer Science (including Artificial Intelligence), where many intriguing new ideas have appeared presenting further challenges to temporal logic. Now, since this book has a rather tight composition, it would have been difficult to interpolate this new material without endangering intelligibility.
This volume contains a selection of papers presented at a Seminar on Intensional Logic held at the University of Amsterdam during the period September 1990-May 1991. Modal logic, either as a topic or as a tool, is common to most of the papers in this volume. A number of the papers are concerned with what may be called well-known or traditional modal systems, but, as a quick glance through this volume will reveal, this by no means implies that they walk the beaten tracks. Indeed, such contributions display new directions, new results, and new techniques to obtain familiar results. Other papers in this volume are representative examples of a current trend in modal logic: the study of extensions or adaptations of the standard systems that have been introduced to overcome various shortcomings of the latter, especially their limited expressive power. Finally, there is another major theme that can be discerned in the volume, a theme that may be described by the slogan 'representing changing information.' Papers falling under this heading address long-standing issues in the area, or present a systematic approach, while a critical survey and a report contributing new techniques are also included. The bulk of the papers on pure modal logic deal with theoretical or even foundational aspects of modal systems.
Contents: 1. Introduction (Franciska de Jong and Jan Landsbergen); 2. A compositional definition of the translation relation (Jan Landsbergen); 3. M-grammars (Jan Odijk); 4. The translation process (Jan Landsbergen and Franciska de Jong); 5. The Rosetta characteristics (Lisette Appelo); 6. Morphology (Joep Rous and Harm Smit); 7. Dictionaries (Jan Odijk, Harm Smit and Petra de Wit); 8. Syntactic rules (Jan Odijk); 9. Modular and controlled M-grammars (Lisette Appelo); 10. Compositionality and syntactic generalisations (Jan Odijk); 11. Incorporating theoretical linguistic insights (Jan Odijk and Elena Pinillos Bartolome); 12. Divergences between languages (Lisette Appelo); 13. Categorial divergences (Lisette Appelo); 14. Translation of temporal expressions (Lisette Appelo); 15. Idioms and complex predicates (Andre Schenk); 16. Scope and negation (Lisette Appelo and Elly van Munster); 17. The formal definition of M-grammars (Rene Leermakers and Jan Landsbergen); 18. An attribute grammar view (Rene Leermakers and Joep Rous); 19. An algebraic view (Theo Janssen); 20. Software engineering aspects (Rene Leermakers); 21. Conclusion (Jan Landsbergen). Chapter 1 covers: 1.1 Knowledge needed for translation (1.1.1 Knowledge of language and world knowledge; 1.1.2 Formalisation; 1.1.3 The underestimation of linguistic problems; 1.1.4 The notion of possible translation); 1.2 Applications; 1.3 A linguistic perspective on MT (1.3.1 Scope of the project; 1.3.2 Scope of the book); 1.4 Organisation of the book.
1. Metaphors and Logic. Metaphors are among the most vigorous offspring of the creative mind; but their vitality springs from the fact that they are logical organisms in the ecology of language. I aim to use logical techniques to analyze the meanings of metaphors. My goal here is to show how contemporary formal semantics can be extended to handle metaphorical utterances. What distinguishes this work is that it focuses intensely on the logical aspects of metaphors. I stress the role of logic in the generation and interpretation of metaphors. While I don't presuppose any formal training in logic, some familiarity with philosophical logic (the propositional calculus and the predicate calculus) is helpful. Since my theory makes great use of the notion of structure, I refer to it as the structural theory of metaphor (STM). STM is a semantic theory of metaphor: if STM is correct, then metaphors are cognitively meaningful and are non-trivially logically linked with truth. I aim to extend possible worlds semantics to handle metaphors. I'll argue that some sentences in natural languages like English have multiple meanings: "Juliet is the sun" has (at least) two meanings: the literal meaning "(Juliet is the sun)LIT" and the metaphorical meaning "(Juliet is the sun)MET". Each meaning is a function from (possible) worlds to truth-values. I deny that these functions are identical; I deny that the metaphorical function is necessarily false or necessarily true.
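The blurb's central semantic claim can be made concrete with a small sketch: treat each reading of "Juliet is the sun" as a function from possible worlds to truth values and observe that the literal and metaphorical functions differ. The worlds and predicates below are invented for illustration and are not part of STM's formal apparatus.

```python
# Toy model of the claim that each reading of "Juliet is the sun" is a
# function from possible worlds to truth values, and that the literal and
# metaphorical functions are distinct. Worlds and predicates are invented;
# this is not the book's formal system (STM).
worlds = {
    "w1": {"juliet_is_a_star": False, "juliet_is_radiant_to_romeo": True},
    "w2": {"juliet_is_a_star": False, "juliet_is_radiant_to_romeo": False},
    "w3": {"juliet_is_a_star": True,  "juliet_is_radiant_to_romeo": True},
}

def juliet_is_the_sun_LIT(w):
    # Literal reading: Juliet is literally a sun-like star.
    return worlds[w]["juliet_is_a_star"]

def juliet_is_the_sun_MET(w):
    # Metaphorical reading: Juliet plays a sun-like role for Romeo.
    return worlds[w]["juliet_is_radiant_to_romeo"]

# The two meanings are different functions: neither is necessarily true or
# necessarily false, and they come apart at w1.
print({w: (juliet_is_the_sun_LIT(w), juliet_is_the_sun_MET(w)) for w in worlds})
```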
This volume is a selection of papers presented at a workshop entitled Predicative Forms in Natural Language and in Lexical Knowledge Bases organized in Toulouse in August 1996. A predicate is a named relation that exists among one or more arguments. In natural language, predicates are realized as verbs, prepositions, nouns and adjectives, to cite the most frequent ones. Research on the identification, organization, and semantic representation of predicates in artificial intelligence and in language processing is a very active research field. The emergence of new paradigms in theoretical language processing, the definition of new problems and the important evolution of applications have, in fact, stimulated much interest and debate on the role and nature of predicates in natural language. From a broad theoretical perspective, the notion of predicate is central to research on the syntax-semantics interface, the generative lexicon, the definition of ontology-based semantic representations, and the formation of verb semantic classes. From a computational perspective, the notion of predicate plays a central role in a number of applications including the design of lexical knowledge bases, the development of automatic indexing systems for the extraction of structured semantic representations, and the creation of interlingual forms in machine translation.
Parsing efficiency is crucial when building practical natural language systems. This is especially the case for interactive systems such as natural language database access, interfaces to expert systems and interactive machine translation. Despite its importance, parsing efficiency has received little attention in the area of natural language processing. In the areas of compiler design and theoretical computer science, on the other hand, parsing algorithms have been evaluated primarily in terms of theoretical worst-case analysis (e.g. O(n³)), and very few practical comparisons have been made. This book introduces a context-free parsing algorithm that parses natural language more efficiently than any other existing parsing algorithm in practice. Its feasibility for use in practical systems is being proven in its application to a Japanese language interface at Carnegie Group Inc., and to the continuous speech recognition project at Carnegie-Mellon University. This work was done while I was pursuing a Ph.D. degree at Carnegie-Mellon University. My advisers, Herb Simon and Jaime Carbonell, deserve many thanks for their unfailing support, advice and encouragement during my graduate studies. I would like to thank Phil Hayes and Ralph Grishman for their helpful comments and criticism that in many ways improved the quality of this book. I wish also to thank Steven Brooks for insightful comments on theoretical aspects of the book (chapter 4, appendices A, B and C), and Rich Thomason for improving the linguistic part of the book (the very beginning of section 1.1).
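For context, the O(n³) worst-case bound mentioned above is the cost of standard chart parsing for context-free grammars. The sketch below is a minimal CKY recognizer over an invented toy grammar; it illustrates that cubic baseline only and is not the generalized LR algorithm the book introduces.

```python
# Minimal CKY recognizer for a grammar in Chomsky normal form -- the textbook
# O(n^3) algorithm behind the worst-case bound mentioned in the blurb. This is
# a baseline illustration with an invented toy grammar, not the book's parser.
unary  = {("N", "time"), ("V", "flies"), ("N", "flies"), ("P", "like"),
          ("D", "an"), ("N", "arrow"), ("NP", "time"), ("NP", "flies"), ("NP", "arrow")}
binary = {("S", "NP", "VP"), ("VP", "V", "PP"), ("PP", "P", "NP"), ("NP", "D", "N")}

def cky_recognize(words, start="S"):
    n = len(words)
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] = {a for (a, b) in unary if b == w}
    for span in range(2, n + 1):                 # O(n) span lengths
        for i in range(n - span + 1):            # O(n) start positions
            j = i + span
            for k in range(i + 1, j):            # O(n) split points
                for a, b, c in binary:
                    if b in chart[i][k] and c in chart[k][j]:
                        chart[i][j].add(a)
    return start in chart[0][n]

print(cky_recognize("time flies like an arrow".split()))  # True
```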
This book evolved from the ARCADE evaluation exercise that started in 1995. The project's goal is to evaluate alignment systems for parallel texts, i.e., texts accompanied by their translation. Thirteen teams from various places around the world have participated so far and, for the first time, some ten to fifteen years after the first alignment techniques were designed, the community has been able to get a clear picture of the behaviour of alignment systems. Several chapters in this book describe the details of competing systems, and the last chapter is devoted to the description of the evaluation protocol and results. The remaining chapters were especially commissioned from researchers who have been major figures in the field in recent years, in an attempt to address a wide range of topics that describe the state of the art in parallel text processing and use. As I recalled in the introduction, the Rosetta stone won eternal fame as the prototype of parallel texts, but such texts are probably almost as old as the invention of writing. Nowadays, parallel texts are electronic, and they are becoming an increasingly important resource for building the natural language processing tools needed in the "multilingual information society" that is currently emerging at an incredible speed. Applications are numerous, and they are expanding every day: multilingual lexicography and terminology, machine and human translation, cross-language information retrieval, language learning, etc.
Poland has played an enormous role in the development of mathematical logic. Leading Polish logicians, like Lesniewski, Lukasiewicz and Tarski, produced several works related to philosophical logic, a field covering different topics relevant to the philosophical foundations of logic itself, as well as to various individual sciences. This collection presents contemporary Polish work in philosophical logic which in many respects continues the Polish way of doing philosophical logic. This book will be of interest to logicians, mathematicians, philosophers, and linguists.
Data-Driven Techniques in Speech Synthesis gives a first review of this new field. All areas of speech synthesis from text are covered, including text analysis, letter-to-sound conversion, prosodic marking and extraction of parameters to drive synthesis hardware. Fuelled by cheap computer processing and memory, the fields of machine learning in particular and artificial intelligence in general are increasingly exploiting approaches in which large databases act as implicit knowledge sources, rather than explicit rules manually written by experts. Speech synthesis is one application area where the new approach is proving powerfully effective, the reliance upon fragile specialist knowledge having hindered its development in the past. This book provides the first review of the new topic, with contributions from leading international experts. Data-Driven Techniques in Speech Synthesis is at the leading edge of current research, written by well respected experts in the field. The text is concise and accessible, and guides the reader through the new technology. The book will primarily appeal to research engineers and scientists working in the area of speech synthesis. However, it will also be of interest to speech scientists and phoneticians as well as managers and project leaders in the telecommunications industry who need an appreciation of the capabilities and potential of modern speech synthesis technology.
This series will include monographs and collections of studies devoted to the investigation and exploration of knowledge, information, and data processing systems of all kinds, no matter whether human, (other) animal, or machine. Its scope is intended to span the full range of interests from classical problems in the philosophy of mind and philosophical psychology through issues in cognitive psychology and sociobiology (concerning the mental capabilities of other species) to ideas related to artificial intelligence and computer science. While primary emphasis will be placed upon theoretical, conceptual, and epistemological aspects of these problems and domains, empirical, experimental, and methodological studies will also appear from time to time. The problems posed by metaphor and analogy are among the most challenging that confront the field of knowledge representation. In this study, Eileen Way has drawn upon the combined resources of philosophy, psychology, and computer science in developing a systematic and illuminating theoretical framework for understanding metaphors and analogies. While her work provides solutions to difficult problems of knowledge representation, it goes much further by investigating some of the most important philosophical assumptions that prevail within artificial intelligence today. By exposing the limitations inherent in the assumption that languages are both literal and truth-functional, she has advanced our grasp of the nature of language itself. J.R.F.
This book is based on contributions to the Seventh European Summer School on Language and Speech Communication that was held at KTH in Stockholm, Sweden, in July of 1999 under the auspices of the European Language and Speech Network (ELSNET). The topic of the summer school was "Multimodality in Language and Speech Systems" (MiLaSS). The issue of multimodality in interpersonal, face-to-face communication has been an important research topic for a number of years. With the increasing sophistication of computer-based interactive systems using language and speech, the topic of multimodal interaction has received renewed interest both in terms of human-human interaction and human-machine interaction. Nine lecturers contributed to the summer school with courses on specialized topics ranging from the technology and science of creating talking faces to human-human communication, which is mediated by computer for the handicapped. Eight of the nine lecturers are represented in this book. The summer school attracted more than 60 participants from Europe, Asia and North America, representing not only graduate students but also senior researchers from both academia and industry.
It is with great pleasure that we are presenting to the community the second edition of this extraordinary handbook. It has been over 15 years since the publication of the first edition and there have been great changes in the landscape of philosophical logic since then. The first edition has proved invaluable to generations of students and researchers in formal philosophy and language, as well as to consumers of logic in many applied areas. The main logic article in the Encyclopaedia Britannica 1999 described the first edition as 'the best starting point for exploring any of the topics in logic'. We are confident that the second edition will prove to be just as good! The first edition was the second handbook published for the logic community. It followed the North Holland one-volume Handbook of Mathematical Logic, published in 1977, edited by the late Jon Barwise. The four-volume Handbook of Philosophical Logic, published 1983-1989, came at a fortunate temporal junction in the evolution of logic. This was the time when logic was gaining ground in computer science and artificial intelligence circles. These areas were under increasing commercial pressure to provide devices which help and/or replace the human in his daily activity. This pressure required the use of logic in the modelling of human activity and organisation on the one hand and to provide the theoretical basis for the computer program constructs on the other.
1.1 OBJECTIVES The main objective of this joint work is to bring together some ideas that have played central roles in two disparate theoretical traditions in order to contribute to a better understanding of the relationship between focus and the syntactic and semantic structure of sentences. Within the Prague School tradition and the branch of its contemporary development represented by Hajicova and Sgall (HS in the sequel), topic-focus articulation has long been a central object of study, and it has long been a tenet of Prague school linguistics that topic-focus structure has systematic relevance to meaning. Within the formal semantics tradition represented by Partee (BHP in the sequel), focus has much more recently become an area of concerted investigation, but a number of the semantic phenomena to which focus is relevant have been extensively investigated and given explicit compositional semantic analyses. The emergence of 'tripartite structures' (see Chapter 2) in formal semantics and the partial similarities that can be readily observed between some aspects of tripartite structures and some aspects of Praguian topic-focus articulation have led us to expect that a closer investigation of the similarities and differences in these different theoretical constructs would be a rewarding undertaking with mutual benefits for the further development of our respective theories and potential benefit for the study of semantic effects of focus in other theories as well.
Intensional logic has emerged, since the 1960s, as a powerful theoretical and practical tool in such diverse disciplines as computer science, artificial intelligence, linguistics, philosophy and even the foundations of mathematics. The present volume is a collection of carefully chosen papers, giving the reader a taste of the frontline state of research in intensional logics today. Most papers are representative of new ideas and/or new research themes. The collection would benefit the researcher as well as the student. This book is a most welcome addition to our series. The Editors. Contents: Johan van Benthem and Natasha Alechina, Modal Quantification over Structured Domains; Patrick Blackburn and Wilfried Meyer-Viol, Modal Logic and Model-Theoretic Syntax; Ruy J. G. B. de Queiroz and Dov M. Gabbay, The Functional Interpretation of Modal Necessity; Vladimir V. Rybakov, Logics of Schemes for First-Order Theories and Poly-Modal Propositional Logic; Jerry Seligman, The Logic of Correct Description; Dimiter Vakarelov, Modal Logics of Arrows; Heinrich Wansing, A Full-Circle Theorem for Simple Tense Logic; Michael Zakharyaschev, Canonical Formulas for Modal and Superintuitionistic Logics: A Short Outline; Edward N. Zalta, The Modal Object Calculus and its Interpretation. From the preface: Intensional logic has many faces. In this preface we identify some prominent ones without aiming at completeness.
Locality in WH Quantification argues that Logical Form, the level that mediates between syntax and semantics, is derived from S-structure by local movement. The primary data for the claim of locality at LF is drawn from Hindi but English data is used in discussing the semantics of questions and relative clauses. The book takes a cross-linguistic perspective showing how the Hindi and English facts can be brought to bear on the theory of universal grammar. There are several phenomena generally thought to involve long-distance dependencies at LF, such as scope marking, long-distance list answers and correlatives. In this book they are handled by explicating novel types of local relationships that interrogative and relative clauses can enter. A more articulated semantics is shown leading to a simpler syntax. Among other issues addressed is the switch from uniqueness/maximality effects in single wh constructions to list readings in multiple wh constructions. These effects are captured by adapting the treatment of wh expressions as quantifying over functions to the cases of multiple wh questions and correlatives. List readings due to functional dependencies are systematically distinguished from those that are based on plurality.
Parsing technology is concerned with finding syntactic structure in language. In parsing we have to deal with incomplete and not necessarily accurate formal descriptions of natural languages. Robustness and efficiency are among the main issues in parsing. Corpora can be used to obtain frequency information about language use. This allows probabilistic parsing, an approach that aims at increasing both robustness and efficiency. Approximation techniques, to be applied at the level of language description, parsing strategy, and syntactic representation, have the same objective. Approximation at the level of syntactic representation is also known as underspecification, a traditional technique to deal with syntactic ambiguity. This book collects new parsing technologies that attack the problems of robustness and efficiency by exactly these techniques: the design of probabilistic grammars and efficient probabilistic parsing algorithms, approximation techniques applied to grammars and parsers to increase parsing efficiency, and techniques for underspecification and the integration of semantic information in the syntactic analysis to deal with massive ambiguity. The book gives a state-of-the-art overview of current research and development in parsing technologies. In its chapters we see how probabilistic methods have entered the toolbox of computational linguistics in order to be applied in both parsing theory and parsing practice. The book is both a unique reference for researchers and an introduction to the field for interested graduate students.
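A minimal sketch of the probabilistic idea: a probabilistic context-free grammar assigns each parse tree a probability, the product of its rule probabilities, so competing analyses of an ambiguous sentence can be ranked rather than enumerated blindly. The grammar and probabilities below are invented for illustration and are not taken from the book.

```python
# Minimal sketch of how a probabilistic grammar ranks competing analyses:
# a parse tree's probability is the product of its rule probabilities.
# Toy PCFG with invented probabilities, illustrative only.
from math import prod

rule_prob = {
    ("VP", ("V", "NP")):        0.5,
    ("VP", ("V", "NP", "PP")):  0.2,   # PP attaches to the verb
    ("NP", ("NP", "PP")):       0.2,   # PP attaches to the noun phrase
    ("NP", ("D", "N")):         0.6,
    ("PP", ("P", "NP")):        1.0,
}

def tree_prob(tree):
    """tree = (label, children...) with strings as leaves."""
    label, *children = tree
    if all(isinstance(c, str) for c in children):      # preterminal; lexical prob omitted
        return 1.0
    rhs = tuple(c[0] for c in children)
    return rule_prob[(label, rhs)] * prod(tree_prob(c) for c in children)

# "saw the man with the telescope": verb attachment vs. noun attachment.
np_man = ("NP", ("D", "the"), ("N", "man"))
np_tel = ("NP", ("D", "the"), ("N", "telescope"))
pp     = ("PP", ("P", "with"), np_tel)
verb_attach = ("VP", ("V", "saw"), np_man, pp)
noun_attach = ("VP", ("V", "saw"), ("NP", np_man, pp))
print(tree_prob(verb_attach), tree_prob(noun_attach))   # 0.072 vs. 0.036
```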
In recent years, computer scientists have shown an increasing interest in the structure of biological molecules and in the ways they can be manipulated in vitro in order to define theoretical models of computation based on genetic engineering tools. Along the same lines, a parallel interest is growing in the process of evolution of living organisms. Much of the current data for genomes is expressed in the form of maps, which are now becoming available and permit the study of the evolution of organisms at the scale of the whole genome for the first time. On the other hand, there is an active trend nowadays throughout the field of computational biology toward abstracted, hierarchical views of biological sequences, which is very much in the spirit of computational linguistics. Over the last decades, results and methods in the field of formal language theory that might be applied to the description of biological sequences have been pointed out.
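A classic example of this formal-language view of biological sequences, sketched below with invented helper names: the inverted repeats that form the stems of RNA hairpins are generated by a small context-free grammar (S -> a S t | t S a | c S g | g S c | loop), a dependency pattern that no regular language captures. The sketch is illustrative only and is not drawn from the book.

```python
# Classic illustration of formal language theory applied to biological
# sequences: inverted repeats (the stems of hairpins) are generated by the
# context-free grammar S -> a S t | t S a | c S g | g S c | loop.
# Toy sketch with invented helper names, not taken from the book.
import random

PAIRS = [("a", "t"), ("t", "a"), ("c", "g"), ("g", "c")]

def generate_stem_loop(stem_len, loop="gaaa"):
    """Derive a string from the CFG: stem_len paired bases around a fixed loop."""
    left, right = "", ""
    for _ in range(stem_len):
        x, y = random.choice(PAIRS)          # apply one rule S -> x S y
        left, right = left + x, y + right
    return left + loop + right               # terminate with S -> loop

def is_stem_loop(seq, stem_len, loop="gaaa"):
    """Check membership: the outer stem_len bases must pair up around the loop."""
    if len(seq) != 2 * stem_len + len(loop):
        return False
    if seq[stem_len:stem_len + len(loop)] != loop:
        return False
    return all((seq[i], seq[-1 - i]) in PAIRS for i in range(stem_len))

s = generate_stem_loop(5)
print(s, is_stem_loop(s, 5))   # e.g. "catgc" + "gaaa" + "gcatg", True
```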