This book presents the state of the art in the areas of ontology evolution and knowledge-driven multimedia information extraction, placing an emphasis on how the two can be combined to bridge the semantic gap. This was also the goal of the EC-sponsored BOEMIE (Bootstrapping Ontology Evolution with Multimedia Information Extraction) project, to which the authors of this book have all contributed. The book addresses researchers and practitioners in the field of computer science and more specifically in knowledge representation and management, ontology evolution, and information extraction from multimedia data. It may also constitute an excellent guide to students attending courses within a computer science study program, addressing information processing and extraction from any type of media (text, images, and video). Among other things, the book gives concrete examples of how several of the methods discussed can be applied to athletics (track and field) events.
The use of literature in second language teaching has been advocated for a number of years, yet despite this there have only been a limited number of studies which have sought to investigate its effects. Fewer still have focused on its potential effects as a model of spoken language or as a vehicle to develop speaking skills. Drawing upon multiple research studies, this volume fills that gap to explore how literature is used to develop speaking skills in second language learners. The volume is divided into two sections: literature and spoken language and literature and speaking skills. The first section focuses on studies exploring the use of literature to raise awareness of spoken language features, whilst the second investigates its potential as a vehicle to develop speaking skills. Each section contains studies with different designs and in various contexts including China, Japan and the UK. The research designs used mean that the chapters contain clear implications for classroom pedagogy and research in different contexts.
A major part of natural language processing now depends on the use of text data to build linguistic analyzers. We consider statistical, computational approaches to modeling linguistic structure. We seek to unify across many approaches and many kinds of linguistic structures. Assuming a basic understanding of natural language processing and/or machine learning, we seek to bridge the gap between the two fields. Approaches to decoding (i.e., carrying out linguistic structure prediction) and supervised and unsupervised learning of models that predict discrete structures as outputs are the focus. We also survey natural language processing problems to which these methods are being applied, and we address related topics in probabilistic inference, optimization, and experimental methodology. Table of Contents: Representations and Linguistic Data / Decoding: Making Predictions / Learning Structure from Annotated Data / Learning Structure from Incomplete Data / Beyond Decoding: Inference
This book constitutes the refereed proceedings of the 4th Language and Technology Conference: Challenges for Computer Science and Linguistics, LTC 2009, held in Poznan, Poland, in November 2009. The 52 revised and in many cases substantially extended papers presented in this volume were carefully reviewed and selected from 103 submissions. The contributions are organized in topical sections on speech processing, computational morphology/lexicography, parsing, computational semantics, dialogue modeling and processing, digital language resources, WordNet, document processing, information processing, and machine translation.
This two-volume set, consisting of LNCS 6608 and LNCS 6609, constitutes the thoroughly refereed proceedings of the 12th International Conference on Computer Linguistics and Intelligent Processing, held in Tokyo, Japan, in February 2011. The 74 full papers, presented together with 4 invited papers, were carefully reviewed and selected from 298 submissions. The contents have been ordered according to the following topical sections: lexical resources; syntax and parsing; part-of-speech tagging and morphology; word sense disambiguation; semantics and discourse; opinion mining and sentiment detection; text generation; machine translation and multilingualism; information extraction and information retrieval; text categorization and classification; summarization and recognizing textual entailment; authoring aid, error correction, and style analysis; and speech recognition and generation.
Geometric Data Analysis (GDA) is the name suggested by P. Suppes (Stanford University) to designate the approach to Multivariate Statistics initiated by Benzecri as Correspondence Analysis, an approach that has become more and more used and appreciated over the years. This book presents the full formalization of GDA in terms of linear algebra - the most original and far-reaching consequential feature of the approach - and shows also how to integrate the standard statistical tools such as Analysis of Variance, including Bayesian methods. Chapter 9, Research Case Studies, is nearly a book in itself; it presents the methodology in action on three extensive applications, one from medicine, one from political science, and one from education (data borrowed from the Stanford computer-based Educational Program for Gifted Youth). Thus the readership of the book comprises both mathematicians interested in the applications of mathematics and researchers wishing to master an exceptionally powerful approach to statistical data analysis.
This volume contains a selection of papers presented at a Seminar on Intensional Logic held at the University of Amsterdam during the period September 1990-May 1991. Modal logic, either as a topic or as a tool, is common to most of the papers in this volume. A number of the papers are concerned with what may be called well-known or traditional modal systems, but, as a quick glance through this volume will reveal, this by no means implies that they walk the beaten tracks. Indeed, such contributions display new directions, new results, and new techniques to obtain familiar results. Other papers in this volume are representative examples of a current trend in modal logic: the study of extensions or adaptations of the standard systems that have been introduced to overcome various shortcomings of the latter, especially their limited expressive power. Finally, there is another major theme that can be discerned in the volume, a theme that may be described by the slogan 'representing changing information.' Papers falling under this heading address long-standing issues in the area, or present a systematic approach, while a critical survey and a report contributing new techniques are also included. The bulk of the papers on pure modal logic deal with theoretical or even foundational aspects of modal systems.
The subject of Time has a wide intellectual appeal across different disciplines. This has shown in the variety of reactions received from readers of the first edition of the present book. Many have reacted to issues raised in its philosophical discussions, while some have even solved a number of the open technical questions raised in the logical elaboration of the latter. These results will be recorded below, at a more convenient place. In the seven years after the first publication, there have been some noticeable newer developments in the logical study of Time and temporal expressions. As far as Temporal Logic proper is concerned, it seems fair to say that these amount to an increase in coverage and sophistication, rather than further break-through innovation. In fact, perhaps the most significant sources of new activity have been the applied areas of Linguistics and Computer Science (including Artificial Intelligence), where many intriguing new ideas have appeared presenting further challenges to temporal logic. Now, since this book has a rather tight composition, it would have been difficult to interpolate this new material without endangering intelligibility.
Contents: 1. Introduction (Franciska de Jong and Jan Landsbergen); 2. A compositional definition of the translation relation (Jan Landsbergen); 3. M-grammars (Jan Odijk); 4. The translation process (Jan Landsbergen and Franciska de Jong); 5. The Rosetta characteristics (Lisette Appelo); 6. Morphology (Joep Rous and Harm Smit); 7. Dictionaries (Jan Odijk, Harm Smit and Petra de Wit); 8. Syntactic rules (Jan Odijk); 9. Modular and controlled M-grammars (Lisette Appelo); 10. Compositionality and syntactic generalisations (Jan Odijk); 11. Incorporating theoretical linguistic insights (Jan Odijk and Elena Pinillos Bartolome); 12. Divergences between languages (Lisette Appelo); 13. Categorial divergences (Lisette Appelo); 14. Translation of temporal expressions (Lisette Appelo); 15. Idioms and complex predicates (Andre Schenk); 16. Scope and negation (Lisette Appelo and Elly van Munster); 17. The formal definition of M-grammars (Rene Leermakers and Jan Landsbergen); 18. An attribute grammar view (Rene Leermakers and Joep Rous); 19. An algebraic view (Theo Janssen); 20. Software engineering aspects (Rene Leermakers); 21. Conclusion (Jan Landsbergen). Chapter 1, Introduction, covers: 1.1 Knowledge needed for translation (1.1.1 Knowledge of language and world knowledge; 1.1.2 Formalisation; 1.1.3 The underestimation of linguistic problems; 1.1.4 The notion of possible translation); 1.2 Applications; 1.3 A linguistic perspective on MT (1.3.1 Scope of the project; 1.3.2 Scope of the book); 1.4 Organisation of the book.
In this book we address robustness issues at the speech recognition and natural language parsing levels, with a focus on feature extraction and noise robust recognition, adaptive systems, language modeling, parsing, and natural language understanding. This book attempts to give a clear overview of the main technologies used in language and speech processing, along with an extensive bibliography to enable topics of interest to be pursued further. It also brings together speech and language technologies often considered separately. Robustness in Language and Speech Technology serves as a valuable reference and although not intended as a formal university textbook, contains some material that can be used for a course at the graduate or undergraduate level.
Computational Models of Mixed-Initiative Interaction brings together research that spans several disciplines related to artificial intelligence, including natural language processing, information retrieval, machine learning, planning, and computer-aided instruction, to account for the role that mixed initiative plays in the design of intelligent systems. The ten contributions address the single issue of how control of an interaction should be managed when abilities needed to solve a problem are distributed among collaborating agents. Managing control of an interaction among humans and computers to gather and assemble knowledge and expertise is a major challenge that must be met to develop machines that effectively collaborate with humans. This is the first collection to specifically address this issue.
1. Metaphors and Logic. Metaphors are among the most vigorous offspring of the creative mind; but their vitality springs from the fact that they are logical organisms in the ecology of language. I aim to use logical techniques to analyze the meanings of metaphors. My goal here is to show how contemporary formal semantics can be extended to handle metaphorical utterances. What distinguishes this work is that it focuses intensely on the logical aspects of metaphors. I stress the role of logic in the generation and interpretation of metaphors. While I don't presuppose any formal training in logic, some familiarity with philosophical logic (the propositional calculus and the predicate calculus) is helpful. Since my theory makes great use of the notion of structure, I refer to it as the structural theory of metaphor (STM). STM is a semantic theory of metaphor: if STM is correct, then metaphors are cognitively meaningful and are non-trivially logically linked with truth. I aim to extend possible worlds semantics to handle metaphors. I'll argue that some sentences in natural languages like English have multiple meanings: "Juliet is the sun" has (at least) two meanings: the literal meaning "(Juliet is the sun)LIT" and the metaphorical meaning "(Juliet is the sun)MET". Each meaning is a function from (possible) worlds to truth-values. I deny that these functions are identical; I deny that the metaphorical function is necessarily false or necessarily true.
This volume is a selection of papers presented at a workshop entitled Predicative Forms in Natural Language and in Lexical Knowledge Bases organized in Toulouse in August 1996. A predicate is a named relation that exists among one or more arguments. In natural language, predicates are realized as verbs, prepositions, nouns and adjectives, to cite the most frequent ones. Research on the identification, organization, and semantic representation of predicates in artificial intelligence and in language processing is a very active research field. The emergence of new paradigms in theoretical language processing, the definition of new problems and the important evolution of applications have, in fact, stimulated much interest and debate on the role and nature of predicates in natural language. From a broad theoretical perspective, the notion of predicate is central to research on the syntax-semantics interface, the generative lexicon, the definition of ontology-based semantic representations, and the formation of verb semantic classes. From a computational perspective, the notion of predicate plays a central role in a number of applications including the design of lexical knowledge bases, the development of automatic indexing systems for the extraction of structured semantic representations, and the creation of interlingual forms in machine translation.
Parsing efficiency is crucial when building practical natural language systems. This is especially the case for interactive systems such as natural language database access, interfaces to expert systems and interactive machine translation. Despite its importance, parsing efficiency has received little attention in the area of natural language processing. In the areas of compiler design and theoretical computer science, on the other hand, parsing algorithms have been evaluated primarily in terms of theoretical worst-case analysis (e.g., O(n^3)), and very few practical comparisons have been made. This book introduces a context-free parsing algorithm that parses natural language more efficiently than any other existing parsing algorithm in practice. Its feasibility for use in practical systems is being proven in its application to a Japanese language interface at Carnegie Group Inc., and to the continuous speech recognition project at Carnegie-Mellon University. This work was done while I was pursuing a Ph.D. degree at Carnegie-Mellon University. My advisers, Herb Simon and Jaime Carbonell, deserve many thanks for their unfailing support, advice and encouragement during my graduate studies. I would like to thank Phil Hayes and Ralph Grishman for their helpful comments and criticism that in many ways improved the quality of this book. I wish also to thank Steven Brooks for insightful comments on theoretical aspects of the book (chapter 4, appendices A, B and C), and Rich Thomason for improving the linguistic part of the book (the very beginning of section 1.1).
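To make the O(n^3) worst case mentioned in the blurb concrete, here is a CKY recognizer for a toy grammar in Chomsky normal form; the grammar and sentence are invented for illustration, and this is a textbook baseline rather than the book's own (generalized LR) algorithm:

```python
# CKY recognition: fill a triangular chart of nonterminals over all
# spans; the triple loop over (span, start, split) gives O(n^3) time.

def cky_recognize(words, lexical, binary, start="S"):
    """Return True if `words` is derivable from `start` (CNF grammar)."""
    n = len(words)
    # chart[i][j] = set of nonterminals that derive words[i:j]
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] = {a for a, b in lexical if b == w}
    for span in range(2, n + 1):           # span widths 2..n
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):      # every split point
                for a, (b, c) in binary:   # rules A -> B C
                    if b in chart[i][k] and c in chart[k][j]:
                        chart[i][j].add(a)
    return start in chart[0][n]

lexical = [("DT", "the"), ("NN", "dog"), ("VP", "barks")]
binary = [("S", ("NP", "VP")), ("NP", ("DT", "NN"))]
print(cky_recognize(["the", "dog", "barks"], lexical, binary))  # True
print(cky_recognize(["dog", "the", "barks"], lexical, binary))  # False
```

Tabular algorithms like this guarantee the cubic bound; the book's point is that worst-case bounds say little about which parser is fastest on real sentences.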
Locality in WH Quantification argues that Logical Form, the level that mediates between syntax and semantics, is derived from S-structure by local movement. The primary data for the claim of locality at LF is drawn from Hindi but English data is used in discussing the semantics of questions and relative clauses. The book takes a cross-linguistic perspective showing how the Hindi and English facts can be brought to bear on the theory of universal grammar. There are several phenomena generally thought to involve long-distance dependencies at LF, such as scope marking, long-distance list answers and correlatives. In this book they are handled by explicating novel types of local relationships that interrogative and relative clauses can enter. A more articulated semantics is shown leading to a simpler syntax. Among other issues addressed is the switch from uniqueness/maximality effects in single wh constructions to list readings in multiple wh constructions. These effects are captured by adapting the treatment of wh expressions as quantifying over functions to the cases of multiple wh questions and correlatives. List readings due to functional dependencies are systematically distinguished from those that are based on plurality.
This book evolved from the ARCADE evaluation exercise that started in 1995. The project's goal is to evaluate alignment systems for parallel texts, i.e., texts accompanied by their translation. Thirteen teams from various places around the world have participated so far and for the first time, some ten to fifteen years after the first alignment techniques were designed, the community has been able to get a clear picture of the behaviour of alignment systems. Several chapters in this book describe the details of competing systems, and the last chapter is devoted to the description of the evaluation protocol and results. The remaining chapters were especially commissioned from researchers who have been major figures in the field in recent years, in an attempt to address a wide range of topics that describe the state of the art in parallel text processing and use. As I recalled in the introduction, the Rosetta stone won eternal fame as the prototype of parallel texts, but such texts are probably almost as old as the invention of writing. Nowadays, parallel texts are electronic, and they are becoming an increasingly important resource for building the natural language processing tools needed in the "multilingual information society" that is currently emerging at an incredible speed. Applications are numerous, and they are expanding every day: multilingual lexicography and terminology, machine and human translation, cross-language information retrieval, language learning, etc.
One of the aims of Natural Language Processing is to facilitate the use of computers by allowing their users to communicate in natural language. There are two important aspects to person-machine communication: understanding and generating. While natural language understanding has been a major focus of research, natural language generation is a relatively new and increasingly active field of research. This book presents an overview of the state of the art in natural language generation, describing both new results and directions for new research. The principal emphasis of natural language generation is not only to facilitate the use of computers but also to develop a computational theory of human language ability. In doing so, it is a tool for extending, clarifying and verifying theories that have been put forth in linguistics, psychology and sociology about how people communicate. A natural language generator will typically have access to a large body of knowledge from which to select information to present to users, as well as numerous ways of expressing it. Generating a text can thus be seen as a problem of decision-making under multiple constraints: constraints from the propositional knowledge at hand, from the linguistic tools available, from the communicative goals and intentions to be achieved, from the audience the text is aimed at and from the situation and past discourse. Researchers in generation try to identify the factors involved in this process and determine how best to represent the factors and their dependencies.
Poland has played an enormous role in the development of mathematical logic. Leading Polish logicians, like Lesniewski, Lukasiewicz and Tarski, produced several works related to philosophical logic, a field covering different topics relevant to the philosophical foundations of logic itself, as well as various individual sciences. This collection presents contemporary Polish work in philosophical logic which in many respects continues the Polish way of doing philosophical logic. This book will be of interest to logicians, mathematicians, philosophers, and linguists.
Intensional logic has emerged, since the 1960s, as a powerful theoretical and practical tool in such diverse disciplines as computer science, artificial intelligence, linguistics, philosophy and even the foundations of mathematics. The present volume is a collection of carefully chosen papers, giving the reader a taste of the frontline state of research in intensional logics today. Most papers are representative of new ideas and/or new research themes. The collection would benefit the researcher as well as the student. This book is a most welcome addition to our series. The Editors. Contents: Johan van Benthem and Natasha Alechina, Modal Quantification over Structured Domains; Patrick Blackburn and Wilfried Meyer-Viol, Modal Logic and Model-Theoretic Syntax; Ruy J. G. B. de Queiroz and Dov M. Gabbay, The Functional Interpretation of Modal Necessity; Vladimir V. Rybakov, Logics of Schemes for First-Order Theories and Poly-Modal Propositional Logic; Jerry Seligman, The Logic of Correct Description; Dimiter Vakarelov, Modal Logics of Arrows; Heinrich Wansing, A Full-Circle Theorem for Simple Tense Logic; Michael Zakharyaschev, Canonical Formulas for Modal and Superintuitionistic Logics: A Short Outline; Edward N. Zalta, The Modal Object Calculus and its Interpretation. From the preface: "Intensional logic has many faces. In this preface we identify some prominent ones without aiming at completeness."
Data-Driven Techniques in Speech Synthesis gives a first review of this new field. All areas of speech synthesis from text are covered, including text analysis, letter-to-sound conversion, prosodic marking and extraction of parameters to drive synthesis hardware. Fuelled by cheap computer processing and memory, the fields of machine learning in particular and artificial intelligence in general are increasingly exploiting approaches in which large databases act as implicit knowledge sources, rather than explicit rules manually written by experts. Speech synthesis is one application area where the new approach is proving powerfully effective, the reliance upon fragile specialist knowledge having hindered its development in the past. This book provides the first review of the new topic, with contributions from leading international experts. Data-Driven Techniques in Speech Synthesis is at the leading edge of current research, written by well respected experts in the field. The text is concise and accessible, and guides the reader through the new technology. The book will primarily appeal to research engineers and scientists working in the area of speech synthesis. However, it will also be of interest to speech scientists and phoneticians as well as managers and project leaders in the telecommunications industry who need an appreciation of the capabilities and potential of modern speech synthesis technology.
This series will include monographs and collections of studies devoted to the investigation and exploration of knowledge, information, and data processing systems of all kinds, no matter whether human, (other) animal, or machine. Its scope is intended to span the full range of interests from classical problems in the philosophy of mind and philosophical psychology through issues in cognitive psychology and sociobiology (concerning the mental capabilities of other species) to ideas related to artificial intelligence and computer science. While primary emphasis will be placed upon theoretical, conceptual, and epistemological aspects of these problems and domains, empirical, experimental, and methodological studies will also appear from time to time. The problems posed by metaphor and analogy are among the most challenging that confront the field of knowledge representation. In this study, Eileen Way has drawn upon the combined resources of philosophy, psychology, and computer science in developing a systematic and illuminating theoretical framework for understanding metaphors and analogies. While her work provides solutions to difficult problems of knowledge representation, it goes much further by investigating some of the most important philosophical assumptions that prevail within artificial intelligence today. By exposing the limitations inherent in the assumption that languages are both literal and truth-functional, she has advanced our grasp of the nature of language itself. J.R.F.
Corpus-based methods will be found at the heart of many language and speech processing systems. This book provides an in-depth introduction to these technologies through chapters describing basic statistical modeling techniques for language and speech, the use of Hidden Markov Models in continuous speech recognition, the development of dialogue systems, part-of-speech tagging and partial parsing, data-oriented parsing and n-gram language modeling. The book attempts to give both a clear overview of the main technologies used in language and speech processing, along with sufficient mathematics to understand the underlying principles. There is also an extensive bibliography to enable topics of interest to be pursued further. Overall, we believe that the book will give newcomers a solid introduction to the field and it will give existing practitioners a concise review of the principal technologies used in state-of-the-art language and speech processing systems. Corpus-Based Methods in Language and Speech Processing is an initiative of ELSNET, the European Network in Language and Speech. In its activities, ELSNET attaches great importance to the integration of language and speech, both in research and in education. The need for and the potential of this integration are well demonstrated by this publication.
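As a minimal sketch of the n-gram language modeling mentioned in the blurb above, here is a bigram model with maximum-likelihood estimates over a toy two-sentence corpus; the corpus is invented, and real systems add smoothing (e.g. back-off or Kneser-Ney) so unseen bigrams do not get probability zero:

```python
# Bigram language model with maximum-likelihood estimates:
# P(w | w_prev) = count(w_prev, w) / count(w_prev).

from collections import Counter

corpus = [["<s>", "the", "dog", "barks", "</s>"],
          ["<s>", "the", "cat", "sleeps", "</s>"]]

# Count history tokens (everything that can precede a word) and bigrams.
unigrams = Counter(w for sent in corpus for w in sent[:-1])
bigrams = Counter(p for sent in corpus for p in zip(sent, sent[1:]))

def bigram_prob(w_prev, w):
    """MLE of P(w | w_prev); 0.0 for unseen histories or bigrams."""
    if unigrams[w_prev] == 0:
        return 0.0
    return bigrams[(w_prev, w)] / unigrams[w_prev]

def sentence_prob(sent):
    """Probability of a sentence as a product of bigram probabilities."""
    p = 1.0
    for w_prev, w in zip(sent, sent[1:]):
        p *= bigram_prob(w_prev, w)
    return p

print(sentence_prob(["<s>", "the", "dog", "barks", "</s>"]))  # 0.5
```

Here P(the | &lt;s&gt;) = 1 and P(dog | the) = 1/2, so the first training sentence scores 0.5; the chapters the blurb lists build exactly this kind of corpus-derived estimate into recognition and parsing systems.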
Parsing technology is concerned with finding syntactic structure in language. In parsing we have to deal with incomplete and not necessarily accurate formal descriptions of natural languages. Robustness and efficiency are among the main issues in parsing. Corpora can be used to obtain frequency information about language use. This allows probabilistic parsing, an approach that aims at both robustness and increased efficiency. Approximation techniques, to be applied at the level of language description, parsing strategy, and syntactic representation, have the same objective. Approximation at the level of syntactic representation is also known as underspecification, a traditional technique to deal with syntactic ambiguity. In this book new parsing technologies are collected that aim at attacking the problems of robustness and efficiency by exactly these techniques: the design of probabilistic grammars and efficient probabilistic parsing algorithms, approximation techniques applied to grammars and parsers to increase parsing efficiency, and techniques for underspecification and the integration of semantic information in the syntactic analysis to deal with massive ambiguity. The book gives a state-of-the-art overview of current research and development in parsing technologies. In its chapters we see how probabilistic methods have entered the toolbox of computational linguistics in order to be applied in both parsing theory and parsing practice. The book is both a unique reference for researchers and an introduction to the field for interested graduate students.
This book is based on contributions to the Seventh European Summer School on Language and Speech Communication that was held at KTH in Stockholm, Sweden, in July of 1999 under the auspices of the European Language and Speech Network (ELSNET). The topic of the summer school was "Multimodality in Language and Speech Systems" (MiLaSS). The issue of multimodality in interpersonal, face-to-face communication has been an important research topic for a number of years. With the increasing sophistication of computer-based interactive systems using language and speech, the topic of multimodal interaction has received renewed interest both in terms of human-human interaction and human-machine interaction. Nine lecturers contributed to the summer school with courses on specialized topics ranging from the technology and science of creating talking faces to human-human communication, which is mediated by computer for the handicapped. Eight of the nine lecturers are represented in this book. The summer school attracted more than 60 participants from Europe, Asia and North America representing not only graduate students but also senior researchers from both academia and industry.