Parsing efficiency is crucial when building practical natural language systems. This is especially the case for interactive systems such as natural language database access, interfaces to expert systems and interactive machine translation. Despite its importance, parsing efficiency has received little attention in the area of natural language processing. In the areas of compiler design and theoretical computer science, on the other hand, parsing algorithms have been evaluated primarily in terms of theoretical worst-case analysis (e.g. O(n^3)), and very few practical comparisons have been made. This book introduces a context-free parsing algorithm that parses natural language more efficiently in practice than any other existing parsing algorithm. Its feasibility for use in practical systems is being proven in its application to a Japanese language interface at Carnegie Group Inc., and to the continuous speech recognition project at Carnegie-Mellon University. This work was done while I was pursuing a Ph.D. degree at Carnegie-Mellon University. My advisers, Herb Simon and Jaime Carbonell, deserve many thanks for their unfailing support, advice and encouragement during my graduate studies. I would like to thank Phil Hayes and Ralph Grishman for their helpful comments and criticism that in many ways improved the quality of this book. I wish also to thank Steven Brooks for insightful comments on theoretical aspects of the book (chapter 4, appendices A, B and C), and Rich Thomason for improving the linguistic part of the book (the very beginning of section 1.1).
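The O(n^3) worst-case bound mentioned above is the classic result for general context-free parsing. As a neutral illustration (not the book's own algorithm), a minimal CYK recognizer shows where the cubic bound comes from: three nested loops over span length, span start, and split point. The toy grammar and lexicon below are invented for this sketch.

```python
from itertools import product

# A toy grammar in Chomsky normal form (hypothetical rules, not from the book):
# S -> NP VP, VP -> V NP, plus a small lexicon.
GRAMMAR = {
    ("NP", "VP"): {"S"},
    ("V", "NP"): {"VP"},
}
LEXICON = {
    "people": {"NP"},
    "ideas": {"NP"},
    "see": {"V"},
}

def cyk_parse(words):
    """Return True if the toy grammar derives the sentence; O(n^3) time."""
    n = len(words)
    # table[i][j] holds the nonterminals deriving words[i:j+1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        table[i][i] = set(LEXICON.get(w, ()))
    for span in range(2, n + 1):          # span length
        for i in range(n - span + 1):     # span start
            j = i + span - 1
            for k in range(i, j):         # split point
                for b, c in product(table[i][k], table[k + 1][j]):
                    table[i][j] |= GRAMMAR.get((b, c), set())
    return "S" in table[0][n - 1]

print(cyk_parse("people see ideas".split()))  # True
print(cyk_parse("see people".split()))        # False
```

The triple loop makes the cubic cost visible; the practical point of the book is precisely that typical natural-language inputs can be parsed far faster than this worst case suggests.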
Computational Models of Mixed-Initiative Interaction brings together research that spans several disciplines related to artificial intelligence, including natural language processing, information retrieval, machine learning, planning, and computer-aided instruction, to account for the role that mixed initiative plays in the design of intelligent systems. The ten contributions address the single issue of how control of an interaction should be managed when abilities needed to solve a problem are distributed among collaborating agents. Managing control of an interaction among humans and computers to gather and assemble knowledge and expertise is a major challenge that must be met to develop machines that effectively collaborate with humans. This is the first collection to specifically address this issue.
1. Metaphors and Logic. Metaphors are among the most vigorous offspring of the creative mind; but their vitality springs from the fact that they are logical organisms in the ecology of language. I aim to use logical techniques to analyze the meanings of metaphors. My goal here is to show how contemporary formal semantics can be extended to handle metaphorical utterances. What distinguishes this work is that it focuses intensely on the logical aspects of metaphors. I stress the role of logic in the generation and interpretation of metaphors. While I don't presuppose any formal training in logic, some familiarity with philosophical logic (the propositional calculus and the predicate calculus) is helpful. Since my theory makes great use of the notion of structure, I refer to it as the structural theory of metaphor (STM). STM is a semantic theory of metaphor: if STM is correct, then metaphors are cognitively meaningful and are non-trivially logically linked with truth. I aim to extend possible worlds semantics to handle metaphors. I'll argue that some sentences in natural languages like English have multiple meanings: "Juliet is the sun" has (at least) two meanings: the literal meaning "(Juliet is the sun)LIT" and the metaphorical meaning "(Juliet is the sun)MET". Each meaning is a function from (possible) worlds to truth-values. I deny that these functions are identical; I deny that the metaphorical function is necessarily false or necessarily true.
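The possible-worlds picture in this blurb can be sketched directly: a meaning is a function from worlds to truth-values, and the literal and metaphorical readings are simply two different such functions. The worlds and truth assignments below are invented for illustration and are not STM's actual formalism.

```python
# Model a meaning as a function from possible worlds to truth-values.
# Two hypothetical worlds, purely for illustration.
WORLDS = ["w_actual", "w_fantasy"]

def lit_meaning(world):
    # Literal reading: Juliet is the astronomical object (false in the
    # actual world, true only in some fanciful world).
    return world == "w_fantasy"

def met_meaning(world):
    # Metaphorical reading: Juliet is central and radiant to Romeo
    # (true in the actual world, false elsewhere).
    return world == "w_actual"

# The two functions differ at some world, so the sentence has (at least)
# two meanings; and the metaphorical meaning is contingent, i.e. neither
# necessarily true nor necessarily false.
distinct = any(lit_meaning(w) != met_meaning(w) for w in WORLDS)
contingent = any(met_meaning(w) for w in WORLDS) and not all(met_meaning(w) for w in WORLDS)
print(distinct, contingent)  # True True
```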
This volume is a selection of papers presented at a workshop entitled Predicative Forms in Natural Language and in Lexical Knowledge Bases, organized in Toulouse in August 1996. A predicate is a named relation that exists among one or more arguments. In natural language, predicates are realized as verbs, prepositions, nouns and adjectives, to cite the most frequent ones. Research on the identification, organization, and semantic representation of predicates in artificial intelligence and in language processing is a very active research field. The emergence of new paradigms in theoretical language processing, the definition of new problems and the important evolution of applications have, in fact, stimulated much interest and debate on the role and nature of predicates in natural language. From a broad theoretical perspective, the notion of predicate is central to research on the syntax-semantics interface, the generative lexicon, the definition of ontology-based semantic representations, and the formation of verb semantic classes. From a computational perspective, the notion of predicate plays a central role in a number of applications including the design of lexical knowledge bases, the development of automatic indexing systems for the extraction of structured semantic representations, and the creation of interlingual forms in machine translation.
Most books about computational (lexical) semantic lexicons deal with the depth (or content) aspect of lexicons, ignoring the breadth (or coverage) aspect. This book presents a first attempt in the community to address both issues, content and coverage of computational semantic lexicons, in a thorough manner. Moreover, it addresses issues which have not yet been tackled in implemented systems, such as the application time of lexical rules. Lexical rules and lexical underspecification are also contrasted in implemented systems. The main approaches in the field of computational (lexical) semantics are represented in the present book (including WordNet, Cyc, Mikrokosmos, and the Generative Lexicon). This book embraces several fields (and subfields) as different as linguistics (theoretical, computational, semantics, pragmatics), psycholinguistics, cognitive science, computer science, artificial intelligence, knowledge representation, statistics and natural language processing. The book also constitutes a very good introduction to the state of the art in computational semantic lexicons of the late 1990s.
One of the aims of Natural Language Processing is to facilitate the use of computers by allowing their users to communicate in natural language. There are two important aspects to person-machine communication: understanding and generating. While natural language understanding has been a major focus of research, natural language generation is a relatively new and increasingly active field of research. This book presents an overview of the state of the art in natural language generation, describing both new results and directions for new research. The principal emphasis of natural language generation is not only to facilitate the use of computers but also to develop a computational theory of human language ability. In doing so, it is a tool for extending, clarifying and verifying theories that have been put forth in linguistics, psychology and sociology about how people communicate. A natural language generator will typically have access to a large body of knowledge from which to select information to present to users, as well as numerous ways of expressing it. Generating a text can thus be seen as a problem of decision-making under multiple constraints: constraints from the propositional knowledge at hand, from the linguistic tools available, from the communicative goals and intentions to be achieved, from the audience the text is aimed at and from the situation and past discourse. Researchers in generation try to identify the factors involved in this process and determine how best to represent the factors and their dependencies.
It is with great pleasure that we are presenting to the community the second edition of this extraordinary handbook. It has been over 15 years since the publication of the first edition and there have been great changes in the landscape of philosophical logic since then. The first edition has proved invaluable to generations of students and researchers in formal philosophy and language, as well as to consumers of logic in many applied areas. The main logic article in the Encyclopaedia Britannica 1999 described the first edition as 'the best starting point for exploring any of the topics in logic'. We are confident that the second edition will prove to be just as good! The first edition was the second handbook published for the logic community. It followed the North Holland one-volume Handbook of Mathematical Logic, published in 1977, edited by the late Jon Barwise. The four-volume Handbook of Philosophical Logic, published 1983-1989, came at a fortunate temporal junction in the evolution of logic. This was the time when logic was gaining ground in computer science and artificial intelligence circles. These areas were under increasing commercial pressure to provide devices which help and/or replace the human in his daily activity. This pressure required the use of logic in the modelling of human activity and organisation on the one hand, and to provide the theoretical basis for computer program constructs on the other.
This series will include monographs and collections of studies devoted to the investigation and exploration of knowledge, information, and data processing systems of all kinds, no matter whether human, (other) animal, or machine. Its scope is intended to span the full range of interests from classical problems in the philosophy of mind and philosophical psychology through issues in cognitive psychology and sociobiology (concerning the mental capabilities of other species) to ideas related to artificial intelligence and computer science. While primary emphasis will be placed upon theoretical, conceptual, and epistemological aspects of these problems and domains, empirical, experimental, and methodological studies will also appear from time to time. The problems posed by metaphor and analogy are among the most challenging that confront the field of knowledge representation. In this study, Eileen Way has drawn upon the combined resources of philosophy, psychology, and computer science in developing a systematic and illuminating theoretical framework for understanding metaphors and analogies. While her work provides solutions to difficult problems of knowledge representation, it goes much further by investigating some of the most important philosophical assumptions that prevail within artificial intelligence today. By exposing the limitations inherent in the assumption that languages are both literal and truth-functional, she has advanced our grasp of the nature of language itself. J.R.F.
Intensional logic has emerged, since the 1960s, as a powerful theoretical and practical tool in such diverse disciplines as computer science, artificial intelligence, linguistics, philosophy and even the foundations of mathematics. The present volume is a collection of carefully chosen papers, giving the reader a taste of the frontline state of research in intensional logics today. Most papers are representative of new ideas and/or new research themes. The collection would benefit the researcher as well as the student. This book is a most welcome addition to our series. The Editors. CONTENTS: Preface (ix); Johan van Benthem and Natasha Alechina, Modal Quantification over Structured Domains; Patrick Blackburn and Wilfried Meyer-Viol, Modal Logic and Model-Theoretic Syntax (29); Ruy J. G. B. de Queiroz and Dov M. Gabbay, The Functional Interpretation of Modal Necessity (61); Vladimir V. Rybakov, Logics of Schemes for First-Order Theories and Poly-Modal Propositional Logic (93); Jerry Seligman, The Logic of Correct Description (107); Dimiter Vakarelov, Modal Logics of Arrows (137); Heinrich Wansing, A Full-Circle Theorem for Simple Tense Logic (173); Michael Zakharyaschev, Canonical Formulas for Modal and Superintuitionistic Logics: A Short Outline (195); Edward N. Zalta, The Modal Object Calculus and its Interpretation (249); Name Index (281); Subject Index (285). PREFACE: Intensional logic has many faces. In this preface we identify some prominent ones without aiming at completeness.
Data-Driven Techniques in Speech Synthesis gives a first review of this new field. All areas of speech synthesis from text are covered, including text analysis, letter-to-sound conversion, prosodic marking and extraction of parameters to drive synthesis hardware. Fuelled by cheap computer processing and memory, the fields of machine learning in particular and artificial intelligence in general are increasingly exploiting approaches in which large databases act as implicit knowledge sources, rather than explicit rules manually written by experts. Speech synthesis is one application area where the new approach is proving powerfully effective, the reliance upon fragile specialist knowledge having hindered its development in the past. This book provides the first review of the new topic, with contributions from leading international experts. Data-Driven Techniques in Speech Synthesis is at the leading edge of current research, written by well respected experts in the field. The text is concise and accessible, and guides the reader through the new technology. The book will primarily appeal to research engineers and scientists working in the area of speech synthesis. However, it will also be of interest to speech scientists and phoneticians as well as managers and project leaders in the telecommunications industry who need an appreciation of the capabilities and potential of modern speech synthesis technology.
This book is based on contributions to the Seventh European Summer School on Language and Speech Communication, held at KTH in Stockholm, Sweden, in July of 1999 under the auspices of the European Language and Speech Network (ELSNET). The topic of the summer school was "Multimodality in Language and Speech Systems" (MiLaSS). The issue of multimodality in interpersonal, face-to-face communication has been an important research topic for a number of years. With the increasing sophistication of computer-based interactive systems using language and speech, the topic of multimodal interaction has received renewed interest both in terms of human-human interaction and human-machine interaction. Nine lecturers contributed to the summer school with courses on specialized topics ranging from the technology and science of creating talking faces to human-human communication that is mediated by computer for the handicapped. Eight of the nine lecturers are represented in this book. The summer school attracted more than 60 participants from Europe, Asia and North America, representing not only graduate students but also senior researchers from both academia and industry.
This book celebrates the work of Yorick Wilks in the form of a selection of his papers which are intended to reflect the range and depth of his work. The volume accompanies a Festschrift which celebrates his contribution to the fields of Computational Linguistics and Artificial Intelligence. The selected papers reflect Yorick's contribution to both practical and theoretical aspects of automatic language processing.
Web personalization can be defined as any set of actions that can tailor the Web experience to a particular user or set of users. To achieve effective personalization, organizations must rely on all available data, including the usage and click-stream data (reflecting user behaviour), the site content, the site structure, domain knowledge, as well as user demographics and profiles. In addition, efficient and intelligent techniques are needed to mine this data for actionable knowledge, and to effectively use the discovered knowledge to enhance the users' Web experience. These techniques must address important challenges emanating from the size and the heterogeneous nature of the data itself, as well as the dynamic nature of user interactions with the Web. These challenges include the scalability of the personalization solutions, data integration, and successful integration of techniques from machine learning, information retrieval and filtering, databases, agent architectures, knowledge representation, data mining, text mining, statistics, user modelling and human-computer interaction. The Semantic Web adds one more dimension to this. The workshop will focus on the Semantic Web approach to personalization and adaptation. The Web has become an integral part of numerous applications in which a user interacts with a service provider, product sellers, governmental organisations, friends and colleagues. Content and services are available at different sources and places. Hence, Web applications need to combine all available knowledge in order to form personalized, user-friendly, and business-optimal services.
In recent years, computer scientists have shown an increasing interest in the structure of biological molecules and the ways they can be manipulated in vitro in order to define theoretical models of computation based on genetic engineering tools. Along the same lines, a parallel interest is growing in the process of evolution of living organisms. Much of the current data for genomes is expressed in the form of maps, which are now becoming available and permit the study of the evolution of organisms at the scale of the genome for the first time. On the other hand, there is an active trend nowadays throughout the field of computational biology toward abstracted, hierarchical views of biological sequences, which is very much in the spirit of computational linguistics. In recent decades, results and methods in the field of formal language theory that might be applied to the description of biological sequences have been pointed out.
In the fall of 1985 Carnegie Mellon University established a Department of Philosophy. The focus of the department is logic broadly conceived, philosophy of science, in particular of the social sciences, and linguistics. To mark the inauguration of the department, a daylong celebration was held on April 5, 1986. This celebration consisted of two keynote addresses by Patrick Suppes and Thomas Schwartz, seminars directed by members of the department, and a panel discussion on the computational model of mind moderated by Dana S. Scott. The various contributions, in modified and expanded form, are the core of this collection of essays, and they are, I believe, of more than parochial interest: they turn attention to substantive and reflective interdisciplinary work. The collection is divided into three parts. The first part gives perspectives (i) on general features of the interdisciplinary enterprise in philosophy (by Patrick Suppes, Thomas Schwartz, Herbert A. Simon, and Clark Glymour), and (ii) on a particular topic that invites such interaction, namely computational models of the mind (with contributions by Gilbert Harman, John Haugeland, Jay McClelland, and Allen Newell). The second part contains (mostly informal) reports on concrete research done within that enterprise; the research topics range from decision theory and the philosophy of economics through foundational problems in mathematics to issues in aesthetics and computational linguistics. The third part is a postscriptum by Isaac Levi, analyzing directions of (computational) work from his perspective.
This book describes the framework of inductive dependency parsing, a methodology for robust and efficient syntactic analysis of unrestricted natural language text. Coverage includes a theoretical analysis of central models and algorithms, and an empirical evaluation of memory-based dependency parsing using data from Swedish and English. A one-stop reference to dependency-based parsing of natural language, it will interest researchers and system developers in language technology, and is suitable for graduate or advanced undergraduate courses.
This book introduces Naive Semantics (NS), a theory of the knowledge underlying natural language understanding. The basic assumption of NS is that knowing what a word means is not very different from knowing anything else, so that there is no difference in form of cognitive representation between lexical semantics and encyclopedic knowledge. NS represents word meanings as commonsense knowledge, and builds no special representation language (other than elements of first-order logic). The idea of teaching computers commonsense knowledge originated with McCarthy and Hayes (1969), and has been extended by a number of researchers (Hobbs and Moore, 1985; Lenat et al., 1986). Commonsense knowledge is a set of naive beliefs, at times vague and inaccurate, about the way the world is structured. Traditionally, word meanings have been viewed as criterial, as giving truth conditions for membership in the classes words name. The theory of NS, in identifying word meanings with commonsense knowledge, sees word meanings as typical descriptions of classes of objects, rather than as criterial descriptions. Therefore, reasoning with NS representations is probabilistic rather than monotonic. This book is divided into two parts. Part I elaborates the theory of Naive Semantics. Chapter 1 illustrates and justifies the theory. Chapter 2 details the representation of nouns in the theory, and Chapter 4 the verbs, originally published as "Commonsense Reasoning with Verbs" (McDowell and Dahlgren, 1987). Chapter 3 describes kind types, which are naive constraints on noun representations.
A history of machine translation (MT) from the point of view of a major writer and innovator in the field is the subject of this book. It details the deep differences between rival groups on how best to do MT, and presents a global perspective covering historical and contemporary systems in Europe, the US and Japan. The author considers MT as a fundamental part of Artificial Intelligence and the ultimate test-bed for all computational linguistics.
1.1 OBJECTIVES. The main objective of this joint work is to bring together some ideas that have played central roles in two disparate theoretical traditions in order to contribute to a better understanding of the relationship between focus and the syntactic and semantic structure of sentences. Within the Prague School tradition and the branch of its contemporary development represented by Hajicova and Sgall (HS in the sequel), topic-focus articulation has long been a central object of study, and it has long been a tenet of Prague School linguistics that topic-focus structure has systematic relevance to meaning. Within the formal semantics tradition represented by Partee (BHP in the sequel), focus has much more recently become an area of concerted investigation, but a number of the semantic phenomena to which focus is relevant have been extensively investigated and given explicit compositional semantic analyses. The emergence of 'tripartite structures' (see Chapter 2) in formal semantics and the partial similarities that can be readily observed between some aspects of tripartite structures and some aspects of Praguian topic-focus articulation have led us to expect that a closer investigation of the similarities and differences in these different theoretical constructs would be a rewarding undertaking with mutual benefits for the further development of our respective theories and potential benefit for the study of semantic effects of focus in other theories as well.
Researchers in a number of disciplines deal with large text sets requiring both text management and text analysis. Faced with a large amount of textual data collected in marketing surveys, literary investigations, historical archives and documentary data bases, these researchers require assistance with organizing, describing and comparing texts. Exploring Textual Data demonstrates how exploratory multivariate statistical methods such as correspondence analysis and cluster analysis can be used to help investigate, assimilate and evaluate textual data. The main text does not contain any strictly mathematical demonstrations, making it accessible to a large audience. This book is very user-friendly with proofs abstracted in the appendices. Full definitions of concepts, implementations of procedures and rules for reading and interpreting results are fully explored. A succession of examples is intended to allow the reader to appreciate the variety of actual and potential applications and the complementary processing methods. A glossary of terms is provided.
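Both correspondence analysis and cluster analysis start from the same object: a term-document contingency table of word counts. A minimal sketch (with an invented four-text mini-corpus, not data from the book) builds such a table and compares texts by cosine similarity, the kind of distance measure a clustering step would consume.

```python
from collections import Counter
import math

# Hypothetical mini-corpus; a real study would use survey responses,
# literary texts, archival documents, etc.
docs = {
    "survey_a": "price quality price service",
    "survey_b": "quality service delivery",
    "novel_x": "love death love fate",
    "novel_y": "death fate honour",
}

# The term-document contingency table: word counts per text.
counts = {name: Counter(text.split()) for name, text in docs.items()}
vocab = sorted({w for c in counts.values() for w in c})

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in vocab)
    na = math.sqrt(sum(a[w] ** 2 for w in vocab))
    nb = math.sqrt(sum(b[w] ** 2 for w in vocab))
    return dot / (na * nb)

# Texts that share vocabulary come out more similar: the two surveys
# resemble each other more than either resembles a novel.
print(cosine(counts["survey_a"], counts["survey_b"]) >
      cosine(counts["survey_a"], counts["novel_x"]))  # True
```

Correspondence analysis would go on to project rows and columns of this table into a low-dimensional space; the sketch only shows the shared starting point.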
This volume contains a selection of papers presented at a Seminar on Intensional Logic held at the University of Amsterdam during the period September 1990-May 1991. Modal logic, either as a topic or as a tool, is common to most of the papers in this volume. A number of the papers are concerned with what may be called well-known or traditional modal systems, but, as a quick glance through this volume will reveal, this by no means implies that they walk the beaten tracks. Indeed, such contributions display new directions, new results, and new techniques to obtain familiar results. Other papers in this volume are representative examples of a current trend in modal logic: the study of extensions or adaptations of the standard systems that have been introduced to overcome various shortcomings of the latter, especially their limited expressive power. Finally, there is another major theme that can be discerned in the volume, a theme that may be described by the slogan 'representing changing information'. Papers falling under this heading address long-standing issues in the area, or present a systematic approach, while a critical survey and a report contributing new techniques are also included. The bulk of the papers on pure modal logic deal with theoretical or even foundational aspects of modal systems.
ABOUT THIS BOOK. This book is intended for researchers who want to keep abreast of current developments in corpus-based natural language processing. It is not meant as an introduction to this field; for readers who need one, several entry-level texts are available, including those of Church and Mercer (1993), Charniak (1993), and Jelinek (1997). This book captures the essence of a series of highly successful workshops held in the last few years. The response in 1993 to the initial Workshop on Very Large Corpora (Columbus, Ohio) was so enthusiastic that we were encouraged to make it an annual event. The following year, we staged the Second Workshop on Very Large Corpora in Kyoto. As a way of managing these annual workshops, we then decided to register a special interest group called SIGDAT with the Association for Computational Linguistics. The demand for international forums on corpus-based NLP has been expanding so rapidly that in 1995 SIGDAT was led to organize not only the Third Workshop on Very Large Corpora (Cambridge, Mass.) but also a complementary workshop entitled From Texts to Tags (Dublin). Obviously, the success of these workshops was in some measure a reflection of the growing popularity of corpus-based methods in the NLP community. But first and foremost, it was due to the fact that the workshops attracted so many high-quality papers.
The subject of the present inquiry is the approach-to-the-truth research, which started with the publication of Sir Karl Popper's Conjectures and Refutations. In the decade before this publication, Popper fiercely attacked the ideas of Rudolf Carnap about confirmation and induction; and ten years later, in the famous tenth chapter of Conjectures he introduced his own ideas about scientific progress and verisimilitude (cf. the quotation on page 6). Abhorring inductivism for its appreciation of logical weakness rather than strength, Popper tried to show that fallibilism could serve the purpose of approach to the truth. To substantiate this idea he formalized the common-sense intuition about preferences, that is: B is to be preferred to A if B has more advantages and fewer drawbacks than A. In 1974, however, David Miller and Pavel Tichy proved that Popper's formal explication could not be used to compare false theories. Subsequently, many researchers proposed alternatives or tried to improve Popper's original definition.
Marcus Contextual Grammars is the first monograph to present a class of grammars introduced about three decades ago, based on the fundamental linguistic phenomenon of strings-contexts interplay (selection). Most of the theoretical results obtained so far about the many variants of contextual grammars are presented with emphasis on classes of questions with relevance for applications in the study of natural language syntax: generative powers, descriptive and computational complexity, automata recognition, semilinearity, structure of the generated strings, ambiguity, regulated rewriting, etc. Constant comparison with families of languages in the Chomsky hierarchy is made. Connections with non-linguistic areas are established, such as molecular computing. Audience: Researchers and students in theoretical computer science (formal language theory and automata theory), computational linguistics, mathematical methods in linguistics, and linguists interested in formal models of syntax.
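The strings-contexts interplay at the heart of contextual grammars can be sketched in a few lines. In the external variant, a grammar consists of axiom strings and contexts (u, v); a derivation step wraps a context around a derived string, turning w into uwv. The tiny grammar below, generating a^n b^n, is an invented illustration, not one of the book's variants.

```python
# A minimal external contextual grammar sketch (hypothetical example):
# axiom strings plus contexts (u, v) wrapped around whole strings.
AXIOMS = {"ab"}
CONTEXTS = [("a", "b")]

def derive(steps):
    """Return all strings derivable in at most `steps` derivation steps."""
    language = set(AXIOMS)
    frontier = set(AXIOMS)
    for _ in range(steps):
        # One derivation step: w -> u + w + v for each context (u, v).
        frontier = {u + w + v for w in frontier for (u, v) in CONTEXTS}
        language |= frontier
    return language

print(sorted(derive(2)))  # ['aaabbb', 'aabb', 'ab']
```

The internal variants studied in the book instead insert contexts around selected substrings, which is where most of the generative power and the comparisons with the Chomsky hierarchy come from.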