Welcome to Loot.co.za!
Showing 1 - 17 of 17 matches in All Departments
This collection of papers takes linguists to the leading edge of techniques in generative lexicon (GL) theory, the linguistic composition methodology that arose from the imperative to provide a compositional semantics for the contextual modifications in meaning that emerge in real linguistic usage. Today's growing shift towards distributed compositional analyses evinces the applicability of GL theory, and the contributions to this volume, presented at three international workshops (GL-2003, GL-2005 and GL-2007), address the relationship between compositionality in language and the mechanisms of selection in grammar that are necessary to maintain this property. The core unresolved issues in compositionality, relating to the interpretation of context and the mechanisms of selection, are treated from varying perspectives within GL theory, including its basic theoretical mechanisms and its analytical viewpoint on linguistic phenomena.
This book integrates the research being carried out in the field of lexical semantics in linguistics with the work on knowledge representation and lexicon design in computational linguistics. It provides a stimulating and unique discussion between the computational perspective of lexical meaning and the concerns of the linguist for the semantic description of lexical items in the context of syntactic descriptions.
What is the lexicon, what does it contain, and how is it structured? What principles determine the functioning of the lexicon as a component of natural language grammar? What role does lexical information play in linguistic theory? This accessible introduction aims to answer these questions, and explores the relation of the lexicon to grammar as a whole. It includes a critical overview of major theoretical frameworks, and puts forward a unified treatment of lexical structure and design. The text can be used for introductory and advanced courses, and for courses that touch upon different aspects of the lexicon, such as lexical semantics, lexicography, syntax, general linguistics, computational lexicology and ontology design. The book equips students with a set of tools for working with lexical data of all kinds, and an abundance of exercises and in-class activities is designed to ensure that students are actively engaged with the content and effectively acquire the knowledge and skills they need.
This is the first volume of a unique collection that brings together the best English-language problems created for students competing in the Computational Linguistics Olympiad. These problems are representative of the diverse areas presented in the competition and designed with four principles in mind:
* To challenge the student analytically, without requiring any explicit knowledge or experience in linguistics or computer science;
* To expose the student to the different kinds of reasoning required when encountering a new phenomenon in a language, both as a theoretical topic and as an applied problem;
* To foster the natural curiosity students have about the workings of their own language, as well as to introduce them to the beauty and structure of other languages;
* To introduce the models and techniques used by computers to understand human language.
Aside from being a fun intellectual challenge, the Olympiad mimics the skills used by researchers and scholars in the field of computational linguistics. In an increasingly global economy where businesses operate across borders and languages, a strong pool of computational linguists is a competitive advantage, and an important component of both security and growth in the 21st century. This collection of problems is a wonderful general introduction to the field of linguistics through the analytic problem-solving technique. "A fantastic collection of problems for anyone who is curious about how human language works! These books take serious scientific questions and present them in a fun, accessible way. Readers exercise their logical thinking capabilities while learning about a wide range of human languages, linguistic phenomena, and computational models." - Kevin Knight, USC Information Sciences Institute
The study of formal languages and of related families of automata has long been at the core of theoretical computer science. Until recently, the main reasons for this centrality were connected with the specification and analysis of programming languages, which led naturally to the following questions. How might a grammar be written for such a language? How could we check whether a text were or were not a well-formed program generated by that grammar? How could we parse a program to provide the structural analysis needed by a compiler? How could we check for ambiguity to ensure that a program has a unique analysis to be passed to the computer? This focus on programming languages has now been broadened by the increasing concern of computer scientists with designing interfaces which allow humans to communicate with computers in a natural language, at least concerning problems in some well-delimited domain of discourse. The necessary work in computational linguistics draws on studies both within linguistics (the analysis of human languages) and within artificial intelligence. The present volume is the first textbook to combine the topics of formal language theory traditionally taught in the context of programming languages with an introduction to issues in computational linguistics. It is one of a series, The AKM Series in Theoretical Computer Science, designed to make key mathematical developments in computer science readily accessible to undergraduate and beginning graduate students.
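The membership and ambiguity questions listed above can be made concrete with a small sketch. The toy grammar below is hypothetical (it is not taken from the book): a CYK recognizer tests whether a token string is generated by a grammar in Chomsky normal form, which is exactly the "is this text a well-formed program?" question.

```python
# Hypothetical toy grammar in Chomsky normal form, for illustration only:
#   S -> A B | B A ;  A -> 'a' ;  B -> 'b'
unary = {"a": {"A"}, "b": {"B"}}
binary = {("A", "B"): {"S"}, ("B", "A"): {"S"}}

def cyk_recognize(tokens, start="S"):
    """Return True if `tokens` is generated by the grammar (CYK algorithm)."""
    n = len(tokens)
    if n == 0:
        return False
    # table[i][j] holds the nonterminals that derive tokens[i:j+1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, tok in enumerate(tokens):
        table[i][i] = set(unary.get(tok, ()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):          # try every split point
                for left in table[i][k]:
                    for right in table[k + 1][j]:
                        table[i][j] |= binary.get((left, right), set())
    return start in table[0][n - 1]

print(cyk_recognize(["a", "b"]))  # True: S -> A B
print(cyk_recognize(["a", "a"]))  # False: no rule derives it
```

The same table, extended to count derivations per cell instead of merely recording nonterminals, answers the ambiguity question as well.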
This state-of-the-art survey comprises a selection of the material presented at the International Dagstuhl Seminar on Annotating, Extracting and Reasoning about Time and Events, held in Dagstuhl Castle, Germany, in April 2005. The seminar centered around an emerging de facto standard for time and event annotation: TimeML. The 9 papers included in the book constitute the thoroughly cross-reviewed and revised versions of selected summaries and findings presented and discussed at the seminar. The papers feature current research and discuss open problems concerning annotation, temporal reasoning, and event identification. The main concern is with determining the effectiveness of the TimeML language for consistent annotation, the usefulness of such annotations for further processing, and the question of which modifications should be applied to the standard to improve its convenience in applications such as question-answering and information retrieval.
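To give a flavor of the annotation style the seminar focused on, here is a minimal sketch of reading a TimeML-style fragment with Python's standard library. The EVENT, TIMEX3 and TLINK element names follow the actual TimeML standard, but the sentence and all attribute values are invented for this example.

```python
import xml.etree.ElementTree as ET

# A hand-made, simplified TimeML-style fragment (illustrative only).
fragment = """
<TimeML>
  John <EVENT eid="e1" class="OCCURRENCE">arrived</EVENT> on
  <TIMEX3 tid="t1" type="DATE" value="2005-04-10">April 10, 2005</TIMEX3>.
  <TLINK lid="l1" eventID="e1" relatedToTime="t1" relType="IS_INCLUDED"/>
</TimeML>
"""

root = ET.fromstring(fragment)
# Collect events, normalized time values, and the temporal links between them.
events = {e.get("eid"): e.text for e in root.iter("EVENT")}
times = {t.get("tid"): t.get("value") for t in root.iter("TIMEX3")}
links = [(l.get("eventID"), l.get("relType"), l.get("relatedToTime"))
         for l in root.iter("TLINK")]

print(events)  # {'e1': 'arrived'}
print(times)   # {'t1': '2005-04-10'}
print(links)   # [('e1', 'IS_INCLUDED', 't1')]
```

The inline tags anchor the event to a normalized calendar date, which is what makes the annotations usable for downstream question-answering and retrieval.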
The goal of this book is to integrate the research being carried out in the field of lexical semantics in linguistics with the work on knowledge representation and lexicon design in computational linguistics. Rarely do these two camps meet and discuss the demands and concerns of each other's fields. Therefore, this book is interesting in that it provides a stimulating and unique discussion between the computational perspective of lexical meaning and the concerns of the linguist for the semantic description of lexical items in the context of syntactic descriptions. This book grew out of the papers presented at a workshop held at Brandeis University in April 1988, funded by the American Association for Artificial Intelligence. The entire workshop as well as the discussion periods accompanying each talk were recorded. Once complete copies of each paper were available, they were distributed to participants, who were asked to provide written comments on the texts for review purposes. There is currently a growing interest in the content of lexical entries from a theoretical perspective as well as a growing need to understand the organization of the lexicon from a computational view. This volume attempts to define the directions that need to be taken in order to achieve the goal of a coherent theory of lexical organization.
Recent work on formal methods in computational lexical semantics has had the effect of bringing many linguistic formalisms much closer to the knowledge representation languages used in artificial intelligence. Formalisms are now emerging which may be more expressive and formally better understood than many knowledge representation languages. The interests of computational linguists now extend to include such domains as commonsense knowledge, inheritance, default reasoning, collocational relations, and even domain knowledge. With such an extension of the normal purview of "linguistic" knowledge, one may question whether there is any logical justification for distinguishing between lexical semantics and commonsense reasoning. This volume explores the question from several methodological and theoretical perspectives. What emerges is a clear consensus that the notion of the lexicon and lexical knowledge assumed in earlier linguistic research is grossly inadequate and fails to address the deeper semantic issues required for natural language analysis.
Interpreting Motion presents an integrated perspective on how language structures constrain concepts of motion and how the world shapes the way motion is linguistically expressed. Natural language allows for efficient communication of elaborate descriptions of movement without requiring a precise specification of the motion. Interpreting Motion is the first book to analyze the semantics of motion expressions in terms of the formalisms of qualitative spatial reasoning. It shows how motion descriptions in language are mapped to trajectories of moving entities based on qualitative spatio-temporal relationships. The authors provide an extensive discussion of prior research on spatial prepositions and motion verbs, devoting chapters to the compositional semantics of motion sentences, the formal representations needed for computers to reason qualitatively about time, space, and motion, and the methodology for annotating corpora with linguistic information in order to train computer programs to reproduce the annotation. The applications they illustrate include route navigation, the mapping of travel narratives, question-answering, image and video tagging, and graphical rendering of scenes from textual descriptions. The book is written accessibly for a broad scientific audience of linguists, cognitive scientists, computer scientists, and those working in fields such as artificial intelligence and geographic information systems.
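The idea of mapping motion descriptions to qualitative spatio-temporal relationships can be sketched in a few lines. The following toy example is not the authors' formalism: it reduces a mover's positions at successive time steps to the qualitative states inside/outside a region, then maps the state sequence to a coarse motion-verb-like label.

```python
# Illustrative sketch only; the states and labels are invented for the example.

def qualitative_state(position, region_center, radius):
    """Classify a 1-D position as inside or outside an interval region."""
    return "inside" if abs(position - region_center) <= radius else "outside"

def classify_motion(positions, region_center=0.0, radius=1.0):
    """Map a trajectory to a coarse motion label via its qualitative states."""
    states = [qualitative_state(p, region_center, radius) for p in positions]
    # Collapse consecutive duplicates: ['outside','outside','inside'] -> ['outside','inside']
    collapsed = [s for i, s in enumerate(states) if i == 0 or s != states[i - 1]]
    patterns = {
        ("outside", "inside"): "enter",
        ("inside", "outside"): "leave",
        ("outside", "inside", "outside"): "pass through",
    }
    return patterns.get(tuple(collapsed), "move")

print(classify_motion([5.0, 2.0, 0.5]))   # 'enter'
print(classify_motion([0.0, 2.0, 5.0]))   # 'leave'
print(classify_motion([-3.0, 0.0, 3.0]))  # 'pass through'
```

Note that no precise trajectory is needed, only the order of qualitative states, which mirrors the book's point that language communicates motion without exact specification.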
This reader collects and introduces important work in linguistics, computer science, artificial intelligence, and computational linguistics on the use of linguistic devices in natural languages to situate events in time: whether they are past, present, or future; whether they are real or hypothetical; when an event might have occurred, and how long it could have lasted. In focussing on the treatment and retrieval of time-based information it seeks to lay the foundation for temporally-aware natural language computer processing systems, for example those that process documents on the worldwide web to answer questions or produce summaries. The development of such systems requires the application of technical knowledge from many different disciplines. The book is the first to bring these disciplines together, by means of classic and contemporary papers in four areas: tense, aspect, and event structure; temporal reasoning; the temporal structure of natural language discourse; and temporal annotation. Clear, self-contained editorial introductions to each area provide the necessary technical background for the non-specialist, explaining the underlying connections across disciplines. A wide range of students and professionals in academia and industry will value this book as an introduction and guide to a new and vital technology. The former include researchers, students, and teachers of natural language processing, linguistics, artificial intelligence, computational linguistics, computer science, information retrieval (including the growing speciality of question-answering), library sciences, human-computer interaction, and cognitive science. Those in industry include corporate managers and researchers, software product developers, and engineers in information-intensive companies, such as on-line database and web-service providers.
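A staple of the temporal-reasoning literature the reader surveys is Allen's calculus of interval relations. As a minimal sketch (intervals as (start, end) pairs; only a few of Allen's thirteen relations are covered, and the example times are invented):

```python
def allen_relation(a, b):
    """Return a coarse Allen relation holding from interval a to interval b."""
    (a1, a2), (b1, b2) = a, b
    if a2 < b1:
        return "before"          # a ends strictly before b starts
    if a2 == b1:
        return "meets"           # a ends exactly where b starts
    if a1 == b1 and a2 == b2:
        return "equals"
    if b1 < a1 and a2 < b2:
        return "during"          # a is strictly inside b
    if a1 < b1 < a2 < b2:
        return "overlaps"
    return "other"               # remaining relations, omitted for brevity

# "The meeting (9-10) happened before lunch (12-13)."
print(allen_relation((9, 10), (12, 13)))       # 'before'
print(allen_relation((9, 12), (12, 13)))       # 'meets'
print(allen_relation((12.2, 12.5), (12, 13)))  # 'during'
```

Relations like these are what a temporally-aware system infers from tense, aspect, and explicit time expressions in order to answer "when" questions.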
Create your own natural language training corpus for machine learning. Whether you're working with English, Chinese, or any other natural language, this hands-on book guides you through a proven annotation development cycle--the process of adding metadata to your training corpus to help ML algorithms work more efficiently. You don't need any programming or linguistics experience to get started. Using detailed examples at every step, you'll learn how the "MATTER Annotation Development Process" helps you Model, Annotate, Train, Test, Evaluate, and Revise your training corpus. You also get a complete walkthrough of a real-world annotation project.
* Define a clear annotation goal before collecting your dataset (corpus)
* Learn tools for analyzing the linguistic content of your corpus
* Build a model and specification for your annotation project
* Examine the different annotation formats, from basic XML to the Linguistic Annotation Framework
* Create a gold standard corpus that can be used to train and test ML algorithms
* Select the ML algorithms that will process your annotated data
* Evaluate the test results and revise your annotation task
* Learn how to use lightweight software for annotating texts and adjudicating the annotations
This book is a perfect companion to O'Reilly's "Natural Language Processing with Python."
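One step in building a gold standard corpus is checking that two annotators agree before adjudicating their disagreements; Cohen's kappa is the standard measure. A small sketch (the labels and annotations below are invented for illustration, not from the book):

```python
from collections import Counter

def cohens_kappa(ann1, ann2):
    """Cohen's kappa for two annotators' label sequences of equal length."""
    assert len(ann1) == len(ann2) and ann1
    n = len(ann1)
    # Observed agreement: fraction of items with identical labels.
    observed = sum(a == b for a, b in zip(ann1, ann2)) / n
    # Expected chance agreement from each annotator's label distribution.
    c1, c2 = Counter(ann1), Counter(ann2)
    expected = sum((c1[l] / n) * (c2[l] / n) for l in set(c1) | set(c2))
    return (observed - expected) / (1 - expected)

# Two annotators tagging six tokens as event mentions ('EVENT') or not ('O').
ann1 = ["EVENT", "O", "EVENT", "O", "O", "EVENT"]
ann2 = ["EVENT", "O", "O",     "O", "O", "EVENT"]
print(round(cohens_kappa(ann1, ann2), 3))  # 0.667
```

A kappa well below 1.0 signals that the annotation specification needs revision, which is exactly the Evaluate-and-Revise loop of the MATTER cycle.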
Lexical ambiguity is one of the most intractable problems facing language processing studies and is at the core of research in lexical semantics. The papers in this collection constitute not just a set of diverse yet related articles in this area of research, but rather make up a unique collection of work on the relationship between logical polysemy, sense extension, and discourse structure. Each paper addresses the following questions: what is the representation of a lexical item such that it may assume different senses? What is it about the representation of a lexical item that gives rise to sense extensions and to the phenomenon of logical polysemy? Three major subthemes run through the papers: the role of pragmatics and discourse in lexical disambiguation; the analysis of logical polysemy as a compositional process; and the treatment of sense extension and referential transfer phenomena.
The Generative Lexicon presents a novel and exciting theory of lexical semantics that addresses the problem of the "multiplicity of word meaning"; that is, how we are able to give an infinite number of senses to words with finite means. The first formally elaborated theory of a generative approach to word meaning, it lays the foundation for an implemented computational treatment of word meaning that connects explicitly to a compositional semantics. In contrast to the static view of word meaning (where each word is characterized by a predetermined number of word senses), which imposes a tremendous bottleneck on the performance capability of any natural language processing system, Pustejovsky proposes that the lexicon becomes an active and central component in the linguistic description. The essence of his theory is that the lexicon functions generatively: first, by providing a rich and expressive vocabulary for characterizing lexical information; then, by developing a framework for manipulating fine-grained distinctions in word descriptions; and finally, by formalizing a set of mechanisms for specialized composition of aspects of such descriptions of words as they occur in context, from which extended and novel senses are generated.
The subjects covered include the semantics of nominals (figure/ground nominals, relational nominals, and other event nominals); the semantics of causation (in particular, how causation is lexicalized in language, including causative/unaccusatives, aspectual predicates, experiencer predicates, and modal causatives); how semantic types constrain syntactic expression (such as the behavior of type shifting and type coercion operations); a formal treatment of event semantics (with subevents); and a general treatment of the problem of polysemy. (Language, Speech, and Communication series)
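The flavor of type coercion can be sketched informally in code. The toy entry below is not Pustejovsky's formal notation: it records a noun's qualia-style roles, and an event-selecting verb like "begin" coerces the noun into the event stored in its telic role ("begin the book" is read as "begin reading the book"). All entries and names here are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class LexicalEntry:
    """A drastically simplified qualia-style lexical entry (illustrative only)."""
    word: str
    formal: str    # what kind of thing it is
    telic: str     # its purpose: an associated event
    agentive: str  # how it comes into being: an associated event

book = LexicalEntry("book", formal="physical_object.information",
                    telic="read", agentive="write")

def coerce_to_event(verb, noun_entry):
    """Interpret an entity-denoting complement of an event-selecting verb
    by substituting the event in the noun's telic role."""
    return f"{verb} {noun_entry.telic}ing the {noun_entry.word}"

print(coerce_to_event("begin", book))  # 'begin reading the book'
```

The point is that no separate "event" sense of *book* is listed anywhere: the reading is generated compositionally from the one entry, which is the theory's answer to the multiplicity of word meaning.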
Researchers in lexical semantics, logical semantics, and syntax have traditionally employed different approaches in their study of natural languages. Yet recent research in all three fields has demonstrated a growing recognition that the grammars of natural languages structure and refer to events in particular ways. This convergence on the theory of events as grammatical objects is the motivation for this volume, which brings together premier researchers in these disciplines to specifically address the topic of event structure. The selection of works presented in this volume originated from a 1997 workshop funded by the National Science Foundation on Events as Grammatical Objects, from the Combined Perspectives of Lexical Semantics, Logical Semantics and Syntax.