This is a book about semantic theories of modality. Its main goal
is to explain and evaluate important contemporary theories within
linguistics and to discuss a wide range of linguistic phenomena
from the perspective of these theories. The introduction describes
the variety of grammatical phenomena associated with modality,
explaining why modal verbs, adjectives, and adverbs represent the
core phenomena. Chapters are then devoted to the possible worlds
semantics for modality developed in modal logic; current theories
of modal semantics within linguistics; and the most important
empirical areas of research. The author concludes by discussing the
relation between modality and other topics, especially tense,
aspect, mood, and discourse meaning.
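The possible-worlds treatment of modal operators that the book surveys can be sketched in a few lines: "necessarily" quantifies universally, and "possibly" existentially, over accessible worlds. This is a minimal illustration; the worlds, accessibility relation, and valuation below are invented, not taken from the book.

```python
# A toy Kripke model: three worlds, an accessibility relation, and a
# valuation saying which atomic propositions hold at which world.
access = {"w1": {"w1", "w2"}, "w2": {"w2"}, "w3": {"w1", "w3"}}
val = {"w1": {"p"}, "w2": {"p", "q"}, "w3": {"q"}}

def holds(formula, w):
    """Evaluate a formula at world w.
    Formulas: an atom string, ('not', f), ('and', f, g), ('box', f), ('dia', f)."""
    if isinstance(formula, str):
        return formula in val[w]
    op = formula[0]
    if op == "not":
        return not holds(formula[1], w)
    if op == "and":
        return holds(formula[1], w) and holds(formula[2], w)
    if op == "box":   # "necessarily": true at every accessible world
        return all(holds(formula[1], v) for v in access[w])
    if op == "dia":   # "possibly": true at some accessible world
        return any(holds(formula[1], v) for v in access[w])
    raise ValueError(f"unknown operator: {op}")

print(holds(("box", "p"), "w1"))  # True: p holds at both w1 and w2
print(holds(("dia", "q"), "w1"))  # True: q holds at the accessible world w2
```

The evaluation clause for "box" is what makes modal talk world-relative: the same formula can be necessary at one world and false at another, depending on which worlds are accessible.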
The book provides an overview of more than a decade of joint R&D efforts in the Low Countries on HLT for Dutch. It not only presents the state of the art of HLT for Dutch in the areas covered but, even more importantly, describes the resources (data and tools) for Dutch that have been created and are now available to both academia and industry worldwide. The contributions cover many areas of human language technology for Dutch: corpus collection (including IPR issues) and corpus building (in particular one corpus aiming at a collection of 500M word tokens), lexicology, anaphora resolution, a semantic network, parsing technology, speech recognition, machine translation, text generation (including summarization), web mining, information extraction, and text-to-speech, to name the most important ones. The book also shows how a medium-sized language community (spanning two territories) can create a digital language infrastructure (resources, tools, etc.) as a basis for subsequent R&D. At the same time, it bundles contributions from almost all the HLT research groups in Flanders and the Netherlands, and hence offers a view of their recent research activities. The targeted readers are mainly researchers in human language technology, in particular those focusing on Dutch: researchers active in larger networks such as CLARIN, META-NET, and FLaReNet, and participating in conferences such as ACL, EACL, NAACL, COLING, RANLP, CICLing, LREC, CLIN and DIR (both in the Low Countries), InterSpeech, ASRU, ICASSP, ISCA, EUSIPCO, CLEF, TREC, etc. In addition, some chapters will interest human language technology policy makers, and even science policy makers in general.
This book provides an in-depth view of the current issues, problems and approaches in the computation of meaning as expressed in language. Aimed at linguists, computer scientists, and logicians with an interest in the computation of meaning, this book focuses on two main topics in recent research in computational semantics. The first topic is the definition and use of underspecified semantic representations, i.e. formal structures that represent part of the meaning of a linguistic object while leaving other parts unspecified. The second topic discussed is semantic annotation. Annotated corpora have become an indispensable resource both for linguists and for developers of language and speech technology, especially when used in combination with machine learning methods. The annotation in corpora has only marginally addressed semantic information, however, since semantic annotation methodologies are still in their infancy. This book discusses the development and application of such methodologies.
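The idea of an underspecified semantic representation can be illustrated with a scopally ambiguous sentence: the representation is stored once, with the relative scope of the quantifiers left open, and fully scoped readings are enumerated on demand. This is only a toy sketch of the general idea; the structure and names below are invented, not drawn from any particular formalism in the book.

```python
from itertools import permutations

# "Every student read a book": two quantifiers with unspecified relative
# scope, plus the nuclear scope they both bind into.
underspecified = {
    "quantifiers": [("every", "x", "student"), ("a", "y", "book")],
    "body": "read(x, y)",
}

def readings(usr):
    """Enumerate fully scoped readings from an underspecified structure."""
    out = []
    for order in permutations(usr["quantifiers"]):
        formula = usr["body"]
        for q, var, restr in reversed(order):   # wrap innermost-first
            conn = "->" if q == "every" else "&"
            formula = f"{q} {var}.({restr}({var}) {conn} {formula})"
        out.append(formula)
    return out

for r in readings(underspecified):
    print(r)
# every x.(student(x) -> a y.(book(y) & read(x, y)))
# a y.(book(y) & every x.(student(x) -> read(x, y)))
```

One compact structure thus stands in for both readings, which is the point: a parser need not commit to a scope order that later processing may never need to resolve.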
This book introduces formal semantics techniques for a natural language processing audience. Methods discussed involve: (i) the denotational techniques used in model-theoretic semantics, which make it possible to determine whether a linguistic expression is true or false with respect to some model of the way things happen to be; and (ii) stages of interpretation, i.e., ways to arrive at meanings by evaluating and converting source linguistic expressions, possibly with respect to contexts, into output (logical) forms that could be used with (i). The book demonstrates that the methods allow wide coverage without compromising the quality of semantic analysis. Access to unrestricted, robust and accurate semantic analysis is widely regarded as an essential component for improving natural language processing tasks, such as: recognizing textual entailment, information extraction, summarization, automatic reply, and machine translation.
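The denotational style of analysis described above can be miniaturized: word meanings are modeled as entities and functions over a tiny model, and the truth value of a sentence falls out of function application. The model below (who runs, who sees whom) is invented purely for illustration.

```python
# A tiny model: extensions of the predicates "runs" and "sees".
runs_set = {"alice"}
sees_set = {("bob", "alice")}   # (subject, object) pairs

# Denotations: proper names denote entities; an intransitive verb denotes a
# function from entities to truth values; a transitive verb is curried,
# taking its object first and then its subject.
alice = "alice"
bob = "bob"
runs = lambda x: x in runs_set
sees = lambda y: (lambda x: (x, y) in sees_set)

# Sentence meanings by function application:
print(runs(alice))        # "Alice runs" -> True
print(sees(alice)(bob))   # "Bob sees Alice" -> True
print(runs(bob))          # "Bob runs" -> False
```

The curried transitive verb mirrors the syntax: the verb first combines with its object to form a verb phrase, which then combines with the subject, so composition of meanings tracks composition of constituents.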
The use of literature in second language teaching has been advocated for a number of years, yet despite this there have only been a limited number of studies which have sought to investigate its effects. Fewer still have focused on its potential effects as a model of spoken language or as a vehicle to develop speaking skills. Drawing upon multiple research studies, this volume fills that gap to explore how literature is used to develop speaking skills in second language learners. The volume is divided into two sections: literature and spoken language and literature and speaking skills. The first section focuses on studies exploring the use of literature to raise awareness of spoken language features, whilst the second investigates its potential as a vehicle to develop speaking skills. Each section contains studies with different designs and in various contexts including China, Japan and the UK. The research designs used mean that the chapters contain clear implications for classroom pedagogy and research in different contexts.
In this pioneering book Katarzyna Jaszczolt lays down the
foundations of an original theory of meaning in discourse, reveals
the cognitive foundations of discourse interpretation, and puts
forward a new basis for the analysis of discourse processing. She
provides a step-by-step introduction to the theory and its
application, and explains new terms and formalisms as required. Dr.
Jaszczolt unites the precision of truth-conditional, dynamic
approaches with insights from neo-Gricean pragmatics into the role
of speaker's intentions in communication. She shows that the
compositionality of meaning may be understood as merger
representations combining information from various sources
including word meaning and sentence structure, various kinds of
default interpretations, and conscious pragmatic inference.
This book constitutes the refereed proceedings of the 7th International Conference on Computational Linguistics and Intelligent Text Processing, held in February 2006. The 43 revised full papers and 16 revised short papers presented together with three invited papers were carefully reviewed and selected from 176 submissions. The papers are structured into two parts and organized in topical sections on computational linguistics research.
This reader collects and introduces important work in linguistics, computer science, artificial intelligence, and computational linguistics on the use of linguistic devices in natural languages to situate events in time: whether they are past, present, or future; whether they are real or hypothetical; when an event might have occurred, and how long it could have lasted. In focussing on the treatment and retrieval of time-based information it seeks to lay the foundation for temporally-aware natural language computer processing systems, for example those that process documents on the worldwide web to answer questions or produce summaries. The development of such systems requires the application of technical knowledge from many different disciplines. The book is the first to bring these disciplines together, by means of classic and contemporary papers in four areas: tense, aspect, and event structure; temporal reasoning; the temporal structure of natural language discourse; and temporal annotation. Clear, self-contained editorial introductions to each area provide the necessary technical background for the non-specialist, explaining the underlying connections across disciplines. A wide range of students and professionals in academia and industry will value this book as an introduction and guide to a new and vital technology. The former include researchers, students, and teachers of natural language processing, linguistics, artificial intelligence, computational linguistics, computer science, information retrieval (including the growing speciality of question-answering), library sciences, human-computer interaction, and cognitive science. Those in industry include corporate managers and researchers, software product developers, and engineers in information-intensive companies, such as on-line database and web-service providers.
The structure and properties of any natural language expression depend on its component sub-expressions - "resources" - and relations among them that are sensitive to basic structural properties of order, grouping, and multiplicity. Resource-sensitivity thus provides a perspective on linguistic structure that is well-defined and universally applicable. The papers in this collection - by J. van Benthem, P. Jacobson, G. Jäger, G-J. Kruijff, G. Morrill, R. Muskens, R. Oehrle, and A. Szabolcsi - examine linguistic resources and resource-sensitivity from a variety of perspectives, including modal aspects of categorial type inference. In particular, the book contains a number of papers treating anaphorically-dependent expressions as functions, whose application to an appropriate argument yields a type and an interpretation directly integratable with the surrounding grammatical structure. To situate this work in a larger setting, the book contains two appendices, among them an introductory guide to resource-sensitivity.
CICLing 2004 was the 5th Annual Conference on Intelligent Text Processing and Computational Linguistics; see www.CICLing.org. CICLing conferences are intended to provide a balanced view of the cutting-edge developments in both the theoretical foundations of computational linguistics and the practice of natural language text processing with its numerous applications. A feature of CICLing conferences is their wide scope, covering nearly all areas of computational linguistics and all aspects of natural language processing applications. These conferences are a forum for dialogue between the specialists working in the two areas. This year we were honored by the presence of our invited speakers Martin Kay of Stanford University, Philip Resnik of the University of Maryland, Ricardo Baeza-Yates of the University of Chile, and Nick Campbell of the ATR Spoken Language Translation Research Laboratories. They delivered excellent extended lectures and organized vivid discussions. Of 129 submissions received (74 full papers and 44 short papers), after careful international reviewing 74 papers were selected for presentation (40 full papers and 35 short papers), written by 176 authors from 21 countries: Korea (37), Spain (34), Japan (22), Mexico (15), China (11), Germany (10), Ireland (10), UK (10), Singapore (6), Canada (3), Czech Rep. (3), France (3), Brazil (2), Sweden (2), Taiwan (2), Turkey (2), USA (2), Chile (1), Romania (1), Thailand (1), and The Netherlands (1); the figures in parentheses stand for the number of authors from the corresponding country.
CICLing 2002 was the third annual Conference on Intelligent text processing and Computational Linguistics (hence the name CICLing); see www.CICLing.org. It was intended to provide a balanced view of the cutting-edge developments in both the theoretical foundations of computational linguistics and the practice of natural language text processing with its numerous applications. A feature of CICLing conferences is their wide scope, covering nearly all areas of computational linguistics and all aspects of natural language processing applications. The conference is a forum for dialogue between the specialists working in these two areas. This year we were honored by the presence of our invited speakers Nicoletta Calzolari (Inst. for Computational Linguistics, Italy), Ruslan Mitkov (U. of Wolverhampton, UK), Ivan Sag (Stanford U., USA), Yorick Wilks (U. of Sheffield), and Antonio Zampolli (Inst. for Computational Linguistics, Italy). They delivered excellent extended lectures and organized vivid discussions. Of the 67 submissions received, after careful reviewing 48 were selected for presentation, 35 as full papers and 13 as short papers, by 98 authors from 19 countries: Spain (18 authors), Mexico (13), Japan and UK (8 each), Israel (7), Germany, Italy, and USA (6 each), Switzerland (5), Taiwan (4), Ireland (3), Australia, China, Czech Rep., France, and Russia (2 each), and Bulgaria, Poland, and Romania (1 each).
This book constitutes the refereed proceedings of the 4th International Conference on Computational Linguistics and Intelligent Text Processing, CICLing 2003, held in Mexico City, Mexico in February 2003. The 67 revised papers presented together with 4 keynote papers were carefully reviewed and selected from 92 submissions. The papers are organized in topical sections on computational linguistics formalisms; semantics and discourse; syntax and POS tagging; parsing techniques; morphology; word sense disambiguation; dictionary, lexicon, and ontology; corpus and language statistics; machine translation and bilingual corpora; text generation; natural language interfaces; speech processing; information retrieval and information extraction; text categorization and clustering; summarization; and spell-checking.
This book presents recent advances by leading researchers in computational modelling of language acquisition. The contributors, from departments of linguistics, cognitive science, psychology, and computer science, combine powerful computational techniques with real data and in doing so throw new light on the operations of the brain and the mind. They explore the extent to which linguistic structure is innate and/or available in a child's environment, and the degree to which language learning is inductive or deductive. They assess the explanatory power of different models. The book will appeal to all those working in language acquisition.
CICLing 2001 is the second annual Conference on Intelligent text processing and Computational Linguistics (hence the name CICLing), see www.CICLing.org. It is intended to provide a balanced view of the cutting edge developments in both theoretical foundations of computational linguistics and practice of natural language text processing with its numerous applications. A feature of the CICLing conferences is their wide scope that covers nearly all areas of computational linguistics and all aspects of natural language processing applications. The conference is a forum for dialogue between the specialists working in these two areas. This year our invited speakers were Graeme Hirst (U. Toronto, Canada), Sylvain Kahane (U. Paris 7, France), and Ruslan Mitkov (U. Wolverhampton, UK). They delivered excellent extended lectures and organized vivid discussions. A total of 72 submissions were received, all but very few of surprisingly high quality. After careful reviewing, the Program Committee selected for presentation 53 of them, 41 as full papers and 12 as short papers, by 98 authors from 19 countries: Spain (19 authors), Japan (15), USA (12), France, Mexico (9 each), Sweden (6), Canada, China, Germany, Italy, Malaysia, Russia, United Arab Emirates (3 each), Argentina (2), Bulgaria, The Netherlands, Ukraine, UK, and Uruguay (1 each).
This book constitutes the refereed proceedings of the 4th
International Conference on Text, Speech and Dialogue, TSD 2001,
held in Zelezna Ruda, Czech Republic in September 2001.
This book constitutes the refereed proceedings of the scientific
track of the 7th Congress of the Italian Association for Artificial
Intelligence, AI*IA 2001, held in Bari, Italy, in September
2001.
Anaphora is a central topic in syntax, semantics, and pragmatics, and in the interface between them. It is the subject of advanced undergraduate and graduate courses in linguistics and computational linguistics. In this book, Yan Huang provides an extensive and accessible overview of the major contemporary issues surrounding anaphora and gives a critical survey of the many and diverse contemporary approaches to it. He also provides by far the fullest cross-linguistic account of anaphora yet published.
This innovative book develops a formal computational theory of writing systems and relates it to psycholinguistic results. Drawing on case studies of writing systems around the world, it offers specific proposals about the linguistic objects that are represented by orthographic elements and the formal constraints that hold of the mapping relation between them. Based on the insights gained, it posits a new taxonomy of writing systems. The book will be of interest to students and researchers in theoretical and computational linguistics, the psycholinguistics of reading and writing, and speech technology.
Understanding any communication depends on the listener or reader recognizing that some words refer to what has already been said or written (his, its, he, there, etc.). This mode of reference, anaphora, involves complicated cognitive and syntactic processes, which people usually perform unerringly, but which present formidable problems for the linguist and cognitive scientist trying to explain precisely how comprehension is achieved. Anaphora is thus a central research focus in syntactic and semantic theory, while understanding and modelling its operation in discourse are important targets in computational linguistics and cognitive science. Yan Huang provides an extensive and accessible overview of the major contemporary issues surrounding anaphora and gives a critical survey of the many and diverse contemporary approaches to it. He provides by far the fullest cross-linguistic account yet published: Dr Huang's survey and analysis are based on a rich collection of data drawn from around 450 of the world's languages.
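One ingredient in computational treatments of anaphora can be shown in miniature: resolving a pronoun to the most recent preceding mention that agrees with it in gender. This toy sketch ignores number, syntax, and salience; the tiny lexicon and discourse below are invented for illustration only.

```python
# Hypothetical gender features for mentions and pronouns.
GENDER = {"John": "m", "Mary": "f", "the car": "n"}
PRONOUNS = {"he": "m", "she": "f", "it": "n"}

def resolve(pronoun, mentions):
    """Return the most recent prior mention agreeing with the pronoun, else None."""
    for candidate in reversed(mentions):   # scan backwards: recency preference
        if GENDER.get(candidate) == PRONOUNS[pronoun]:
            return candidate
    return None

discourse = ["John", "the car", "Mary"]   # mentions in order of occurrence
print(resolve("she", discourse))  # Mary
print(resolve("it", discourse))   # the car
print(resolve("he", discourse))   # John
```

Even this crude recency-plus-agreement filter resolves many simple cases correctly, which is precisely why the hard residue (bound variables, long-distance reflexives, bridging) makes anaphora such a rich research topic.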
One of the most hotly debated phenomena in natural language is that of leftward argument scrambling. This book investigates the properties of Hindi-Urdu scrambling to show that it must be analyzed as uniformly a focality-driven XP-adjunction operation. It proposes a novel theory of binding and coreference that not only derives the coreference effects in scrambled constructions, but has important consequences for the proper formulation of binding, crossover, reconstruction, and representational economy in the minimalist program. The book will be of interest not only to specialists in Hindi-Urdu syntax and/or scrambling, but to all students of generative syntax.
Information extraction (IE) is a new technology enabling relevant content to be extracted from textual information available electronically. IE essentially builds on natural language processing and computational linguistics, but it is also closely related to the well established area of information retrieval and involves learning. In concert with other promising intelligent information processing technologies like data mining, intelligent data analysis, text summarization, and information agents, IE plays a crucial role in dealing with the vast amounts of information accessible electronically, for example from the Internet. The book is based on the Second International School on Information Extraction, SCIE-99, held in Frascati near Rome, Italy in June/July 1999.
This volume collects landmark research in a burgeoning field of visual analytics for linguistics, called LingVis. Combining linguistic data and linguistically oriented research questions with techniques and methodologies developed in the computer science fields of visual analytics and information visualization, LingVis is motivated by the growing need within linguistic research for dealing with large amounts of complex, multidimensional data sets. An innovative exploration into the future of LingVis in the digital age, this foundational book both provides a representation of the current state of the field and communicates its new possibilities for addressing complex linguistic questions across the larger linguistic community.
From tech giants to plucky startups, the world is full of companies boasting that they are on their way to replacing human interpreters, but are they right? Interpreters vs Machines offers a solid introduction to recent theory and research on human and machine interpreting, and then invites the reader to explore the future of interpreting. With a foreword by Dr Henry Liu, the 13th International Federation of Translators (FIT) President, and written by consultant interpreter and researcher Jonathan Downie, this book offers a unique combination of research and practical insight into the field of interpreting. Written in an innovative, accessible style with humorous touches and real-life case studies, this book is structured around the metaphor of playing and winning a computer game. It takes interpreters of all experience levels on a journey to better understand their own work, learn how computers attempt to interpret and explore possible futures for human interpreters. With five levels and split into 14 chapters, Interpreters vs Machines is key reading for all professional interpreters as well as students and researchers of Interpreting and Translation Studies, and those with an interest in machine interpreting.
The goal of this book is to integrate the research being carried out in the field of lexical semantics in linguistics with the work on knowledge representation and lexicon design in computational linguistics. Rarely do these two camps meet and discuss the demands and concerns of each other's fields. This book is therefore valuable in that it provides a stimulating and unique discussion between the computational perspective on lexical meaning and the concerns of the linguist for the semantic description of lexical items in the context of syntactic descriptions. The book grew out of the papers presented at a workshop held at Brandeis University in April 1988, funded by the American Association for Artificial Intelligence. The entire workshop, as well as the discussion periods accompanying each talk, was recorded. Once complete copies of each paper were available, they were distributed to participants, who were asked to provide written comments on the texts for review purposes. There is currently a growing interest in the content of lexical entries from a theoretical perspective, as well as a growing need to understand the organization of the lexicon from a computational view. This volume attempts to define the directions that need to be taken in order to achieve the goal of a coherent theory of lexical organization.
The lexicon is now a major focus of research in computational linguistics and natural language processing (NLP), as more linguistic theories concentrate on the lexicon and as the acquisition of an adequate vocabulary has become the chief bottleneck in developing practical NLP systems. This collection describes techniques of lexical representation within a unification-based framework and their linguistic application, concentrating on the issue of structuring the lexicon using inheritance and defaults. Topics covered include typed feature structures, default unification, lexical rules, multiple inheritance and non-monotonic reasoning. The contributions describe both theoretical results and implemented languages and systems, including DATR, the Stuttgart TFS and ISSCO's ELU. This book arose out of a workshop on default inheritance in the lexicon organized as a part of the Esprit ACQUILEX project on computational lexicography. Besides the contributed papers mentioned above, it contains a detailed description of the ACQUILEX lexical knowledge base (LKB) system and its use in the representation of lexicons extracted semi-automatically from machine-readable dictionaries.
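The core idea of structuring a lexicon with inheritance and defaults can be sketched very simply: an entry inherits features from its ancestors in a hierarchy, and a more specific entry may override an inherited default. This is a minimal illustration of the general principle; the class names and features below are invented, not taken from DATR, TFS, or ELU.

```python
# A toy lexical hierarchy: each entry names a parent and its own features.
LEXICON = {
    "verb":       {"parent": None,   "cat": "V", "past": "+ed"},  # default past
    "trans-verb": {"parent": "verb", "subcat": "NP"},
    "walk":       {"parent": "verb"},
    "take":       {"parent": "trans-verb", "past": "took"},       # irregular: overrides
}

def features(entry):
    """Collect an entry's features by walking up the hierarchy; more
    specific entries override the defaults of their ancestors."""
    chain, node = [], entry
    while node is not None:
        chain.append(LEXICON[node])
        node = LEXICON[node]["parent"]
    result = {}
    for frame in reversed(chain):   # apply ancestors first, entry last
        result.update({k: v for k, v in frame.items() if k != "parent"})
    return result

print(features("walk"))  # inherits the default past tense "+ed"
print(features("take"))  # overrides it with the irregular "took"
```

Real systems replace the dictionary update with default unification over typed feature structures, but the override behavior is the same: regular morphology is stated once, high in the hierarchy, and only exceptions are listed at the leaves.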