Yorick Wilks is a central figure in the fields of Natural Language Processing and Artificial Intelligence. This book celebrates Wilks's career from the perspective of his peers in original chapters, each of which analyses an aspect of his work and links it to current thinking in that area. This volume forms a two-part set together with Words and Intelligence I: Selected Works by Yorick Wilks, by the same editors.
This book discusses how Type Logical Grammar can be modified in such a way that a systematic treatment of anaphora phenomena becomes possible without giving up the general architecture of this framework. By Type Logical Grammar, I mean the version of Categorial Grammar that arose out of the work of Lambek, 1958 and Lambek, 1961. There Categorial types are analyzed as formulae of a logical calculus. In particular, the Categorial slashes are interpreted as forms of constructive implication in the sense of Intuitionistic Logic. Such a theory of grammar is per se attractive for a formal linguist who is interested in the interplay between formal logic and the structure of language. What makes Lambek-style Categorial Grammar even more exciting is the fact that (as van Benthem, 1983 points out) the Curry-Howard correspondence - a central part of mathematical proof theory which establishes a deep connection between constructive logics and the λ-calculus - supplies the type logical syntax with an extremely elegant and independently motivated interface to model-theoretic semantics. Prima facie, anaphora does not fit very well into the Categorial picture of the syntax-semantics interface. The Curry-Howard based composition of meaning operates in a local way, and meaning assembly is linear, i.e., every piece of lexical meaning is used exactly once. Anaphora, on the other hand, is in principle unbounded, and it involves by definition the multiple use of certain semantic resources. The latter problem has been tackled by several Categorial grammarians by assuming sufficiently complex lexical meanings for anaphoric expressions, but the locality problem is not easy to solve in a purely lexical way.
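As a rough illustration of the picture sketched above (a minimal sketch, not taken from the book; the toy lexicon and model below are invented for the example), the following Haskell fragment shows Curry-Howard-style meaning assembly as function application over typed lexical meanings, and how a reflexive pronoun with a duplicating lexical meaning reuses a semantic resource and so breaks linearity:

```haskell
-- Toy Curry-Howard style composition: Categorial types map to semantic
-- types, and meaning assembly is function application.  All names here
-- (Entity, john, sleeps, likes, everyone, himself) are invented.
module Main where

data Entity = John | Mary deriving (Eq, Show, Enum, Bounded)

domain :: [Entity]
domain = [minBound .. maxBound]

-- np ~ Entity,  np\s ~ Entity -> Bool,  (np\s)/np ~ Entity -> Entity -> Bool
john :: Entity
john = John

sleeps :: Entity -> Bool
sleeps e = e == John

likes :: Entity -> Entity -> Bool
likes subj obj = (subj, obj) == (John, Mary)

-- Subject generalized quantifier, type s/(np\s).
everyone :: (Entity -> Bool) -> Bool
everyone p = all p domain

-- Linear assembly: each lexical meaning is used exactly once.
s1, s2 :: Bool
s1 = sleeps john        -- "John sleeps"
s2 = everyone sleeps    -- "Everyone sleeps"

-- Anaphora is non-linear: "John likes himself" uses the subject twice.
-- A duplicating lexical meaning for the reflexive is one instance of the
-- "sufficiently complex lexical meanings" strategy mentioned above.
himself :: (Entity -> Entity -> Bool) -> Entity -> Bool
himself rel x = rel x x

s3 :: Bool
s3 = himself likes john -- "John likes himself"

main :: IO ()
main = mapM_ print [s1, s2, s3]
```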
Computational semantics is the art and science of computing meaning in natural language. The meaning of a sentence is derived from the meanings of the individual words in it, and this process can be made so precise that it can be implemented on a computer. Designed for students of linguistics, computer science, logic and philosophy, this comprehensive text shows how to compute meaning using the functional programming language Haskell. It deals with both denotational meaning (where meaning comes from knowing the conditions of truth in situations), and operational meaning (where meaning is an instruction for performing cognitive action). Including a discussion of recent developments in logic, it will be invaluable to linguistics students wanting to apply logic to their studies, logic students wishing to learn how their subject can be applied to linguistics, and functional programmers interested in natural language processing as a new application area.
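A minimal sketch of the denotational side described here, assuming an invented toy model and logical-form language (none of this is the textbook's own code): the meaning of a sentence is computed by evaluating its logical form against the model.

```haskell
-- Denotational meaning as truth conditions: a tiny logical-form language
-- evaluated against an invented model.  Predicates and facts are made up.
module Main where

data Entity = Alice | Bob | Carol deriving (Eq, Show, Enum, Bounded)

domain :: [Entity]
domain = [minBound .. maxBound]

-- Logical forms, with quantifiers binding via Haskell functions.
data Formula
  = Pred String Entity
  | Not Formula
  | And Formula Formula
  | Forall (Entity -> Formula)
  | Exists (Entity -> Formula)

-- The model: which entities each predicate holds of.
interp :: String -> Entity -> Bool
interp "smiles" e = e `elem` [Alice, Bob]
interp "sleeps" e = e == Carol
interp _        _ = False

-- Computing meaning: evaluate a formula to a truth value in the model.
eval :: Formula -> Bool
eval (Pred p e) = interp p e
eval (Not f)    = not (eval f)
eval (And f g)  = eval f && eval g
eval (Forall k) = all (eval . k) domain
eval (Exists k) = any (eval . k) domain

someoneSmiles, everyoneSmiles :: Bool
someoneSmiles  = eval (Exists (\x -> Pred "smiles" x))  -- True in this model
everyoneSmiles = eval (Forall (\x -> Pred "smiles" x))  -- False: Carol does not

main :: IO ()
main = mapM_ print [someoneSmiles, everyoneSmiles]
```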
The ninth campaign of the Cross-Language Evaluation Forum (CLEF) for European languages was held from January to September 2008. There were seven main evaluation tracks in CLEF 2008 plus two pilot tasks. The aim, as usual, was to test the performance of a wide range of multilingual information access (MLIA) systems or system components. This year, 100 groups, mainly but not only from academia, participated in the campaign. Most of the groups were from Europe but there was also a good contingent from North America and Asia plus a few participants from South America and Africa. Full details regarding the design of the tracks, the methodologies used for evaluation, and the results obtained by the participants can be found in the different sections of these proceedings. The results of the CLEF 2008 campaign were presented at a two-and-a-half day workshop held in Aarhus, Denmark, September 17-19, and attended by 150 researchers and system developers. The annual workshop, held in conjunction with the European Conference on Digital Libraries, plays an important role by providing the opportunity for all the groups that have participated in the evaluation campaign to get together, comparing approaches and exchanging ideas. The schedule of the workshop was divided between plenary track overviews, and parallel, poster and breakout sessions presenting this year's experiments and discussing ideas for the future. There were several invited talks.
TSD 2009 was the 12th event in the series of International Conferences on Text, Speech and Dialogue supported by the International Speech Communication Association (ISCA) and the Czech Society for Cybernetics and Informatics (CSKI). This year, TSD was held in Plzeň (Pilsen), in the Primavera Conference Center, during September 13-17, 2009, and it was organized by the University of West Bohemia in Plzeň in cooperation with Masaryk University of Brno, Czech Republic. Like its predecessors, TSD 2009 highlighted to both the academic and scientific world the importance of text and speech processing and its most recent breakthroughs in current applications. Both experienced researchers and professionals as well as newcomers to the text and speech processing field, interested in designing or evaluating interactive software, developing new interaction technologies, or investigating overarching theories of text and speech processing, found in the TSD conference a forum to communicate with people sharing similar interests. The conference is an interdisciplinary forum, intertwining research in speech and language processing with its applications in everyday practice. We feel that the mixture of different approaches and applications offered a great opportunity to get acquainted with current activities in all aspects of language communication and to witness the amazing vitality of researchers from developing countries too. This year's conference was partially oriented toward semantic processing, which was chosen as the main topic of the conference. All invited speakers (Frederick Jelinek, Louise Guthrie, Roberto Pieraccini, Tilman Becker, and Elmar Nöth) gave lectures on the newest results in the relatively broad and still unexplored area of semantic processing.
The use of literature in second language teaching has been advocated for a number of years, yet despite this there have only been a limited number of studies which have sought to investigate its effects. Fewer still have focused on its potential effects as a model of spoken language or as a vehicle to develop speaking skills. Drawing upon multiple research studies, this volume fills that gap to explore how literature is used to develop speaking skills in second language learners. The volume is divided into two sections: literature and spoken language and literature and speaking skills. The first section focuses on studies exploring the use of literature to raise awareness of spoken language features, whilst the second investigates its potential as a vehicle to develop speaking skills. Each section contains studies with different designs and in various contexts including China, Japan and the UK. The research designs used mean that the chapters contain clear implications for classroom pedagogy and research in different contexts.
This book teaches the principles of natural language processing and covers linguistic issues. It also details the language-processing functions involved, including part-of-speech tagging using rules and stochastic techniques. A key feature of the book is the author's hands-on approach throughout, with extensive exercises, sample code in Prolog and Perl, and a detailed introduction to Prolog. The book is suitable for researchers and students of natural language processing and computational linguistics.
The relation between ontologies and language is currently at the forefront of natural language processing (NLP). Ontologies, as widely used models in semantic technologies, have much in common with the lexicon. A lexicon organizes words as a conventional inventory of concepts, while an ontology formalizes concepts and their logical relations. A shared lexicon is the prerequisite for knowledge-sharing through language, and a shared ontology is the prerequisite for knowledge-sharing through information technology. In building models of language, computational linguists must be able to accurately map the relations between words and the concepts that they can be linked to. This book focuses on the technology involved in enabling integration between lexical resources and semantic technologies. It will be of interest to researchers and graduate students in NLP, computational linguistics, and knowledge engineering, as well as in semantics, psycholinguistics, lexicology and morphology/syntax.
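A minimal sketch, under invented names, of the lexicon/ontology division of labour described above: the lexicon maps word forms to concepts, the ontology supplies logical (is-a) relations between concepts, and the word-to-concept mapping lets the two be used together.

```haskell
-- Lexicon vs. ontology, in miniature: the lexicon maps word forms to
-- concepts; the ontology states is-a relations between concepts.
-- All words and concepts below are invented for the example.
module Main where

type Concept = String

-- Ontology: direct is-a links.
isaLinks :: [(Concept, Concept)]
isaLinks =
  [ ("Dog",    "Mammal")
  , ("Cat",    "Mammal")
  , ("Mammal", "Animal")
  ]

-- Lexicon: conventional word-to-concept mapping (possibly many-to-one).
lexicon :: [(String, Concept)]
lexicon =
  [ ("dog",   "Dog")
  , ("hound", "Dog")
  , ("cat",   "Cat")
  ]

-- Reflexive-transitive subsumption over the is-a links.
subsumedBy :: Concept -> Concept -> Bool
subsumedBy c d
  | c == d    = True
  | otherwise = any (`subsumedBy` d) [p | (x, p) <- isaLinks, x == c]

-- Can this word be used to talk about instances of that concept?
wordDenotes :: String -> Concept -> Bool
wordDenotes w d = maybe False (`subsumedBy` d) (lookup w lexicon)

main :: IO ()
main = do
  print (wordDenotes "hound" "Animal")  -- True: hound -> Dog -> Mammal -> Animal
  print (wordDenotes "cat" "Dog")       -- False
```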
This volume presents the proceedings of the Third International Sanskrit Computational Linguistics Symposium hosted by the University of Hyderabad, Hyderabad, India, during January 15-17, 2009. The series of symposia on Sanskrit Computational Linguistics began in 2007. The first symposium was hosted by INRIA at Rocquencourt, France, in October 2007 as a part of the joint collaboration between INRIA and the University of Hyderabad. This joint collaboration expanded both geographically as well as academically, covering more facets of Sanskrit Computational Linguistics, when the second symposium was hosted by Brown University, USA in May 2008. We received 16 submissions, which were reviewed by the members of the Program Committee. After discussion, nine of them were selected for presentation. These nine papers fall under four broad categories: four papers deal with the structure of Pāṇini's Aṣṭādhyāyī, two deal with parsing issues, two with various aspects of machine translation, and the last one with the Web concordance of an important Sanskrit text. If we look retrospectively over the last two years, the three symposia in succession have seen not only continuity of some of the themes, but also steady growth of the community. As is evident, researchers from diverse disciplines such as linguistics, computer science, philology, and vyākaraṇa are collaborating with the scholars from other disciplines, witnessing the growth of Sanskrit computational linguistics as an emergent discipline. We are grateful to S.D. Joshi, Jan Houben, and K.V.R. Krishnamacharyulu for accepting our invitation to deliver the invited speeches.
The annual Text, Speech and Dialogue Conference (TSD), which originated in 1998, is now starting its second decade. So far almost 900 authors from 45 countries have contributed to the proceedings. TSD constitutes a recognized platform for the presentation and discussion of state-of-the-art technology and recent achievements in the field of natural language processing. It has become an interdisciplinary forum, interweaving the themes of speech technology and language processing. The conference attracts researchers not only from Central and Eastern Europe, but also from other parts of the world. Indeed, one of its goals has always been to bring together NLP researchers with different interests from different parts of the world and to promote their mutual cooperation. One of the ambitions of the conference is, as its title says, not only to deal with dialogue systems as such, but also to contribute to improving dialogue between researchers in the two areas of NLP, i.e., between text and speech people. In our view, the TSD conference was successful in this respect in 2008 as well. This volume contains the proceedings of the 11th TSD conference, held in Brno, Czech Republic in September 2008. Following the review process, 79 papers were accepted out of 173 submitted, an acceptance rate of 45.7%.
The International Conference on Computational Processing of Portuguese, formerly the Workshop on Computational Processing of the Portuguese Language - PROPOR - is the main event in the area of Natural Language Processing that focuses on Portuguese and the theoretical and technological issues related to this specific language. The meeting has been a very rich forum for the interchange of ideas and partnerships for the research communities dedicated to the automated processing of the Portuguese language. This year's PROPOR, the first one to adopt the International Conference label, followed workshops held in Lisbon, Portugal (1993), Curitiba, Brazil (1996), Porto Alegre, Brazil (1998), Évora, Portugal (1999), Atibaia, Brazil (2000), Faro, Portugal (2003) and Itatiaia, Brazil (2006). The constitution of a steering committee (PROPOR Committee), an international program committee, the adoption of high-standard refereeing procedures and the support of the prestigious ACL and ISCA international associations demonstrate the steady development of the field and of its scientific community. A total of 63 papers were submitted to PROPOR 2008. Each submitted paper received a careful, triple-blind review by the program committee or by reviewers they appointed. All those who contributed are mentioned on the following pages. The reviewing process led to the selection of 21 regular papers for oral presentation and 16 short papers for poster sessions. The workshop and this book were structured around the following main topics: Speech Analysis; Ontologies, Semantics and Anaphora Resolution; Speech Synthesis; Machine Learning Applied to Natural Language Processing; Speech Recognition and Natural Language Processing Tools and Applications. Short papers and related posters were organized according to the two main areas of PROPOR: Natural Language Processing and Speech Technology.
This book provides an in-depth view of the current issues, problems and approaches in the computation of meaning as expressed in language. Aimed at linguists, computer scientists, and logicians with an interest in the computation of meaning, this book focuses on two main topics in recent research in computational semantics. The first topic is the definition and use of underspecified semantic representations, i.e. formal structures that represent part of the meaning of a linguistic object while leaving other parts unspecified. The second topic discussed is semantic annotation. Annotated corpora have become an indispensable resource both for linguists and for developers of language and speech technology, especially when used in combination with machine learning methods. The annotation in corpora has only marginally addressed semantic information, however, since semantic annotation methodologies are still in their infancy. This book discusses the development and application of such methodologies.
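As a hedged illustration of underspecification (an invented toy model, not the book's own formalism): the scope-taking parts of "Every student read a book" can be stored once, unordered, and each fully specified reading is obtained by choosing a scope order.

```haskell
-- Underspecified scope, in miniature: the scope-taking parts of
-- "Every student read a book" are stored unordered; each reading is one
-- way of ordering them.  The entities and facts are invented.
module Main where

data Entity = S1 | S2 | B1 | B2 deriving (Eq, Show, Enum, Bounded)

domain :: [Entity]
domain = [minBound .. maxBound]

student, book :: Entity -> Bool
student e = e `elem` [S1, S2]
book    e = e `elem` [B1, B2]

readRel :: Entity -> Entity -> Bool
readRel s b = (s, b) `elem` [(S1, B1), (S2, B2)]

-- The two scope-taking fragments, left unordered in the representation.
type Quant = (Entity -> Bool) -> Bool

everyStudent, aBook :: Quant
everyStudent p = all p (filter student domain)
aBook        p = any p (filter book domain)

-- Resolving the underspecification means fixing a scope order.
surfaceScope, inverseScope :: Bool
surfaceScope = everyStudent (\s -> aBook (\b -> readRel s b))  -- every > a
inverseScope = aBook (\b -> everyStudent (\s -> readRel s b))  -- a > every

main :: IO ()
main = mapM_ print [surfaceScope, inverseScope]  -- True, False in this model
```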
This volume constitutes the thoroughly refereed post-conference proceedings of the First and Second International Symposia on Sanskrit Computational Linguistics, held in Rocquencourt, France, in October 2007 and in Providence, RI, USA, in May 2008 respectively. The 11 revised full papers of the first and the 12 revised papers of the second symposium presented with an introduction and a keynote talk were carefully reviewed and selected from the lectures given at both events. The papers address several topics such as the structure of the Paninian grammatical system, computational linguistics, lexicography, lexical databases, formal description of Sanskrit grammar, phonology and morphology, machine translation, philology, and OCR.
In its nine chapters, this book provides an overview of the state-of-the-art and best practice in several sub-fields of evaluation of text and speech systems and components. The evaluation aspects covered include speech and speaker recognition, speech synthesis, animated talking agents, part-of-speech tagging, parsing, and natural language software like machine translation, information retrieval, question answering, spoken dialogue systems, data resources, and annotation schemes. With its broad coverage and original contributions this book is unique in the field of evaluation of speech and language technology. This book is of particular relevance to advanced undergraduate students, PhD students, academic and industrial researchers, and practitioners.
CICLing 2008 (www.CICLing.org) was the 9th Annual Conference on Intelligent Text Processing and Computational Linguistics. The CICLing conferences are intended to provide a wide-scope forum for the discussion of both the art and craft of natural language processing research and the best practices in its applications. This volume contains the papers accepted for oral presentation at the conference, as well as several of the best papers accepted for poster presentation. Other papers accepted for poster presentation were published in special issues of other journals (see the information on the website). Since 2001 the CICLing proceedings have been published in Springer's Lecture Notes in Computer Science series, as volumes 2004, 2276, 2588, 2945, 3406, 3878, and 4394. The book consists of 12 sections, representative of the main tasks and applications of Natural Language Processing:
- Language resources
- Morphology and syntax
- Semantics and discourse
- Word sense disambiguation and named entity recognition
- Anaphora and co-reference
- Machine translation and parallel corpora
- Natural language generation
- Speech recognition
- Information retrieval and question answering
- Text classification
- Text summarization
- Spell checking and authoring aid
A total of 204 papers by 438 authors from 39 countries were submitted for evaluation (see Tables 1 and 2). Each submission was reviewed by at least two independent Program Committee members. This volume contains revised versions of 52 papers by 129 authors from 24 countries selected for inclusion in the conference program (the acceptance rate was 25.5%).
In this pioneering book Katarzyna Jaszczolt lays down the foundations of an original theory of meaning in discourse, reveals the cognitive foundations of discourse interpretation, and puts forward a new basis for the analysis of discourse processing. She provides a step-by-step introduction to the theory and its application, and explains new terms and formalisms as required. Dr. Jaszczolt unites the precision of truth-conditional, dynamic approaches with insights from neo-Gricean pragmatics into the role of speaker's intentions in communication. She shows that the compositionality of meaning may be understood as merger representations combining information from various sources including word meaning and sentence structure, various kinds of default interpretations, and conscious pragmatic inference.
This book constitutes the thoroughly refereed post-proceedings of the Joint Chinese-German Workshop on Cognitive Systems held in Shanghai in March 2005. The 13 revised papers presented were carefully reviewed and selected from numerous submissions for inclusion in the book. The workshop served to present the current state of the art in the new transdiscipline of cognitive systems, which is emerging from computer science, the neurosciences, computational linguistics, neural networks and the new philosophy of mind. The papers are organized in topical sections on multimodal human-computer interfaces, neuropsychology and neurocomputing, Chinese-German natural language processing and psycholinguistics, as well as information processing and retrieval from the semantic Web for intelligent applications.
This book constitutes the refereed proceedings of the 7th International Conference on Computational Linguistics and Intelligent Text Processing, held in February 2006. The 43 revised full papers and 16 revised short papers presented together with three invited papers were carefully reviewed and selected from 176 submissions. The papers are structured into two parts and organized in topical sections on computational linguistics research.
This reader collects and introduces important work in linguistics, computer science, artificial intelligence, and computational linguistics on the use of linguistic devices in natural languages to situate events in time: whether they are past, present, or future; whether they are real or hypothetical; when an event might have occurred, and how long it could have lasted. In focussing on the treatment and retrieval of time-based information it seeks to lay the foundation for temporally-aware natural language computer processing systems, for example those that process documents on the worldwide web to answer questions or produce summaries. The development of such systems requires the application of technical knowledge from many different disciplines. The book is the first to bring these disciplines together, by means of classic and contemporary papers in four areas: tense, aspect, and event structure; temporal reasoning; the temporal structure of natural language discourse; and temporal annotation. Clear, self-contained editorial introductions to each area provide the necessary technical background for the non-specialist, explaining the underlying connections across disciplines. A wide range of students and professionals in academia and industry will value this book as an introduction and guide to a new and vital technology. The former include researchers, students, and teachers of natural language processing, linguistics, artificial intelligence, computational linguistics, computer science, information retrieval (including the growing speciality of question-answering), library sciences, human-computer interaction, and cognitive science. Those in industry include corporate managers and researchers, software product developers, and engineers in information-intensive companies, such as on-line database and web-service providers.
The structure and properties of any natural language expression depend on its component sub-expressions - "resources" - and relations among them that are sensitive to basic structural properties of order, grouping, and multiplicity. Resource-sensitivity thus provides a perspective on linguistic structure that is well defined and universally applicable. The papers in this collection - by J. van Benthem, P. Jacobson, G. Jäger, G.-J. Kruijff, G. Morrill, R. Muskens, R. Oehrle, and A. Szabolcsi - examine linguistic resources and resource-sensitivity from a variety of perspectives, including modal aspects of categorial type inference. In particular, the book contains a number of papers treating anaphorically dependent expressions as functions, whose application to an appropriate argument yields a type and an interpretation directly integrable with the surrounding grammatical structure. To situate this work in a larger setting, the book contains two appendices, including an introductory guide to resource-sensitivity.
CICLing 2004 was the 5th Annual Conference on Intelligent Text Processing and Computational Linguistics; see www.CICLing.org. CICLing conferences are intended to provide a balanced view of the cutting-edge developments in both theoretical foundations of computational linguistics and the practice of natural language text processing with its numerous applications. A feature of CICLing conferences is their wide scope that covers nearly all areas of computational linguistics and all aspects of natural language processing applications. These conferences are a forum for dialogue between the specialists working in the two areas. This year we were honored by the presence of our invited speakers Martin Kay of Stanford University, Philip Resnik of the University of Maryland, Ricardo Baeza-Yates of the University of Chile, and Nick Campbell of the ATR Spoken Language Translation Research Laboratories. They delivered excellent extended lectures and organized vivid discussions. Of 129 submissions received (74 full papers and 44 short papers), after careful international reviewing 74 papers were selected for presentation (40 full papers and 35 short papers), written by 176 authors from 21 countries: Korea (37), Spain (34), Japan (22), Mexico (15), China (11), Germany (10), Ireland (10), UK (10), Singapore (6), Canada (3), Czech Rep. (3), France (3), Brazil (2), Sweden (2), Taiwan (2), Turkey (2), USA (2), Chile (1), Romania (1), Thailand (1), and The Netherlands (1); the figures in parentheses stand for the number of authors from the corresponding countries.
This book constitutes the refereed proceedings of the 4th International Conference on Computational Linguistics and Intelligent Text Processing, CICLing 2003, held in Mexico City, Mexico in February 2003. The 67 revised papers presented together with 4 keynote papers were carefully reviewed and selected from 92 submissions. The papers are organized in topical sections on computational linguistics formalisms; semantics and discourse; syntax and POS tagging; parsing techniques; morphology; word sense disambiguation; dictionary, lexicon, and ontology; corpus and language statistics; machine translation and bilingual corpora; text generation; natural language interfaces; speech processing; information retrieval and information extraction; text categorization and clustering; summarization; and spell-checking.
CICLing 2002 was the third annual Conference on Intelligent text processing and Computational Linguistics (hence the name CICLing); see www.CICLing.org. It was intended to provide a balanced view of the cutting-edge developments in both theoretical foundations of computational linguistics and practice of natural language text processing with its numerous applications. A feature of CICLing conferences is their wide scope that covers nearly all areas of computational linguistics and all aspects of natural language processing applications. The conference is a forum for dialogue between the specialists working in these two areas. This year we were honored by the presence of our invited speakers Nicoletta Calzolari (Inst. for Computational Linguistics, Italy), Ruslan Mitkov (U. of Wolverhampton, UK), Ivan Sag (Stanford U., USA), Yorick Wilks (U. of Sheffield), and Antonio Zampolli (Inst. for Computational Linguistics, Italy). They delivered excellent extended lectures and organized vivid discussions. Of 67 submissions received, after careful reviewing 48 were selected for presentation; of them, 35 as full papers and 13 as short papers; by 98 authors from 19 countries: Spain (18 authors), Mexico (13), Japan, UK (8 each), Israel (7), Germany, Italy, USA (6 each), Switzerland (5), Taiwan (4), Ireland (3), Australia, China, Czech Rep., France, Russia (2 each), Bulgaria, Poland, Romania (1 each).
This book constitutes the refereed proceedings of the scientific track of the 7th Congress of the Italian Association for Artificial Intelligence, AI*IA 2001, held in Bari, Italy, in September 2001.
CICLing 2001 is the second annual Conference on Intelligent text processing and Computational Linguistics (hence the name CICLing), see www.CICLing.org. It is intended to provide a balanced view of the cutting edge developments in both theoretical foundations of computational linguistics and practice of natural language text processing with its numerous applications. A feature of the CICLing conferences is their wide scope that covers nearly all areas of computational linguistics and all aspects of natural language processing applications. The conference is a forum for dialogue between the specialists working in these two areas. This year our invited speakers were Graeme Hirst (U. Toronto, Canada), Sylvain Kahane (U. Paris 7, France), and Ruslan Mitkov (U. Wolverhampton, UK). They delivered excellent extended lectures and organized vivid discussions. A total of 72 submissions were received, all but very few of surprisingly high quality. After careful reviewing, the Program Committee selected for presentation 53 of them, 41 as full papers and 12 as short papers, by 98 authors from 19 countries: Spain (19 authors), Japan (15), USA (12), France, Mexico (9 each), Sweden (6), Canada, China, Germany, Italy, Malaysia, Russia, United Arab Emirates (3 each), Argentina (2), Bulgaria, The Netherlands, Ukraine, UK, and Uruguay (1 each).