The annual Text, Speech and Dialogue Conference (TSD), which originated in 1998, is now starting its second decade. So far almost 900 authors from 45 countries have contributed to the proceedings. TSD constitutes a recognized platform for the presentation and discussion of state-of-the-art technology and recent achievements in the field of natural language processing. It has become an interdisciplinary forum, interweaving the themes of speech technology and language processing. The conference attracts researchers not only from Central and Eastern Europe, but also from other parts of the world. Indeed, one of its goals has always been to bring together NLP researchers with different interests from different parts of the world and to promote their mutual cooperation. One of the ambitions of the conference is, as its title says, not only to deal with dialogue systems as such, but also to contribute to improving dialogue between researchers in the two areas of NLP, i.e., between text and speech people. In our view, the TSD conference was successful in this respect in 2008 as well. This volume contains the proceedings of the 11th TSD conference, held in Brno, Czech Republic in September 2008. Following the review process, 79 papers were accepted out of 173 submitted, an acceptance rate of 45.7%.
The International Conference on Computational Processing of the Portuguese Language, formerly the Workshop on Computational Processing of the Portuguese Language (PROPOR), is the main event in the area of Natural Language Processing that focuses on Portuguese and the theoretical and technological issues related to this specific language. The meeting has been a very rich forum for the interchange of ideas and partnerships for the research communities dedicated to the automated processing of the Portuguese language. This year's PROPOR, the first one to adopt the International Conference label, followed workshops held in Lisbon, Portugal (1993), Curitiba, Brazil (1996), Porto Alegre, Brazil (1998), Evora, Portugal (1999), Atibaia, Brazil (2000), Faro, Portugal (2003) and Itatiaia, Brazil (2006). The constitution of a steering committee (PROPOR Committee), an international program committee, the adoption of high-standard refereeing procedures and the support of the prestigious ACL and ISCA international associations demonstrate the steady development of the field and of its scientific community. A total of 63 papers were submitted to PROPOR 2008. Each submitted paper received a careful, triple-blind review by the program committee or by reviewers they appointed. All those who contributed are mentioned on the following pages. The reviewing process led to the selection of 21 regular papers for oral presentation and 16 short papers for poster sessions. The workshop and this book were structured around the following main topics: Speech Analysis; Ontologies, Semantics and Anaphora Resolution; Speech Synthesis; Machine Learning Applied to Natural Language Processing; Speech Recognition and Natural Language Processing Tools and Applications. Short papers and related posters were organized according to the two main areas of PROPOR: Natural Language Processing and Speech Technology.
This book provides an in-depth view of the current issues, problems and approaches in the computation of meaning as expressed in language. Aimed at linguists, computer scientists, and logicians with an interest in the computation of meaning, this book focuses on two main topics in recent research in computational semantics. The first topic is the definition and use of underspecified semantic representations, i.e. formal structures that represent part of the meaning of a linguistic object while leaving other parts unspecified. The second topic discussed is semantic annotation. Annotated corpora have become an indispensable resource both for linguists and for developers of language and speech technology, especially when used in combination with machine learning methods. The annotation in corpora has only marginally addressed semantic information, however, since semantic annotation methodologies are still in their infancy. This book discusses the development and application of such methodologies.
As online information grows dramatically, search engines such as Google are playing a more and more important role in our lives. Critical to all search engines is the problem of designing an effective retrieval model that can rank documents accurately for a given query. This has been a central research problem in information retrieval for several decades. In the past ten years, a new generation of retrieval models, often referred to as statistical language models, has been successfully applied to solve many different information retrieval problems. Compared with the traditional models such as the vector space model, these new models have a more sound statistical foundation and can leverage statistical estimation to optimize retrieval parameters. They can also be more easily adapted to model non-traditional and complex retrieval problems. Empirically, they tend to achieve comparable or better performance than a traditional model with less effort on parameter tuning. This book systematically reviews the large body of literature on applying statistical language models to information retrieval with an emphasis on the underlying principles, empirically effective language models, and language models developed for non-traditional retrieval tasks. All the relevant literature has been synthesized to make it easy for a reader to digest the research progress achieved so far and see the frontier of research in this area. The book also offers practitioners an informative introduction to a set of practically useful language models that can effectively solve a variety of retrieval problems. No prior knowledge about information retrieval is required, but some basic knowledge about probability and statistics would be useful for fully digesting all the details. 
Table of Contents: Introduction / Overview of Information Retrieval Models / Simple Query Likelihood Retrieval Model / Complex Query Likelihood Model / Probabilistic Distance Retrieval Model / Language Models for Special Retrieval Tasks / Language Models for Latent Topic Analysis / Conclusions
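The query likelihood model listed in the contents above can be illustrated with a short sketch (an illustration written for this summary, not code from the book): documents are ranked by the log-probability of the query under each document's language model, with Dirichlet prior smoothing from the collection model so that unseen query terms do not zero out the score. The function and corpus names here are hypothetical.

```python
# Illustrative sketch of query likelihood retrieval with
# Dirichlet-smoothed document language models.
import math
from collections import Counter

def query_likelihood_score(query, doc, collection, mu=2000):
    """Sum of log p(term | doc) over query terms, where the document
    model is smoothed with the collection model using a Dirichlet
    prior of strength mu."""
    doc_counts = Counter(doc)
    coll_counts = Counter(collection)
    doc_len, coll_len = len(doc), len(collection)
    score = 0.0
    for term in query:
        p_coll = coll_counts[term] / coll_len          # collection probability
        p_doc = (doc_counts[term] + mu * p_coll) / (doc_len + mu)
        if p_doc > 0:                                  # skip terms unseen everywhere
            score += math.log(p_doc)
    return score

# Tiny toy corpus: rank documents for a two-term query.
docs = [["statistical", "language", "models", "for", "retrieval"],
        ["vector", "space", "model", "for", "retrieval"]]
collection = [t for d in docs for t in d]
query = ["language", "models"]
ranked = sorted(docs,
                key=lambda d: query_likelihood_score(query, d, collection),
                reverse=True)
```

The smoothing parameter mu is the main retrieval parameter the book's statistical estimation perspective is concerned with; with mu > 0 a document lacking a query term still gets a small, collection-based probability rather than a score of minus infinity.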
In its nine chapters, this book provides an overview of the state-of-the-art and best practice in several sub-fields of evaluation of text and speech systems and components. The evaluation aspects covered include speech and speaker recognition, speech synthesis, animated talking agents, part-of-speech tagging, parsing, and natural language software like machine translation, information retrieval, question answering, spoken dialogue systems, data resources, and annotation schemes. With its broad coverage and original contributions this book is unique in the field of evaluation of speech and language technology. This book is of particular relevance to advanced undergraduate students, PhD students, academic and industrial researchers, and practitioners.
CICLing 2008 (www.CICLing.org) was the 9th Annual Conference on Intelligent Text Processing and Computational Linguistics. The CICLing conferences are intended to provide a wide-scope forum for the discussion of both the art and craft of natural language processing research and the best practices in its applications. This volume contains the papers accepted for oral presentation at the conference, as well as several of the best papers accepted for poster presentation. Other papers accepted for poster presentation were published in special issues of other journals (see the information on the website). Since 2001 the CICLing proceedings have been published in Springer's Lecture Notes in Computer Science series, as volumes 2004, 2276, 2588, 2945, 3406, 3878, and 4394. The book consists of 12 sections, representative of the main tasks and applications of Natural Language Processing: Language resources; Morphology and syntax; Semantics and discourse; Word sense disambiguation and named entity recognition; Anaphora and co-reference; Machine translation and parallel corpora; Natural language generation; Speech recognition; Information retrieval and question answering; Text classification; Text summarization; Spell checking and authoring aid. A total of 204 papers by 438 authors from 39 countries were submitted for evaluation (see Tables 1 and 2). Each submission was reviewed by at least two independent Program Committee members. This volume contains revised versions of 52 papers by 129 authors from 24 countries selected for inclusion in the conference program (the acceptance rate was 25.5%).
This book constitutes the thoroughly refereed post-proceedings of the Joint Chinese-German Workshop on Cognitive Systems held in Shanghai in March 2005. The 13 revised papers presented were carefully reviewed and selected from numerous submissions for inclusion in the book. The workshop served to present the current state of the art in the new transdiscipline of cognitive systems, which is emerging from computer science, the neurosciences, computational linguistics, neurological networks and the new philosophy of mind. The papers are organized in topical sections on multimodal human-computer interfaces, neuropsychology and neurocomputing, Chinese-German natural language processing and psycholinguistics, as well as information processing and retrieval from the semantic Web for intelligent applications.
In this pioneering book Katarzyna Jaszczolt lays down the
foundations of an original theory of meaning in discourse, reveals
the cognitive foundations of discourse interpretation, and puts
forward a new basis for the analysis of discourse processing. She
provides a step-by-step introduction to the theory and its
application, and explains new terms and formalisms as required. Dr.
Jaszczolt unites the precision of truth-conditional, dynamic
approaches with insights from neo-Gricean pragmatics into the role
of speaker's intentions in communication. She shows that the
compositionality of meaning may be understood as merger
representations combining information from various sources
including word meaning and sentence structure, various kinds of
default interpretations, and conscious pragmatic inference.
This book constitutes the refereed proceedings of the 7th International Conference on Computational Linguistics and Intelligent Text Processing, held in February 2006. The 43 revised full papers and 16 revised short papers presented together with three invited papers were carefully reviewed and selected from 176 submissions. The papers are structured into two parts and organized in topical sections on computational linguistics research.
This reader collects and introduces important work in linguistics, computer science, artificial intelligence, and computational linguistics on the use of linguistic devices in natural languages to situate events in time: whether they are past, present, or future; whether they are real or hypothetical; when an event might have occurred, and how long it could have lasted. In focussing on the treatment and retrieval of time-based information it seeks to lay the foundation for temporally-aware natural language computer processing systems, for example those that process documents on the worldwide web to answer questions or produce summaries. The development of such systems requires the application of technical knowledge from many different disciplines. The book is the first to bring these disciplines together, by means of classic and contemporary papers in four areas: tense, aspect, and event structure; temporal reasoning; the temporal structure of natural language discourse; and temporal annotation. Clear, self-contained editorial introductions to each area provide the necessary technical background for the non-specialist, explaining the underlying connections across disciplines. A wide range of students and professionals in academia and industry will value this book as an introduction and guide to a new and vital technology. The former include researchers, students, and teachers of natural language processing, linguistics, artificial intelligence, computational linguistics, computer science, information retrieval (including the growing speciality of question-answering), library sciences, human-computer interaction, and cognitive science. Those in industry include corporate managers and researchers, software product developers, and engineers in information-intensive companies, such as on-line database and web-service providers.
CICLing 2004 was the 5th Annual Conference on Intelligent Text Processing and Computational Linguistics; see www.CICLing.org. CICLing conferences are intended to provide a balanced view of the cutting-edge developments in both theoretical foundations of computational linguistics and the practice of natural language text processing with its numerous applications. A feature of CICLing conferences is their wide scope that covers nearly all areas of computational linguistics and all aspects of natural language processing applications. These conferences are a forum for dialogue between the specialists working in the two areas. This year we were honored by the presence of our invited speakers Martin Kay of Stanford University, Philip Resnik of the University of Maryland, Ricardo Baeza-Yates of the University of Chile, and Nick Campbell of the ATR Spoken Language Translation Research Laboratories. They delivered excellent extended lectures and organized vivid discussions. Of 129 submissions received (74 full papers and 44 short papers), after careful international reviewing 74 papers were selected for presentation (40 full papers and 35 short papers), written by 176 authors from 21 countries: Korea (37), Spain (34), Japan (22), Mexico (15), China (11), Germany (10), Ireland (10), UK (10), Singapore (6), Canada (3), Czech Rep. (3), France (3), Brazil (2), Sweden (2), Taiwan (2), Turkey (2), USA (2), Chile (1), Romania (1), Thailand (1), and The Netherlands (1); the figures in parentheses stand for the number of authors from the corresponding countries.
The structure and properties of any natural language expression depend on its component sub-expressions - "resources" - and relations among them that are sensitive to basic structural properties of order, grouping, and multiplicity. Resource-sensitivity thus provides a perspective on linguistic structure that is well-defined and universally applicable. The papers in this collection - by J. van Benthem, P. Jacobson, G. Jäger, G-J. Kruijff, G. Morrill, R. Muskens, R. Oehrle, and A. Szabolcsi - examine linguistic resources and resource-sensitivity from a variety of perspectives, including modal aspects of categorial type inference. In particular, the book contains a number of papers treating anaphorically-dependent expressions as functions, whose application to an appropriate argument yields a type and an interpretation directly integratable with the surrounding grammatical structure. To situate this work in a larger setting, the book contains two appendices, including an introductory guide to resource-sensitivity.
This book constitutes the refereed proceedings of the 4th International Conference on Computational Linguistics and Intelligent Text Processing, CICLing 2003, held in Mexico City, Mexico in February 2003. The 67 revised papers presented together with 4 keynote papers were carefully reviewed and selected from 92 submissions. The papers are organized in topical sections on computational linguistics formalisms; semantics and discourse; syntax and POS tagging; parsing techniques; morphology; word sense disambiguation; dictionary, lexicon, and ontology; corpus and language statistics; machine translation and bilingual corpora; text generation; natural language interfaces; speech processing; information retrieval and information extraction; text categorization and clustering; summarization; and spell-checking.
CICLing 2002 was the third annual Conference on Intelligent text processing and Computational Linguistics (hence the name CICLing); see www.CICLing.org. It was intended to provide a balanced view of the cutting-edge developments in both theoretical foundations of computational linguistics and practice of natural language text processing with its numerous applications. A feature of CICLing conferences is their wide scope that covers nearly all areas of computational linguistics and all aspects of natural language processing applications. The conference is a forum for dialogue between the specialists working in these two areas. This year we were honored by the presence of our invited speakers Nicoletta Calzolari (Inst. for Computational Linguistics, Italy), Ruslan Mitkov (U. of Wolverhampton, UK), Ivan Sag (Stanford U., USA), Yorick Wilks (U. of Sheffield), and Antonio Zampolli (Inst. for Computational Linguistics, Italy). They delivered excellent extended lectures and organized vivid discussions. Of 67 submissions received, after careful reviewing 48 were selected for presentation; of them, 35 as full papers and 13 as short papers; by 98 authors from 19 countries: Spain (18 authors), Mexico (13), Japan, UK (8 each), Israel (7), Germany, Italy, USA (6 each), Switzerland (5), Taiwan (4), Ireland (3), Australia, China, Czech Rep., France, Russia (2 each), Bulgaria, Poland, Romania (1 each).
This book constitutes the refereed proceedings of the 4th
International Conference on Text, Speech and Dialogue, TSD 2001,
held in Zelezna Ruda, Czech Republic in September 2001.
This book constitutes the refereed proceedings of the scientific
track of the 7th Congress of the Italian Association for Artificial
Intelligence, AI*IA 2001, held in Bari, Italy, in September
2001.
CICLing 2001 is the second annual Conference on Intelligent text processing and Computational Linguistics (hence the name CICLing), see www.CICLing.org. It is intended to provide a balanced view of the cutting edge developments in both theoretical foundations of computational linguistics and practice of natural language text processing with its numerous applications. A feature of the CICLing conferences is their wide scope that covers nearly all areas of computational linguistics and all aspects of natural language processing applications. The conference is a forum for dialogue between the specialists working in these two areas. This year our invited speakers were Graeme Hirst (U. Toronto, Canada), Sylvain Kahane (U. Paris 7, France), and Ruslan Mitkov (U. Wolverhampton, UK). They delivered excellent extended lectures and organized vivid discussions. A total of 72 submissions were received, all but very few of surprisingly high quality. After careful reviewing, the Program Committee selected for presentation 53 of them, 41 as full papers and 12 as short papers, by 98 authors from 19 countries: Spain (19 authors), Japan (15), USA (12), France, Mexico (9 each), Sweden (6), Canada, China, Germany, Italy, Malaysia, Russia, United Arab Emirates (3 each), Argentina (2), Bulgaria, The Netherlands, Ukraine, UK, and Uruguay (1 each).
This innovative book develops a formal computational theory of writing systems and relates it to psycholinguistic results. Drawing on case studies of writing systems around the world, it offers specific proposals about the linguistic objects that are represented by orthographic elements and the formal constraints that hold of the mapping relation between them. Based on the insights gained, it posits a new taxonomy of writing systems. The book will be of interest to students and researchers in theoretical and computational linguistics, the psycholinguistics of reading and writing, and speech technology.
This volume collects landmark research in a burgeoning field of visual analytics for linguistics, called LingVis. Combining linguistic data and linguistically oriented research questions with techniques and methodologies developed in the computer science fields of visual analytics and information visualization, LingVis is motivated by the growing need within linguistic research for dealing with large amounts of complex, multidimensional data sets. An innovative exploration into the future of LingVis in the digital age, this foundational book both provides a representation of the current state of the field and communicates its new possibilities for addressing complex linguistic questions across the larger linguistic community.
Information extraction (IE) is a new technology enabling relevant content to be extracted from textual information available electronically. IE essentially builds on natural language processing and computational linguistics, but it is also closely related to the well established area of information retrieval and involves learning. In concert with other promising intelligent information processing technologies like data mining, intelligent data analysis, text summarization, and information agents, IE plays a crucial role in dealing with the vast amounts of information accessible electronically, for example from the Internet. The book is based on the Second International School on Information Extraction, SCIE-99, held in Frascati near Rome, Italy in June/July 1999.
From tech giants to plucky startups, the world is full of companies boasting that they are on their way to replacing human interpreters, but are they right? Interpreters vs Machines offers a solid introduction to recent theory and research on human and machine interpreting, and then invites the reader to explore the future of interpreting. With a foreword by Dr Henry Liu, the 13th International Federation of Translators (FIT) President, and written by consultant interpreter and researcher Jonathan Downie, this book offers a unique combination of research and practical insight into the field of interpreting. Written in an innovative, accessible style with humorous touches and real-life case studies, this book is structured around the metaphor of playing and winning a computer game. It takes interpreters of all experience levels on a journey to better understand their own work, learn how computers attempt to interpret and explore possible futures for human interpreters. With five levels and split into 14 chapters, Interpreters vs Machines is key reading for all professional interpreters as well as students and researchers of Interpreting and Translation Studies, and those with an interest in machine interpreting.
The idea that the expression of radical beliefs is a predictor of future acts of political violence has been a central tenet of counter-extremism over the last two decades. Not only has this imposed a duty upon doctors, lecturers and teachers to inform on the radical beliefs of their patients and students but, as this book argues, it is also a fundamentally flawed concept. Informed by his own experience with the UK's Prevent programme while teaching in a Muslim community, Rob Faure Walker explores the linguistic emergence of 'extremism' in political discourse and the potentially damaging generative effect of this language. Taking a new approach which combines critical discourse analysis with critical realism, this book shows how the fear of being labelled as an 'extremist' has resulted in counter-terrorism strategies which actually undermine moderating mechanisms in a democracy. Analysing the generative mechanisms by which the language of counter-extremism might actually promote violence, Faure Walker explains how understanding the potentially oppressive properties of language can help us transcend them. The result is an immanent critique of the most pernicious aspects of the global War on Terror, those that are embedded in our everyday language and political discourse. Drawing on the author's own successful lobbying activities against counter-extremism, this book presents a model for how discourse analysis and critical realism can and should engage with the political and how this will affect meaningful change.
The goal of this book is to integrate the research being carried out in the field of lexical semantics in linguistics with the work on knowledge representation and lexicon design in computational linguistics. Rarely do these two camps meet and discuss the demands and concerns of each other's fields. This book is therefore interesting in that it provides a stimulating and unique discussion between the computational perspective of lexical meaning and the concerns of the linguist for the semantic description of lexical items in the context of syntactic descriptions. The book grew out of the papers presented at a workshop held at Brandeis University in April 1988, funded by the American Association for Artificial Intelligence. The entire workshop, as well as the discussion periods accompanying each talk, was recorded. Once complete copies of each paper were available, they were distributed to participants, who were asked to provide written comments on the texts for review purposes. There is currently a growing interest in the content of lexical entries from a theoretical perspective as well as a growing need to understand the organization of the lexicon from a computational view. This volume attempts to define the directions that need to be taken in order to achieve the goal of a coherent theory of lexical organization.
Deep learning is revolutionizing how machine translation systems are built today. This book introduces the challenge of machine translation and evaluation - including historical, linguistic, and applied context - then develops the core deep learning methods used for natural language applications. Code examples in Python give readers a hands-on blueprint for understanding and implementing their own machine translation systems. The book also provides extensive coverage of machine learning tricks, issues involved in handling various forms of data, model enhancements, and current challenges and methods for analysis and visualization. Summaries of the current research in the field make this a state-of-the-art textbook for undergraduate and graduate classes, as well as an essential reference for researchers and developers interested in other applications of neural methods in the broader field of human language processing.
This case study-based textbook in multivariate analysis for advanced students in the humanities emphasizes descriptive, exploratory analyses of various types of datasets from a wide range of sub-disciplines, promoting the use of multivariate analysis and illustrating its wide applicability. Fields featured include, but are not limited to, historical agriculture, arts (music and painting), theology, and stylometrics (authorship issues). Most analyses are based on existing data, earlier analysed in published peer-reviewed papers. Four preliminary methodological and statistical chapters provide general technical background to the case studies. The multivariate statistical methods presented and illustrated include data inspection, several varieties of principal component analysis, correspondence analysis, multidimensional scaling, cluster analysis, regression analysis, discriminant analysis, and three-mode analysis. The bulk of the text is taken up by 14 case studies that lean heavily on graphical representations of statistical information such as biplots, using descriptive statistical techniques to support substantive conclusions. Each study features a description of the substantive background to the data, followed by discussion of appropriate multivariate techniques, and detailed results interpreted through graphical illustrations. Each study is concluded with a conceptual summary. Datasets in SPSS are included online.