This collection of papers takes linguists to the leading edge of techniques in generative lexicon (GL) theory, the linguistic composition methodology that arose from the imperative to provide a compositional semantics for the contextual modifications in meaning that emerge in real linguistic usage. Today's growing shift towards distributed compositional analyses evinces the applicability of GL theory, and the contributions to this volume, presented at three international workshops (GL-2003, GL-2005 and GL-2007), address the relationship between compositionality in language and the mechanisms of selection in grammar that are necessary to maintain this property. The core unresolved issues in compositionality, relating to the interpretation of context and the mechanisms of selection, are treated from varying perspectives within GL theory, including its basic theoretical mechanisms and its analytical viewpoint on linguistic phenomena.
This book is a collection of papers by leading researchers in computational semantics. It presents a state-of-the-art overview of recent and current research, including descriptions of new methods for constructing and improving resources for semantic computation, such as WordNet, VerbNet, and semantically annotated corpora. It also presents new statistical methods in semantic computation, such as the application of distributional semantics to the compositional calculation of sentence meanings. Computing the meaning of sentences, texts, and spoken or texted dialogue is the ultimate challenge in natural language processing, and the key to a wide range of exciting applications. The breadth and depth of coverage make this book suitable as a reference and overview of the state of the field for researchers in Computational Linguistics, Semantics, Computer Science, Cognitive Science, and Artificial Intelligence.
This book brings together scientists, researchers, practitioners, and students from academia and industry to present recent and ongoing research activities concerning the latest advances, techniques, and applications of natural language processing systems, and to promote the exchange of new ideas and lessons learned. Taken together, the chapters of this book provide a collection of high-quality research works that address broad challenges in both theoretical and applied aspects of intelligent natural language processing. The book presents the state of the art in research on natural language processing, computational linguistics, applied Arabic linguistics and related areas. New trends in natural language processing systems are rapidly emerging and finding application in various domains including education, travel and tourism, and healthcare, among others. Many issues encountered during the development of these applications can be resolved by incorporating language technology solutions. The topics covered by the book include: Character and Speech Recognition; Morphological, Syntactic, and Semantic Processing; Information Extraction; Information Retrieval and Question Answering; Text Classification and Text Mining; Text Summarization; Sentiment Analysis; Machine Translation; Building and Evaluating Linguistic Resources; and Intelligent Language Tutoring Systems.
Collaboratively Constructed Language Resources (CCLRs) such as Wikipedia, Wiktionary, Linked Open Data, and various resources developed using crowdsourcing techniques such as Games with a Purpose and Mechanical Turk have substantially contributed to research in natural language processing (NLP). Various NLP tasks utilize such resources to substitute for or supplement conventional lexical semantic resources and linguistically annotated corpora. These resources also provide an extensive body of texts from which valuable knowledge is mined. There is an increasing number of community efforts to link and maintain multiple linguistic resources.
"Corpora and Language Education" critically examines key concepts and issues in corpus linguistics, with a particular focus on the expanding interdisciplinary nature of the field and the role that written and spoken corpora now play in the fields of professional communication, teacher education, translation studies, lexicography, literature, critical discourse analysis and forensic linguistics. The book also presents a series of corpus-based case studies illustrating central themes and best practices in the field.
Recent advances in the fields of knowledge representation, reasoning and human-computer interaction have paved the way for a novel approach to treating and handling context. The field of research presented in this book addresses the problem of contextual computing in artificial intelligence based on the state of the art in knowledge representation and human-computer interaction. The author puts forward a knowledge-based approach for employing high-level context in order to solve some persistent and challenging problems in the chosen showcase domain of natural language understanding. Specifically, the problems addressed concern the handling of noise due to speech recognition errors, semantic ambiguities, and the notorious problem of underspecification. The book then examines the individual contributions of contextual computing for different types of context: contextual information stemming from the domain at hand, prior discourse, and the specific user and real-world situation is considered and integrated in a formal model that is applied and evaluated using different multimodal mobile dialog systems. This book is intended to meet the needs of readers from at least three fields - AI and computer science; computational linguistics; and natural language processing - as well as some computationally oriented linguists, making it a valuable resource for scientists, researchers, lecturers, language processing practitioners and professionals, as well as postgraduates and some undergraduates in the aforementioned fields. "The book addresses a problem of great and increasing technical and practical importance - the role of context in natural language processing (NLP). It considers the role of context in three important tasks: Automatic Speech Recognition, Semantic Interpretation, and Pragmatic Interpretation. Overall, the book represents a novel and insightful investigation into the potential of contextual information processing in NLP."
Jerome A Feldman, Professor of Electrical Engineering and Computer Science, UC Berkeley, USA http://dm.tzi.de/research/contextual-computing/
This book is about the role of knowledge in information systems. Knowledge is usually articulated and exchanged through human language(s). In this sense, language can be seen as the most natural vehicle to convey our concepts, whose meanings are usually intermingled, grouped and organized according to shared criteria, from simple perceptions ("every tree has a stem") and common sense ("unsupported objects fall") to complex social conventions ("a tax is a fee charged by a government on a product, income, or activity"). But what is natural for a human being turns out to be extremely difficult for machines: machines need to be instilled with knowledge and suitably equipped with logical and statistical algorithms to reason over it. Computers can't represent the external world and communicate their representations as effectively as humans do: ontologies and NLP have been invented to face this problem. In particular, integrating ontologies with (possibly multilingual) computational lexical resources is an essential requirement to make human meanings understandable by machines. This book explores the advancements in this integration, from the most recent steps in building the necessary infrastructure, i.e. the Semantic Web, to the different knowledge contents that can be analyzed, encoded and transferred (multimedia, emotions, events, etc.) through it. The work aims at presenting the progress in the field of integrating ontologies and lexicons: together, they constitute the essential technology for adequately representing, eliciting and exchanging knowledge contents in information systems, web services, text processing and several other domains of application.
This book focuses on information literacy for the younger generation of learners and library readers. It is divided into four sections: 1. Information Literacy for Life; 2. Searching Strategies, Disciplines and Special Topics; 3. Information Literacy Tools for Evaluating and Utilizing Resources; 4. Assessment of Learning Outcomes. Written by librarians with wide experience in research and services, and a strong academic background in disciplines such as the humanities, social sciences, information technology, and library science, this valuable reference resource combines both theory and practice. In today's ever-changing era of information, it offers students of library and information studies insights into information literacy as well as learning tips they can use for life.
The contributions in this volume focus on the Bayesian interpretation of natural languages, which is widely used in areas of artificial intelligence, cognitive science, and computational linguistics. This is the first volume to take up topics in Bayesian Natural Language Interpretation and make proposals based on information theory, probability theory, and related fields. The methodologies offered here extend to semantic and pragmatic analyses in computational natural language interpretation. Bayesian approaches to natural language semantics and pragmatics are based on methods from signal processing and the causal Bayesian models pioneered especially by Pearl. In signal processing, the Bayesian method finds the most probable interpretation by finding the one that maximizes the product of the prior probability and the likelihood of the interpretation. It thus stresses the importance of a production model for interpretation, as in Grice's contributions to pragmatics or in interpretation by abduction.
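The decision rule sketched in the blurb above (pick the interpretation maximizing prior times likelihood) fits in a few lines of Python. The ambiguous word, the candidate readings, and all probabilities below are invented toy values for illustration only, not drawn from the book:

```python
# Bayesian choice among candidate readings: maximize prior * likelihood.
# All readings and probabilities here are invented toy values.

def best_interpretation(candidates):
    """candidates maps interpretation -> (prior, likelihood of the utterance)."""
    return max(candidates, key=lambda i: candidates[i][0] * candidates[i][1])

# Two hypothetical readings of "bank" in a context mentioning a river:
candidates = {
    "financial-institution": (0.6, 0.2),  # frequent reading, poor contextual fit
    "river-bank":            (0.4, 0.5),  # rarer reading, strong contextual fit
}
chosen = best_interpretation(candidates)  # 0.4*0.5 = 0.20 beats 0.6*0.2 = 0.12
```

Note how the less frequent reading wins because its likelihood under the context outweighs its lower prior, which is exactly the trade-off the Bayesian formulation makes explicit.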
This book focuses on the next generation optical networks as well as mobile communication technologies. The reader will find chapters on Cognitive Optical Network, 5G Cognitive Wireless, LTE, Data Analysis and Natural Language Processing. It also presents a comprehensive view of the enhancements and requirements foreseen for Machine Type Communication. Moreover, some data analysis techniques and Brazilian Portuguese natural language processing technologies are also described here.
This book provides a gradual introduction to the naming game, starting from the minimal naming game, where the agents have infinite memories (Chapter 2), before moving on to various new and advanced settings: the naming game with agents possessing finite-sized memories (Chapter 3); the naming game with group discussions (Chapter 4); the naming game with learning errors in communications (Chapter 5); the naming game on multi-community networks (Chapter 6); the naming game with multiple words or sentences (Chapter 7); and the naming game with multiple languages (Chapter 8). Presenting the authors' own research findings and developments, the book provides a solid foundation for future advances. This self-study resource is intended for researchers, practitioners, graduate and undergraduate students in the fields of computer science, network science, linguistics, data engineering, statistical physics, social science and applied mathematics.
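A minimal naming game of the kind the book starts from can be sketched as below: agents with unbounded memories repeatedly pair up, and a successful exchange collapses both participants' inventories to the agreed name. The function and parameter names, and the consensus check, are this sketch's own choices, not the authors' code:

```python
import random

def naming_game(n_agents=20, max_rounds=100_000, seed=0):
    """Minimal naming game: agents with unbounded memories negotiate a
    shared name for one object. Returns the round at which global
    consensus is reached (or max_rounds if it never is)."""
    rng = random.Random(seed)
    memories = [set() for _ in range(n_agents)]
    next_name = 0
    for t in range(1, max_rounds + 1):
        speaker, hearer = rng.sample(range(n_agents), 2)
        if not memories[speaker]:               # speaker invents a new name
            memories[speaker].add(next_name)
            next_name += 1
        name = rng.choice(sorted(memories[speaker]))
        if name in memories[hearer]:            # success: both collapse to it
            memories[speaker] = {name}
            memories[hearer] = {name}
        else:                                   # failure: hearer learns it
            memories[hearer].add(name)
        if all(len(m) == 1 for m in memories) and len(set.union(*memories)) == 1:
            return t
    return max_rounds

rounds = naming_game()
```

The book's later chapters vary exactly the pieces this sketch hard-codes: the memory size, the pairing scheme, the error model, and the network over which agents meet.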
The accurate determination of the speech spectrum, particularly for short frames, is commonly pursued in diverse areas including speech processing, recognition, and acoustic phonetics. With this book the author makes the subject of spectrum analysis understandable to a wide audience, including those with a solid background in general signal processing and those without such background. In keeping with these goals, this is not a book that replaces or attempts to cover the material found in a general signal processing textbook. Some essential signal processing concepts are presented in the first chapter, but even there the concepts are presented in a generally understandable fashion as far as is possible. Throughout the book, the focus is on applications to speech analysis; mathematical theory is provided for completeness, but these developments are set off in boxes for the benefit of those readers with sufficient background. Other readers may proceed through the main text, where the key results and applications will be presented in general heuristic terms, and illustrated with software routines and practical "show-and-tell" discussions of the results. At some points, the book refers to and uses the implementations in the Praat speech analysis software package, which has the advantages that it is used by many scientists around the world, and it is free and open source software. At other points, special software routines have been developed and made available to complement the book, and these are provided in the Matlab programming language. If the reader has the basic Matlab package, he/she will be able to immediately implement the programs in that platform, with no extra "toolboxes" required.
There is increasing interaction among communities with multiple languages, thus we need services that can effectively support multilingual communication. The Language Grid is an initiative to build an infrastructure that allows end users to create composite language services for intercultural collaboration. The aim is to support communities to create customized multilingual environments by using language services to overcome local language barriers. The stakeholders of the Language Grid are the language resource providers, the language service users, and the language grid operators who coordinate the former. This book includes 18 chapters in six parts that summarize various research results and associated development activities on the Language Grid. The chapters in Part I describe the framework of the Language Grid, i.e., service-oriented collective intelligence, used to bridge providers, users and operators. Two kinds of software are introduced, the service grid server software and the Language Grid Toolbox, and code for both is available via open source licenses. Part II describes technologies for service workflows that compose atomic language services. Part III reports on research work and activities relating to sharing and using language services. Part IV describes various applications of language services as applicable to intercultural collaboration. Part V contains reports on applying the Language Grid for translation activities, including localization of industrial documents and Wikipedia articles. Finally, Part VI illustrates how the Language Grid can be connected to other service grids, such as DFKI's Heart of Gold and smart classroom services at Tsinghua University in Beijing. The book will be valuable for researchers in artificial intelligence, natural language processing, services computing and human-computer interaction, particularly those who are interested in bridging technologies and user communities.
Universal codes efficiently compress sequences generated by stationary and ergodic sources with unknown statistics, and they were originally designed for lossless data compression. In the meantime, it was realized that they can be used for solving important problems of prediction and statistical analysis of time series, and this book describes recent results in this area. The first chapter introduces and describes the application of universal codes to prediction and the statistical analysis of time series; the second chapter describes applications of selected statistical methods to cryptography, including attacks on block ciphers; and the third chapter describes a homogeneity test used to determine authorship of literary texts. The book will be useful for researchers and advanced students in information theory, mathematical statistics, time-series analysis, and cryptography. It is assumed that the reader has some grounding in statistics and in information theory.
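A hedged illustration of the compression-based style of analysis described above: the normalized compression distance uses a practical compressor (here zlib) as a stand-in for a universal code, so that sequences sharing structure compress better together than apart. This is a related, simpler technique than the book's homogeneity test, and the toy byte strings are invented:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance, with zlib standing in for a
    universal compressor: smaller values mean more similar sequences."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Toy "texts": a and b share most of their phrasing; c does not.
a = b"the cat sat on the mat and the cat ate the fish " * 20
b = b"the cat sat on the mat and the dog ate the bone " * 20
c = b"colourless green ideas sleep furiously all night " * 20
d_ab, d_ac = ncd(a, b), ncd(a, c)
```

The same intuition underlies compression-based authorship attribution: texts by one author compress each other well, so their pairwise distance is small relative to texts by different authors.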
This is the first volume that brings together research and practice from academic and industry settings and a combination of human and machine translation evaluation. Its comprehensive collection of papers by leading experts in human and machine translation quality and evaluation who situate current developments and chart future trends fills a clear gap in the literature. This is critical to the successful integration of translation technologies in the industry today, where the lines between human and machine are becoming increasingly blurred by technology: this affects the whole translation landscape, from students and trainers to project managers and professionals, including in-house and freelance translators, as well as, of course, translation scholars and researchers. The editors have broad experience in translation quality evaluation research, including investigations into professional practice with qualitative and quantitative studies, and the contributors are leading experts in their respective fields, providing a unique set of complementary perspectives on human and machine translation quality and evaluation, combining theoretical and applied approaches.
This book is written for both linguists and computer scientists working in the field of artificial intelligence, as well as for anyone interested in intelligent text processing. A lexical function is a concept that formalizes semantic and syntactic relations between lexical units. A collocational relation is a type of institutionalized lexical relation that holds between the base and its partner in a collocation. Knowledge of collocation is important for natural language processing because collocation comprises the restrictions on how words can be used together. The book shows how collocations can be annotated with lexical functions in a computer-readable dictionary, allowing their precise semantic analysis in texts and their effective use in natural language applications including parsers, high-quality machine translation, periphrasis systems and computer-aided learning of lexica. The book also shows how to extract collocations from corpora and annotate them with lexical functions automatically. To train the algorithms, the authors created a dictionary of lexical functions containing more than 900 disambiguated and annotated Spanish examples, which is part of this book. The results obtained show that automatic detection of lexical functions is feasible with machine learning.
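As a rough sketch of the corpus side of such a pipeline, adjacent word pairs can be scored with pointwise mutual information, a standard first pass for finding collocation candidates. This is an illustration only, not the book's lexical-function annotation method, and the tiny corpus below is invented:

```python
import math
from collections import Counter

def pmi_bigrams(tokens, min_count=2):
    """Score adjacent word pairs by pointwise mutual information,
    log2( p(w1,w2) / (p(w1) * p(w2)) ): high-PMI pairs co-occur far
    more often than their individual frequencies would predict."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    scores = {}
    for (w1, w2), count in bigrams.items():
        if count >= min_count:                    # filter rare, noisy pairs
            p_pair = count / (n - 1)
            p_indep = (unigrams[w1] / n) * (unigrams[w2] / n)
            scores[(w1, w2)] = math.log2(p_pair / p_indep)
    return scores

# A tiny invented corpus; real extraction would use millions of tokens.
corpus = ("pay attention to strong tea and pay attention to heavy rain "
          "strong tea is good and heavy rain fell").split()
scores = pmi_bigrams(corpus)
```

Pairs such as ("strong", "tea") surface with positive PMI; annotating such candidates with lexical functions (e.g. an intensifier relation) is the step the book's dictionary supports.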
The volume "Genres on the Web" has been designed for a wide audience, from the expert to the novice. It is a required book for scholars, researchers and students who want to become acquainted with the latest theoretical, empirical and computational advances in the expanding field of web genre research. The study of web genre is an overarching and interdisciplinary novel area of research that spans from corpus linguistics, computational linguistics, NLP, and text-technology, to web mining, webometrics, social network analysis and information studies. This book gives readers a thorough grounding in the latest research on web genres and emerging document types. The book covers a wide range of web-genre focused subjects, such
as: One of the driving forces behind genre research is the idea of a genre-sensitive information system, which incorporates genre cues complementing the current keyword-based search and retrieval applications."
area and in applications to linguistics, formal epistemology, and the study of norms. The second contains papers on non-classical and many-valued logics, with an eye on applications in computer science and, through it, to engineering. The third concerns the logic of belief management, which is likewise closely connected with recent work in computer science but also links directly with epistemology, the philosophy of science, the study of legal and other normative systems, and cognitive science. The grouping is of course rough, for there are contributions to the volume that lie astride a boundary; at least one of them is relevant, from a very abstract perspective, to all three areas. We say a few words about each of the individual chapters, to relate them to each other and the general outlook of the volume. Modal Logics: The first bundle of papers in this volume contains contributions to modal logic. Three of them examine general problems that arise for all kinds of modal logics. The first paper is essentially semantical in its approach, the second proof-theoretic, the third semantical again: "Commutativity of quantifiers in varying-domain Kripke models", by R. Goldblatt and I. Hodkinson, investigates the possibility of commutation (i.e. reversing the order) for quantifiers in first-order modal logics interpreted over relational models with varying domains. The authors study a possible-worlds style structural model theory that does not validate commutation, but satisfies all the axioms originally presented by Kripke for his familiar semantics for first-order modal logic.
This book provides a wide variety of algorithms and models to integrate linguistic knowledge into Statistical Machine Translation (SMT). It helps advance conventional SMT to linguistically motivated SMT by enhancing the following three essential components: translation, reordering and bracketing models. It also serves the purpose of promoting the in-depth study of the impacts of linguistic knowledge on machine translation. Finally it provides a systematic introduction of Bracketing Transduction Grammar (BTG) based SMT, one of the state-of-the-art SMT formalisms, as well as a case study of linguistically motivated SMT on a BTG-based platform.
Mathematical Linguistics introduces the mathematical foundations of linguistics to computer scientists, engineers, and mathematicians interested in natural language processing. The book presents linguistics as a cumulative body of knowledge from the ground up: no prior knowledge of linguistics is assumed. Previous textbooks in this area concentrate on syntax and semantics - this comprehensive volume covers an extremely rich array of topics also including phonology and morphology, probabilistic approaches, complexity, learnability, and the analysis of speech and handwriting. As the first textbook of its kind, this book is useful for those in information science (information retrieval and extraction, search engines) and in natural language technologies (speech recognition, optical character recognition, HCI). Exercises suitable for the advanced reader are included, as well as suggestions for further reading and an extensive bibliography.
This book applies linguistic analysis to the poetry of Emeritus Professor Edwin Thumboo, a Singaporean poet and leading figure in Commonwealth literature. The work explores how the poet combines grammar and metaphor to create meaning, making the reader aware of the linguistic resources developed by Thumboo as the basis for his unique technique. The author approaches the poems from a functional linguistic perspective, investigating the multiple layers of meaning and metaphor that go into producing these highly textured, grammatically intricate verbal works of art. The approach is based on the Systemic Functional Theory, which aids the study of how the poet uses language (grammar) to craft his text in a playful way that reflects a love of the language. The multilingual and multicultural experiences of the poet are considered to have contributed to his uniquely creative use of language. This work demonstrates how the Systemic Functional Theory, with its emphasis on exploring the semogenic (meaning-making) power of language, provides the perspective we need to better understand poets' works as intentional acts of meaning. Readers will discover how the works of Edwin Thumboo illustrate well a point made by Barthes, who noted that "Bits of code, formulae, rhythmic models, fragments of social languages, etc. pass into the text and are redistributed within it, for there is always language before and around the text." With a focus on meaning, this functional analysis of poetry offers an insightful look at the linguistic basis of Edwin Thumboo's poetic technique. The work will appeal to scholars with an interest in linguistic analysis and poetry from the Commonwealth and new literature, and it can also be used to support courses on literary stylistics or text linguistics.
Research in Natural Language Processing (NLP) has rapidly advanced in recent years, resulting in exciting algorithms for sophisticated processing of text and speech in various languages. Much of this work focuses on English; in this book we address another group of interesting and challenging languages for NLP research: the Semitic languages. The Semitic group of languages includes Arabic (206 million native speakers), Amharic (27 million), Hebrew (7 million), Tigrinya (6.7 million), Syriac (1 million) and Maltese (419 thousand). Semitic languages exhibit unique morphological processes, challenging syntactic constructions and various other phenomena that are less prevalent in other natural languages. These challenges call for unique solutions, many of which are described in this book. The 13 chapters presented in this book bring together leading scientists from several universities and research institutes worldwide. While this book devotes some attention to cutting-edge algorithms and techniques, its primary purpose is a thorough explication of best practices in the field. Furthermore, every chapter describes how the techniques discussed apply to Semitic languages. The book covers both statistical approaches to NLP, which are dominant across various applications nowadays, and the more traditional rule-based approaches, which have proven useful for several other application domains. We hope that this book will provide a "one-stop shop" for all the requisite background and practical advice when building NLP applications for Semitic languages.
This book presents a comprehensive overview of semi-supervised approaches to dependency parsing. Having become increasingly popular in recent years, one of the main reasons for their success is that they can make use of large unlabeled data together with relatively small labeled data and have shown their advantages in the context of dependency parsing for many languages. Various semi-supervised dependency parsing approaches have been proposed in recent works which utilize different types of information gleaned from unlabeled data. The book offers readers a comprehensive introduction to these approaches, making it ideally suited as a textbook for advanced undergraduate and graduate students and researchers in the fields of syntactic parsing and natural language processing.
This book provides an in-depth description of the framework of inductive dependency parsing, a methodology for robust and efficient syntactic analysis of unrestricted natural language text. This methodology is based on two essential components: dependency-based syntactic representations and a data-driven approach to syntactic parsing. More precisely, it is based on a deterministic parsing algorithm in combination with inductive machine learning to predict the next parser action. The book includes a theoretical analysis of all central models and algorithms, as well as a thorough empirical evaluation of memory-based dependency parsing, using data from Swedish and English. Offering the reader a one-stop reference to dependency-based parsing of natural language, it is intended for researchers and system developers in the language technology field, and is also suited for graduate or advanced undergraduate education.
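The pairing of a deterministic parsing algorithm with a predictor for the next action can be sketched with an arc-standard transition system, a standard formulation of shift-reduce dependency parsing. Here a static oracle computed from gold head indices stands in for the inductively learned classifier the framework actually uses, and the example sentence and indices are illustrative:

```python
# A toy arc-standard shift-reduce dependency parser. In inductive dependency
# parsing the next action is predicted by a trained classifier; here a
# static oracle derived from gold head indices plays that role.

def arc_standard(n_words, oracle):
    """Parse tokens 1..n_words with actions SHIFT / LEFT / RIGHT.
    Returns a dict mapping each token index to its head (0 = root)."""
    stack, buffer = [0], list(range(1, n_words + 1))  # 0 is the artificial root
    heads = {}
    while buffer or len(stack) > 1:
        action = oracle(stack, buffer)
        if action == "SHIFT":
            stack.append(buffer.pop(0))
        elif action == "LEFT":          # second-from-top takes top as its head
            heads[stack[-2]] = stack[-1]
            stack.pop(-2)
        else:                           # RIGHT: top takes second-from-top as head
            heads[stack[-1]] = stack[-2]
            stack.pop()
    return heads

def static_oracle(gold):
    """Standard static oracle for arc-standard, given gold heads: attach
    right dependents only once all their own dependents are attached."""
    def choose(stack, buffer):
        if len(stack) >= 2:
            top, below = stack[-1], stack[-2]
            if gold.get(below) == top:
                return "LEFT"
            if gold.get(top) == below and all(gold.get(w) != top for w in buffer):
                return "RIGHT"
        return "SHIFT"
    return choose

# "economic news had little effect" with 1-based indices; 0 is the root.
gold = {1: 2, 2: 3, 3: 0, 4: 5, 5: 3}
heads = arc_standard(5, static_oracle(gold))
assert heads == gold
```

In the inductive setting, `choose` would instead be a classifier trained on features of the stack and buffer, which is what makes the deterministic algorithm applicable to unrestricted text.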
Contemporary data analytics involves extracting insights from data and translating them into action. With its turn towards empirical methods and convergent data sources, cognitive linguistics is a fertile context for data analytics. There are key differences between data analytics and statistical analysis as typically conceived. Though the former requires the latter, it emphasizes the role of domain-specific knowledge. Statistical analysis also tends to be associated with preconceived hypotheses and controlled data. Data analytics, on the other hand, can help explore unstructured datasets and inspire emergent questions. This volume addresses two key aspects in data analytics for cognitive linguistic work. Firstly, it elaborates the bottom-up guiding role of data analytics in the research trajectory, and how it helps to formulate and refine questions. Secondly, it shows how data analytics can suggest concrete courses of research-based action, which is crucial for cognitive linguistics to be truly applied. The papers in this volume impart various data analytic methods and report empirical studies across different areas of research and application. They aim to benefit new and experienced researchers alike.