This book provides a broad and comprehensive overview of the existing technical approaches in the area of silent speech interfaces (SSI), both in theory and in application. Each technique is described in the context of the human speech production process, allowing the reader to clearly understand the principles behind SSI in general and across different methods. Additionally, the book explores the combined use of different data sources, collected from various sensors, in order to tackle the limitations of simpler SSI approaches, addressing current challenges of this field. The book also provides information about existing SSI applications, resources and a simple tutorial on how to build an SSI.
This book offers the first detailed, comprehensible scientific presentation of Confabulation Theory, addressing a pressing scientific question: How does brain information processing, or cognition, work? With only elementary mathematics as a prerequisite, this book will prove accessible to technologists, scientists, and the educated public.
Explores the direct relation of modern CALL (Computer-Assisted Language Learning) to aspects of natural language processing for theoretical and practical applications, and worldwide demand for formal language education and training that focuses on restricted or specialized professional domains. Unique in its broad-based, state-of-the-art coverage of current knowledge and research in the interrelated fields of computer-based learning and teaching and processing of specialized linguistic domains. The articles in this book offer insights on or analyses of the current state and future directions of many recent key concepts regarding the application of computers to natural languages, such as authenticity, personalization, normalization, and evaluation. Other articles present fundamental research on major techniques, strategies and methodologies that are currently the focus of international language research projects, both of a theoretical and an applied nature.
This book presents recent advances by leading researchers in computational modelling of language acquisition. The contributors have been drawn from departments of linguistics, cognitive science, psychology, and computer science. They show what light can be thrown on fundamental problems when powerful computational techniques are combined with real data. The book considers the extent to which linguistic structure is readily available in the environment, the degree to which language learning is inductive or deductive, and the power of different modelling formalisms for different problems and approaches. It will appeal to linguists, psychologists, and cognitive scientists working in language acquisition, and to those involved in computational modelling in linguistic and behavioural science.
This book is an excellent introduction to multiword expressions. It provides a unique, comprehensive and up-to-date overview of this exciting topic in computational linguistics. The first part describes the diversity and richness of multiword expressions, including many examples in several languages. These constructions are not only complex and arbitrary, but also much more frequent than one would guess, making them a real nightmare for natural language processing applications. The second part introduces a new generic framework for automatic acquisition of multiword expressions from texts. Furthermore, it describes the accompanying free software tool, the mwetoolkit, which comes in handy when looking for expressions in texts (regardless of the language). Evaluation is greatly emphasized, underlining the fact that results depend on parameters like corpus size, language, MWE type, etc. The last part contains solid experimental results and evaluates the mwetoolkit, demonstrating its usefulness for computer-assisted lexicography and machine translation. This is the first book to cover the whole pipeline of multiword expression acquisition in a single volume. It addresses the needs of students and researchers in computational and theoretical linguistics, cognitive sciences, artificial intelligence and computer science. Its good balance between computational and linguistic views makes it the perfect starting point for anyone interested in multiword expressions, language and text processing in general.
This book constitutes the refereed proceedings of the 20th and 21st International Conferences on Formal Grammar, held in 2015 and 2016 and collocated with the European Summer School in Logic, Language and Information in August 2015 and August 2016. The 19 revised full papers presented together with 2 invited talks were carefully reviewed and selected from a total of 34 submissions. The papers focus on the following topics: formal and computational phonology, morphology, syntax, semantics and pragmatics; model-theoretic and proof-theoretic methods in linguistics; logical aspects of linguistic structure; constraint-based and resource-sensitive approaches to grammar; learnability of formal grammar; integration of stochastic and symbolic models of grammar; foundational, methodological and architectural issues in grammar and linguistics; and mathematical foundations of statistical approaches to linguistic analysis.
This book serves as a basic reference for those interested in the application of metaheuristics to speech enhancement. The major goal of the book is to explain the basic concepts of optimization methods and their use in heuristic optimization in speech enhancement to scientists, practicing engineers, and academic researchers in speech processing. The authors discuss why it has been a challenging problem for researchers to develop new enhancement algorithms that aid in the quality and intelligibility of degraded speech. They present powerful optimization methods to speech enhancement that can help to solve the noise reduction problems. Readers will be able to understand the fundamentals of speech processing as well as the optimization techniques, how the speech enhancement algorithms are implemented by utilizing optimization methods, and will be given the tools to develop new algorithms. The authors also provide a comprehensive literature survey regarding the topic.
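The blurb above describes using optimization metaheuristics to tune speech-enhancement parameters. As a purely illustrative, hypothetical sketch (not the authors' method), the snippet below uses random search, a minimal stand-in for heavier metaheuristics such as particle swarm or genetic algorithms, to tune a single gain parameter of a toy noise-reduction rule against a clean reference signal:

```python
# Hypothetical sketch: a metaheuristic (here, plain random search) tunes
# one parameter of a toy "enhancement" rule. Real speech enhancement would
# operate on spectral magnitudes and use perceptual quality objectives.
import math
import random

random.seed(0)
n = 256
clean = [math.sin(0.2 * i) for i in range(n)]          # toy clean signal
noisy = [c + random.gauss(0.0, 0.4) for c in clean]    # additive noise

def enhance(sig, gain):
    # Toy enhancement: scale the noisy signal by a single gain factor.
    return [gain * x for x in sig]

def objective(gain):
    # Mean squared error against the clean reference (lower is better).
    enhanced = enhance(noisy, gain)
    return sum((e - c) ** 2 for e, c in zip(enhanced, clean)) / n

# Random search: sample candidate gains, keep the best one found.
best_gain, best_err = 1.0, objective(1.0)
for _ in range(200):
    g = random.uniform(0.0, 1.5)
    err = objective(g)
    if err < best_err:
        best_gain, best_err = g, err

print(best_err <= objective(1.0))  # tuned gain is at least as good as doing nothing
```

The same search loop generalizes directly: any enhancement algorithm with tunable parameters and any quality objective (e.g. an intelligibility score) can be plugged into `enhance` and `objective`.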
Presenting the digital humanities as both a domain of practice and as a set of methodological approaches to be applied to corpus linguistics and translation, chapters in this volume provide a novel and original framework to triangulate research for pursuing both scientific and educational goals within the digital humanities. They also highlight more broadly the importance of data triangulation in corpus linguistics and translation studies. Putting forward practical applications for digging into data, this book is a detailed examination of how to integrate quantitative and qualitative approaches through case studies, sample analysis and practical examples.
The book collects contributions from well-established researchers at the interface between language and cognition. It provides an overview of the latest insights into this interdisciplinary field from the perspectives of natural language processing, computer science, psycholinguistics and cognitive science. One of the pioneers in cognitive natural language processing is Michael Zock, to whom this volume is dedicated. The structure of the book reflects his main research interests: lexicon and lexical analysis, semantics, language and speech generation, reading and writing technologies, language resources and language engineering. The book is a valuable reference work and authoritative information source, giving an overview on the field and describing the state of the art as well as future developments. It is intended for researchers and advanced students interested in the subject.
To date, the relation between multilingualism and the Semantic Web has not yet received enough attention in the research community. One major challenge for the Semantic Web community is to develop architectures, frameworks and systems that can help in overcoming national and language barriers, facilitating equal access to information produced in different cultures and languages. As such, this volume aims at documenting the state-of-the-art with regard to the vision of a Multilingual Semantic Web, in which semantic information will be accessible in and across multiple languages. The Multilingual Semantic Web as envisioned in this volume will support the following functionalities: (1) responding to information needs in any language with regard to semantically structured data available on the Semantic Web and Linked Open Data (LOD) cloud, (2) verbalizing and accessing semantically structured data, ontologies or other conceptualizations in multiple languages, (3) harmonizing, integrating, aggregating, comparing and repurposing semantically structured data across languages and (4) aligning and reconciling ontologies or other conceptualizations across languages. The volume is divided into three main sections: Principles, Methods and Applications. The section on "Principles" discusses models, architectures and methodologies that enrich the current Semantic Web architecture with features necessary to handle multiple languages. The section on "Methods" describes algorithms and approaches for solving key issues related to the construction of the Multilingual Semantic Web. The section on "Applications" describes the use of Multilingual Semantic Web based approaches in the context of several application domains. 
This volume is essential reading for all academic and industrial researchers who want to embark on this new research field at the intersection of various research topics, including the Semantic Web, Linked Data, natural language processing, computational linguistics, terminology and information retrieval. It will also be of great interest to practitioners who are interested in re-examining their existing infrastructure and methodologies for handling multiple languages in Web applications or information retrieval systems.
This book applies linguistic analysis to the poetry of Emeritus Professor Edwin Thumboo, a Singaporean poet and leading figure in Commonwealth literature. The work explores how the poet combines grammar and metaphor to create meaning, making the reader aware of the linguistic resources developed by Thumboo as the basis for his unique technique. The author approaches the poems from a functional linguistic perspective, investigating the multiple layers of meaning and metaphor that go into producing these highly textured, grammatically intricate verbal works of art. The approach is based on the Systemic Functional Theory, which aids the study of how the poet uses language (grammar) to craft his text in a playful way that reflects a love of the language. The multilingual and multicultural experiences of the poet are considered to have contributed to his uniquely creative use of language. This work demonstrates how the Systemic Functional Theory, with its emphasis on exploring the semogenic (meaning-making) power of language, provides the perspective we need to better understand poets' works as intentional acts of meaning. Readers will discover how the works of Edwin Thumboo illustrate well a point made by Barthes, who noted that "Bits of code, formulae, rhythmic models, fragments of social languages, etc. pass into the text and are redistributed within it, for there is always language before and around the text." With a focus on meaning, this functional analysis of poetry offers an insightful look at the linguistic basis of Edwin Thumboo's poetic technique. The work will appeal to scholars with an interest in linguistic analysis and poetry from the Commonwealth and new literature, and it can also be used to support courses on literary stylistics or text linguistics.
The areas of natural language processing and computational linguistics have continued to grow in recent years, driven by the demand to automatically process text and spoken data. With the processing power and techniques now available, research is scaling up from lab prototypes to real-world, proven applications. This book teaches the principles of natural language processing, first covering practical linguistics issues such as encoding and annotation schemes, defining words, tokens and parts of speech and morphology, as well as key concepts in machine learning, such as entropy, regression and classification, which are used throughout the book. It then details the language-processing functions involved, including part-of-speech tagging using rules and stochastic techniques, using Prolog to write phrase-structure grammars, syntactic formalisms and parsing techniques, semantics, predicate logic and lexical semantics, and analysis of discourse and applications in dialogue systems. A key feature of the book is the author's hands-on approach throughout, with sample code in Prolog and Perl, extensive exercises, and a detailed introduction to Prolog. The reader is supported with a companion website that contains teaching slides, programs and additional material. The second edition is a complete revision of the techniques presented in the book to reflect advances in the field: the author redesigned or updated all the chapters, added two new ones, and considerably expanded the sections on machine-learning techniques.
This book is a collection of papers by leading researchers in computational semantics. It presents a state-of-the-art overview of recent and current research in computational semantics, including descriptions of new methods for constructing and improving resources for semantic computation, such as WordNet, VerbNet, and semantically annotated corpora. It also presents new statistical methods in semantic computation, such as the application of distributional semantics in the compositional calculation of sentence meanings. Computing the meaning of sentences, texts, and spoken or texted dialogue is the ultimate challenge in natural language processing, and the key to a wide range of exciting applications. The breadth and depth of coverage of this book makes it suitable as a reference and overview of the state of the field for researchers in Computational Linguistics, Semantics, Computer Science, Cognitive Science, and Artificial Intelligence.
This work combines interdisciplinary knowledge and experience from research fields of psychology, linguistics, audio-processing, machine learning, and computer science. The work systematically explores a novel research topic devoted to automated modeling of personality expression from speech. For this aim, it introduces a novel personality assessment questionnaire and presents the results of extensive labeling sessions to annotate the speech data with personality assessments. It provides estimates of the Big 5 personality traits, i.e. openness, conscientiousness, extroversion, agreeableness, and neuroticism. Based on a database built on the questionnaire, the book presents models to tell apart different personality types or classes from speech automatically.
Research in Natural Language Processing (NLP) has rapidly advanced in recent years, resulting in exciting algorithms for sophisticated processing of text and speech in various languages. Much of this work focuses on English; in this book we address another group of interesting and challenging languages for NLP research: the Semitic languages. The Semitic group of languages includes Arabic (206 million native speakers), Amharic (27 million), Hebrew (7 million), Tigrinya (6.7 million), Syriac (1 million) and Maltese (419 thousand). Semitic languages exhibit unique morphological processes, challenging syntactic constructions and various other phenomena that are less prevalent in other natural languages. These challenges call for unique solutions, many of which are described in this book. The 13 chapters presented in this book bring together leading scientists from several universities and research institutes worldwide. While this book devotes some attention to cutting-edge algorithms and techniques, its primary purpose is a thorough explication of best practices in the field. Furthermore, every chapter describes how the techniques discussed apply to Semitic languages. The book covers both statistical approaches to NLP, which are dominant across various applications nowadays, and the more traditional rule-based approaches, which have proven useful for several other application domains. We hope that this book will provide a "one-stop shop" for all the requisite background and practical advice when building NLP applications for Semitic languages.
Editors Amy Neustein and Judith A. Markowitz have recruited a talented group of contributors to introduce the next generation of natural language technologies to resolve some of the most vexing natural-language problems that compromise the performance of speech systems today. This fourteen-chapter anthology consists of contributions from industry scientists and from academicians working at major universities in North America and Europe. They include researchers who have played a central role in DARPA-funded programs and developers who craft real-world solutions for corporations. This anthology is aimed at speech engineers, system developers, computer scientists, AI researchers, and others interested in utilizing natural-language technology in both spoken and text-based applications.
Collaboratively Constructed Language Resources (CCLRs) such as Wikipedia, Wiktionary, Linked Open Data, and various resources developed using crowdsourcing techniques such as Games with a Purpose and Mechanical Turk have substantially contributed to the research in natural language processing (NLP). Various NLP tasks utilize such resources to substitute for or supplement conventional lexical semantic resources and linguistically annotated corpora. These resources also provide an extensive body of texts from which valuable knowledge is mined. There are an increasing number of community efforts to link and maintain multiple linguistic resources. This book aims to offer comprehensive coverage of CCLR-related topics, including their construction, utilization in NLP tasks, and interlinkage and management. Various Bachelor/Master/Ph.D. programs in natural language processing, computational linguistics, and knowledge discovery can use this book both as the main text and as a supplementary reading. The book also provides a valuable reference guide for researchers and professionals for the above topics.
Complex systems in nature and society make use of information for the development of their internal organization and the control of their functional mechanisms. Alongside technical aspects of storing, transmitting and processing information, the various semantic aspects of information, such as meaning, sense, reference and function, play a decisive part in the analysis of such systems. With the aim of fostering a better understanding of semantic systems from an evolutionary and multidisciplinary perspective, this volume collects contributions by philosophers and natural scientists, linguists, information and computer scientists. They do not follow a single research paradigm; rather they shed, in a complementary way, new light upon some of the most important aspects of the evolution of semantic systems. Evolution of Semantic Systems is intended for researchers in philosophy, computer science, and the natural sciences who work on the analysis or development of semantic systems, ontologies, or similar complex information structures. In the eleven chapters, they will find a broad discussion of topics ranging from underlying universal principles to representation and processing aspects to paradigmatic examples.
This book discusses the Partially Observable Markov Decision Process (POMDP) framework applied in dialogue systems. It presents POMDP as a formal framework that represents uncertainty explicitly while supporting automated policy solving. The authors propose and implement an end-to-end learning approach for dialogue POMDP model components. Starting from scratch, they learn the state space, the transition model, the observation model and finally the reward model from unannotated and noisy dialogues. Together these form a significant set of contributions that can potentially inspire substantial further work. This concise manuscript is written in simple language and is full of illustrative examples, figures, and tables.
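To make the framework concrete, here is a minimal, hypothetical illustration (not the authors' system) of the core POMDP operation in a dialogue manager: the user's goal is a hidden state, and the system maintains a belief over it, updated by Bayes' rule from a transition model and a noisy observation model (e.g. ASR hypotheses). All states, probabilities, and names below are invented for the sketch:

```python
# Toy dialogue POMDP: the hidden state is the user's goal; the system
# tracks a belief distribution and updates it from noisy observations.
STATES = ["wants_a", "wants_b"]

# Transition model P(s' | s): user goals rarely change mid-dialogue.
T = {"wants_a": {"wants_a": 0.95, "wants_b": 0.05},
     "wants_b": {"wants_a": 0.05, "wants_b": 0.95}}

# Observation model P(o | s'): ASR output is unreliable.
O = {"wants_a": {"heard_a": 0.8, "heard_b": 0.2},
     "wants_b": {"heard_a": 0.3, "heard_b": 0.7}}

def belief_update(belief, observation):
    """Standard POMDP belief update: predict with T, correct with O, renormalize."""
    new_belief = {}
    for s2 in STATES:
        predicted = sum(belief[s1] * T[s1][s2] for s1 in STATES)
        new_belief[s2] = O[s2][observation] * predicted
    norm = sum(new_belief.values())
    return {s: p / norm for s, p in new_belief.items()}

belief = {"wants_a": 0.5, "wants_b": 0.5}   # uniform prior over goals
belief = belief_update(belief, "heard_a")   # one noisy observation arrives
print(round(belief["wants_a"], 3))          # → 0.727
```

The book's contribution is learning the components `T`, `O`, the state set, and a reward model from unannotated dialogues rather than hand-specifying them as above; a learned policy would then map beliefs to system actions.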
This book discusses the contribution of excitation source information to discriminating languages. The authors focus on the excitation source component of speech for enhancing language identification (LID) performance. Language-specific features are extracted in two different modes: (i) implicit processing of the linear prediction (LP) residual and (ii) explicit parameterization of the LP residual. The book discusses how, in the implicit processing approach, excitation source features are derived from the LP residual, the Hilbert envelope (magnitude) of the LP residual and the phase of the LP residual, while in the explicit parameterization approach the LP residual signal is processed in the spectral domain to extract the relevant language-specific features. The authors further extract source features from these modes, which are combined to enhance the performance of LID systems. The proposed excitation source features are also investigated for LID in noisy background environments. Each chapter provides the motivation for exploring a specific feature for the LID task, subsequently discusses the methods to extract that feature, and finally suggests appropriate models to capture the language-specific knowledge from the proposed features. Finally, the book discusses various combinations of spectral and source features, and the models desired to enhance the performance of LID systems.
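The LP residual mentioned above is simply the prediction error left over after a linear predictor has modeled the vocal-tract contribution. As a rough, hypothetical sketch (a first-order predictor on a synthetic signal, far simpler than the higher-order analysis used in real LID systems), the residual can be computed like this:

```python
# Minimal sketch of the "implicit" route: fit a first-order linear
# predictor to a speech-like signal and take the prediction error as the
# LP residual. Real systems use higher prediction orders and further
# process the residual (e.g. its Hilbert envelope and phase).
import math

# Toy "voiced speech" frame: a damped sinusoid standing in for real samples.
signal = [math.sin(0.3 * n) * math.exp(-0.01 * n) for n in range(200)]

# Order-1 LP coefficient by least squares: a = <s[n], s[n-1]> / <s[n-1], s[n-1]>
num = sum(signal[n] * signal[n - 1] for n in range(1, len(signal)))
den = sum(signal[n - 1] ** 2 for n in range(1, len(signal)))
a = num / den

# LP residual = actual sample minus its linear prediction.
residual = [signal[n] - a * signal[n - 1] for n in range(1, len(signal))]

# The predictor absorbs most of the signal's energy; what remains in the
# residual approximates the excitation-source component.
signal_energy = sum(x ** 2 for x in signal[1:])
residual_energy = sum(e ** 2 for e in residual)
print(residual_energy < signal_energy)  # → True
```

Features for LID would then be computed from `residual` (or from its Hilbert envelope and phase), rather than from the raw waveform.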
This book explores the various categories of speech variation and works to draw a line between linguistic and paralinguistic phenomena of speech. Paralinguistic contrast is crucial to human speech but has proven to be one of the most difficult aspects to handle in speech systems. In the quest for solutions in speech technology and the speech sciences, this book narrows the gap between speech technologists and phoneticians and emphasizes the efforts required to accomplish the goal of paralinguistic control in speech technology applications, as well as the acute need for a multidisciplinary categorization system. This interdisciplinary work on paralanguage will serve not only as a source of information but also as a theoretical model for linguists, sociologists, psychologists, phoneticians and speech researchers.
In order to exchange knowledge, humans need to share a common lexicon of words as well as to access the world models underlying that lexicon. What is a natural process for a human turns out to be an extremely hard task for a machine: computers can't represent knowledge as effectively as humans do, which hampers, for example, meaning disambiguation and communication. Applied ontologies and NLP have been developed to face these challenges. Integrating ontologies with (possibly multilingual) lexical resources is an essential requirement to make human language understandable by machines, and also to enable interoperability and computability across information systems and, ultimately, in the Web. This book explores recent advances in the integration of ontologies and lexical resources, including questions such as building the required infrastructure (e.g., the Semantic Web) and different formalisms, methods and platforms for eliciting, analyzing and encoding knowledge contents (e.g., multimedia, emotions, events, etc.). The contributors look towards next-generation technologies, shifting the focus from the state of the art to the future of Ontologies and Lexical Resources. This work will be of interest to research scientists, graduate students, and professionals in the fields of knowledge engineering, computational linguistics, and semantic technologies.
The Lexicon provides an introduction to the study of words, their main properties, and how we use them to create meaning. It offers a detailed description of the organizing principles of the lexicon, and of the categories used to classify a wide range of lexical phenomena, including polysemy, meaning variation in composition, and the interplay with ontology, syntax, and pragmatics. Elisabetta Jezek uses empirical data from digitalized corpora and speakers' judgements, combined with the formalisms developed in the field of general and theoretical linguistics, to propose representations for each of these phenomena. The key feature of the book is that it merges theoretical accounts with lexicographic approaches and computational insights. Its clear structure and accessible approach make The Lexicon an ideal textbook for all students of linguistics (theoretical, applied, and computational) and a valuable resource for scholars and students of language in the fields of cognitive science and philosophy.
The book provides an overview of more than a decade of joint R&D efforts in the Low Countries on HLT for Dutch. It not only presents the state of the art of HLT for Dutch in the areas covered, but, even more importantly, a description of the resources (data and tools) for Dutch that have been created and are now available for both academia and industry worldwide. The contributions cover many areas of human language technology (for Dutch): corpus collection (including IPR issues) and building (in particular one corpus aiming at a collection of 500M word tokens), lexicology, anaphora resolution, a semantic network, parsing technology, speech recognition, machine translation, text (summaries) generation, web mining, information extraction, and text to speech, to name the most important ones. The book also shows how a medium-sized language community (spanning two territories) can create a digital language infrastructure (resources, tools, etc.) as a basis for subsequent R&D. At the same time, it bundles contributions of almost all the HLT research groups in Flanders and the Netherlands, and hence offers a view of their recent research activities. Targeted readers are mainly researchers in human language technology, in particular those focusing on Dutch: researchers active in larger networks such as CLARIN, META-NET and FLaReNet, and participating in conferences such as ACL, EACL, NAACL, COLING, RANLP, CICling, LREC, CLIN and DIR (both in the Low Countries), InterSpeech, ASRU, ICASSP, ISCA, EUSIPCO, CLEF, TREC, etc. In addition, some chapters are interesting for human language technology policy makers and even for science policy makers in general.
This collection of papers takes linguists to the leading edge of techniques in generative lexicon theory, the linguistic composition methodology that arose from the imperative to provide a compositional semantics for the contextual modifications in meaning that emerge in real linguistic usage. Today's growing shift towards distributed compositional analyses evinces the applicability of GL theory, and the contributions to this volume, presented at three international workshops (GL-2003, GL-2005 and GL-2007) address the relationship between compositionality in language and the mechanisms of selection in grammar that are necessary to maintain this property. The core unresolved issues in compositionality, relating to the interpretation of context and the mechanisms of selection, are treated from varying perspectives within GL theory, including its basic theoretical mechanisms and its analytical viewpoint on linguistic phenomena. |