The book provides an overview of more than a decade of joint R&D efforts in the Low Countries on HLT for Dutch. It not only presents the state of the art of HLT for Dutch in the areas covered but, even more importantly, describes the resources (data and tools) for Dutch that have been created and are now available for both academia and industry worldwide. The contributions cover many areas of human language technology (for Dutch): corpus collection (including IPR issues) and corpus building (in particular one corpus aiming at a collection of 500M word tokens), lexicology, anaphora resolution, a semantic network, parsing technology, speech recognition, machine translation, text (summary) generation, web mining, information extraction, and text-to-speech, to name the most important ones. The book also shows how a medium-sized language community (spanning two territories) can create a digital language infrastructure (resources, tools, etc.) as a basis for subsequent R&D. At the same time, it bundles contributions from almost all the HLT research groups in Flanders and the Netherlands, and hence offers a view of their recent research activities. The targeted readers are mainly researchers in human language technology, in particular those focusing on Dutch: researchers active in larger networks such as CLARIN, META-NET and FLaReNet, and those participating in conferences such as ACL, EACL, NAACL, COLING, RANLP, CICling, LREC, CLIN and DIR (both in the Low Countries), InterSpeech, ASRU, ICASSP, ISCA, EUSIPCO, CLEF, TREC, etc. In addition, some chapters will interest human language technology policy makers and even science policy makers in general.
The research described in this book shows that conversation analysis can effectively model dialogue. Specifically, this work shows that the multidisciplinary field of communicative ICALL may greatly benefit from including Conversation Analysis. As a consequence, this research makes several contributions to the related research disciplines, such as conversation analysis, second-language acquisition, computer-mediated communication, artificial intelligence, and dialogue systems. The book will be of value for researchers and engineers in the areas of computational linguistics, intelligent assistants, and conversational interfaces.
Semantic fields are lexically coherent - the words they contain co-occur in texts. In this book the authors introduce and define semantic domains, a computational model for lexical semantics inspired by the theory of semantic fields. Semantic domains allow us to exploit domain features for texts, terms and concepts, and they can significantly boost the performance of natural-language processing systems. Semantic domains can be derived from existing lexical resources or can be acquired from corpora in an unsupervised manner. They also have the property of interlinguality, and they can be used to relate terms in different languages in multilingual application scenarios. The authors give a comprehensive explanation of the computational model, with detailed chapters on semantic domains, domain models, and applications of the technique in text categorization, word sense disambiguation, and cross-language text categorization. This book is suitable for researchers and graduate students in computational linguistics.
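The core intuition — that domain membership can be read off lexical co-occurrence — can be sketched in a few lines. The mini domain word lists below are hypothetical stand-ins for illustration only, not the resources described in the book:

```python
from collections import Counter

# Hypothetical domain word lists (illustrative, not from the book's resources)
DOMAINS = {
    "medicine": {"doctor", "hospital", "patient", "surgery"},
    "sport": {"match", "team", "goal", "player"},
}

def domain_scores(text):
    """Score a text against each domain by counting overlapping tokens."""
    counts = Counter(text.lower().split())
    return {d: sum(counts[w] for w in words) for d, words in DOMAINS.items()}

print(domain_scores("the doctor met the patient before surgery"))
# → {'medicine': 3, 'sport': 0}
```

Real domain models are of course acquired from large corpora or lexical resources rather than hand-listed, but the scoring principle is the same.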
Solving linguistic problems not infrequently reduces to carrying out tasks that are computationally complex and therefore require automation. In such situations, the difference between having and not having computational tools to handle the tasks is not a matter of economy of time and effort, but may amount to the difference between finding and not finding a solution at all. The book is an introduction to machine-aided linguistic discovery, a novel research area, arguing for the fruitfulness of the computational approach by presenting a basic conceptual apparatus and several intelligent discovery programmes. One of the systems models the fundamental Saussurean notion of system; thus, for the first time, almost a century after the introduction of this concept and of structuralism in general, linguists can adequately handle this recurring computationally complex task. Another system models the problem of searching for Greenbergian language universals and is capable of stating its discoveries in an intelligible form, viz. a comprehensive English-language text, thus constituting the first computer program to generate a whole scientific article. Yet another system detects potential inconsistencies in genetic language classifications. The programmes are applied, with noteworthy results, to substantial problems from diverse linguistic disciplines such as structural semantics, phonology, typology and historical linguistics.
Understanding any communication depends on the listener or reader recognizing that some words refer to what has already been said or written (his, its, he, there, etc.). This mode of reference, anaphora, involves complicated cognitive and syntactic processes, which people usually perform unerringly, but which present formidable problems for the linguist and cognitive scientist trying to explain precisely how comprehension is achieved. Anaphora is thus a central research focus in syntactic and semantic theory, while understanding and modelling its operation in discourse are important targets in computational linguistics and cognitive science. Yan Huang provides an extensive and accessible overview of the major contemporary issues surrounding anaphora and gives a critical survey of the many and diverse contemporary approaches to it. He provides by far the fullest cross-linguistic account yet published: Dr Huang's survey and analysis are based on a rich collection of data drawn from around 450 of the world's languages.
An increasing number of contributions have appeared in recent years on the subject of Audiovisual Translation (AVT), particularly in relation to dubbing and subtitling. The broad scope of this branch of Translation Studies is challenging because it brings together diverse disciplines, including film studies, translatology, semiotics, linguistics, applied linguistics, cognitive psychology, technology and ICT. This volume addresses issues relating to AVT research and didactics. The first section is dedicated to theoretical aspects in order to stimulate further debate and encourage progress in research-informed teaching. The second section focuses on a less developed area of research in the field of AVT: its potential use in foreign language pedagogy. This collection of articles is intended to create a discourse on new directions in AVT and foreign language learning. The book begins with reflections on wider methodological issues, advances to a proposed model of analysis for colloquial speech, touches on more 'niche' aspects of AVT (e.g. surtitling), progresses to didactic applications in foreign language pedagogy and learning at both linguistic and cultural levels, and concludes with a practical proposal for the use of AVT in foreign language classes. An interview with a professional subtitler draws the volume to a close.
This volume, composed mainly of papers given at the 1999 conferences of the Forum for German Language Studies (FGLS) at Kent and the Conference of University Teachers of German (CUTG) at Keele, is devoted to differential yet synergetic treatments of the German language. It includes corpus-lexicographical, computational, rigorously phonological, historical/dialectal, comparative, semiotic, acquisitional and pedagogical contributions. In all, it offers a variety of approaches, from the rigorously 'pure' and formal to the applied, often feeding off each other to focus on various aspects of the German language.
This book presents a theoretical study on aspect in Chinese, including both situation and viewpoint aspects. Unlike previous studies, which have largely classified linguistic units into different situation types, this study defines a set of ontological event types that are conceptually universal and on the basis of which different languages employ various linguistic devices to describe such events. To do so, it focuses on a particular component of events, namely the viewpoint aspect. It includes and discusses a wealth of examples to show how such ontological events are realized in Chinese. In addition, the study discusses how Chinese modal verbs and adverbs affect the distribution of viewpoint aspects associated with certain situation types. In turn, the book demonstrates how the proposed linguistic theory can be used in a computational context. Simply identifying events in terms of the verbs and their arguments is insufficient for real situations such as understanding the factivity and the logical/temporal relations between events. The proposed framework offers the possibility of analyzing events in Chinese text, yielding deep semantic information.
This book addresses the research, analysis, and description of the methods and processes that are used in the annotation and processing of language corpora in advanced, semi-advanced, and non-advanced languages. It provides the background information and empirical data needed to understand the nature and depth of problems related to corpus annotation and text processing and shows readers how the linguistic elements found in texts are analyzed and applied to develop language technology systems and devices. As such, it offers valuable insights for researchers, educators, and students of linguistics and language technology.
There is hardly any aspect of verbal communication that has not been investigated using the analytical tools developed by corpus linguists. This is especially true in the case of English, which commands a vast international research community, and corpora are becoming increasingly specialised, as they account for areas of language use shaped by specific sociolectal (register, genre, variety) and speaker (gender, profession, status) variables. Corpus analysis is driven by a common interest in 'linguistic evidence', viewed as a source of insights into language phenomena or of lexical, semantic and contrastive data for subsequent applications. Among the latter, pedagogical settings are highly prominent, as corpora can be used to monitor classroom output, raise learner awareness and inform teaching materials. The eighteen chapters in this volume focus on contexts where English is employed by specialists in the professions or academia and debate some of the challenges arising from the complex relationship between linguistic theory, data-mining tools and statistical methods.
This book introduces formal semantics techniques for a natural language processing audience. Methods discussed involve: (i) the denotational techniques used in model-theoretic semantics, which make it possible to determine whether a linguistic expression is true or false with respect to some model of the way things happen to be; and (ii) stages of interpretation, i.e., ways to arrive at meanings by evaluating and converting source linguistic expressions, possibly with respect to contexts, into output (logical) forms that could be used with (i). The book demonstrates that the methods allow wide coverage without compromising the quality of semantic analysis. Access to unrestricted, robust and accurate semantic analysis is widely regarded as an essential component for improving natural language processing tasks, such as: recognizing textual entailment, information extraction, summarization, automatic reply, and machine translation.
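The denotational technique in (i) can be illustrated with a toy model-theoretic evaluator: a sentence's logical form is checked for truth against a model. The model, entities and predicates below are invented for illustration, not examples from the book:

```python
# A toy model: a domain of entities and the extensions of two predicates.
# All names here are illustrative assumptions.
MODEL = {
    "entities": {"fido", "rex"},
    "dog": {"fido", "rex"},
    "barks": {"fido"},
}

def evaluate(expr):
    """Evaluate a simple logical form against MODEL, returning True/False."""
    op = expr[0]
    if op == "pred":          # ("pred", predicate_name, entity)
        return expr[2] in MODEL[expr[1]]
    if op == "and":           # ("and", left, right)
        return evaluate(expr[1]) and evaluate(expr[2])
    if op == "exists":        # ("exists", fn mapping an entity to a body)
        return any(evaluate(expr[1](e)) for e in MODEL["entities"])
    raise ValueError(f"unknown operator: {op}")

# "some dog barks": exists x. dog(x) and barks(x)
some_dog_barks = ("exists",
                  lambda e: ("and", ("pred", "dog", e), ("pred", "barks", e)))
print(evaluate(some_dog_barks))  # → True
```

Stage (ii) — converting a source sentence into such a logical form — is the parsing-and-interpretation pipeline the book builds up separately.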
Current language technology is dominated by approaches that either enumerate a large set of rules or rely on a large amount of manually labelled data. The creation of both is time-consuming and expensive, which is commonly thought to be the reason why automated natural language understanding has still not made its way into "real-life" applications. This book sets an ambitious goal: to shift the development of language processing systems to a much more automated setting than in previous work. A new approach is defined: what if computers analysed large samples of language data on their own, identifying structural regularities that perform the necessary abstractions and generalisations in order to better understand language in the process? The target audience are academics at all levels (undergraduate and graduate students, lecturers and professors) working in the fields of natural language processing and computational linguistics, as well as natural language engineers seeking to improve their systems.
Grammars of natural languages can be expressed as mathematical objects, similar to computer programs. Such a formal presentation of grammars facilitates mathematical reasoning with grammars (and the languages they denote), as well as computational implementation of grammar processors. This book presents one of the most commonly used grammatical formalisms, Unification Grammars, which underlies contemporary linguistic theories such as Lexical-Functional Grammar (LFG) and Head-driven Phrase Structure Grammar (HPSG). The book provides a robust and rigorous exposition of the formalism that is both mathematically well-founded and linguistically motivated. While the material is presented formally, and much of the text is mathematically oriented, a core chapter of the book addresses linguistic applications and the implementation of several linguistic insights in unification grammars. Dozens of examples and numerous exercises (many with solutions) illustrate key points. Graduate students and researchers in both computer science and linguistics will find this book a valuable resource.
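The formalism's central operation — unifying two feature structures into their most specific compatible merge, or failing on a clash — can be sketched with nested dicts. This is a minimal sketch of the general idea, not the book's formal definition (which also covers reentrancy and typed structures):

```python
def unify(a, b):
    """Unify two feature structures (nested dicts or atomic values).

    Returns the merged structure, or None on a feature clash.
    """
    if a is None or b is None:
        return None
    if not isinstance(a, dict) or not isinstance(b, dict):
        return a if a == b else None  # atomic values must be identical
    result = dict(a)
    for key, value in b.items():
        if key in result:
            merged = unify(result[key], value)
            if merged is None:
                return None  # clash on this feature
            result[key] = merged
        else:
            result[key] = value
    return result

np_struct = {"cat": "NP", "agr": {"num": "sg", "per": "3"}}
verb_constraint = {"agr": {"num": "sg"}}
print(unify(np_struct, verb_constraint))
print(unify({"agr": {"num": "sg"}}, {"agr": {"num": "pl"}}))  # → None (clash)
```

In LFG and HPSG, exactly this kind of unification enforces agreement and subcategorization: a singular subject unifies with a singular-demanding verb, while a number clash correctly blocks the parse.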
This book analyses diverse public discourses to investigate how wealth inequality has been portrayed in the British media from the time of the Second World War to the present day. Using a variety of corpus-assisted methods of discourse analysis, chapters present an historicized perspective on how the mass media have helped to make sharply increased wealth inequality seem perfectly normal. Print, radio and online media sources are interrogated using methodologies grounded in critical discourse analysis, critical stylistics and corpus linguistics in order to examine the influence of the media on the British electorate, who have passively consented to the emergence of an even less egalitarian Britain. Covering topics such as Second World War propaganda, the 'Change4Life' anti-obesity campaign and newspaper, parliamentary and TV news programme attitudes to poverty and austerity, this book will be of value to all those interested in the mass media's contribution to the entrenched inequality in modern Britain.
When something is in focus, light falls on it from different angles. The lexicon can be viewed from different sides. Six views are represented in this volume: a cognitivist view of vagueness and lexicalization, a psycholinguistic view of lexical
What is a language? What do scientific grammars tell us about the structure of individual languages and human language in general? What kind of science is linguistics? These and other questions are the subject of Ryan M. Nefdt's Language, Science, and Structure. Linguistics presents a unique and challenging subject matter for the philosophy of science. As a special science, its formalisation and naturalisation inspired what many consider to be a scientific revolution in the study of mind and language. Yet radical internal theory change, multiple competing frameworks, and issues of modelling and realism have largely gone unaddressed in the field. Nefdt develops a structural realist perspective on the philosophy of linguistics which aims to confront the aforementioned topics in new ways while expanding the outlook toward new scientific connections and novel philosophical insights. On this view, languages are real patterns which emerge from complex biological systems. Nefdt's exploration of this novel view will be especially valuable to those working in formal and computational linguistics, cognitive science, and the philosophies of science, mathematics, and language.
Based on years of instruction and field expertise, this volume offers the necessary tools to understand all scientific, computational, and technological aspects of speech processing. The book emphasizes mathematical abstraction, the dynamics of the speech process, and the engineering optimization practices that promote effective problem solving in this area of research, and covers many years of the authors' personal research on speech processing. Speech Processing helps build valuable analytical skills to meet future challenges in scientific and technological advances in the field and considers the complex transition from human speech processing to computer speech processing.
This book presents a method of linking the ordered structure of the cosmos with human thoughts: the theory of language holography. In the view presented here, the cosmos is in harmony with the human body and language, and human thoughts are holographic with the cosmos at the level of language. In a word, the holographic relation is the bridge by means of which Guanlian Qian connects the cosmos, humans, and language. This is a vitally important contribution to linguistic and philosophical studies that cannot be ignored. The book has two main focus areas: outer language holography and inner language holography. These two areas constitute the core of the dynamic and holistic view put forward in the theory of language holography. The book's main properties can be summarized as follows. First and foremost, it is a book created in toto by a Chinese scholar devoted to pragmatics, theoretical linguistics, and philosophy of language. Secondly, the book was accepted by a top Chinese publisher and was reprinted the following year, reflecting its value and appeal. Thirdly, in terms of writing style, the book is characterized by succinctness and logic. As a result, it reads fluidly and smoothly without redundancies, which is not that common in linguistic or even philosophical works. Lastly, as stated by the author in the introduction, "Creation is the development of previous capacities, but it is also the generation of new ones"; this book can be said to put this concept into practice. Overall, the book offers a unique resource to readers around the world who want to know more about the truly original and innovative studies of language in Chinese academia.
The idea that the expression of radical beliefs is a predictor of future acts of political violence has been a central tenet of counter-extremism over the last two decades. Not only has this imposed a duty upon doctors, lecturers and teachers to inform on the radical beliefs of their patients and students but, as this book argues, it is also a fundamentally flawed concept. Informed by his own experience with the UK's Prevent programme while teaching in a Muslim community, Rob Faure Walker explores the linguistic emergence of 'extremism' in political discourse and the potentially damaging generative effect of this language. Taking a new approach which combines critical discourse analysis with critical realism, this book shows how the fear of being labelled an 'extremist' has resulted in counter-terrorism strategies which actually undermine moderating mechanisms in a democracy. Analysing the generative mechanisms by which the language of counter-extremism might actually promote violence, Faure Walker explains how understanding the potentially oppressive properties of language can help us transcend them. The result is an immanent critique of the most pernicious aspects of the global War on Terror, those that are embedded in our everyday language and political discourse. Drawing on the author's own successful lobbying activities against counter-extremism, this book presents a model for how discourse analysis and critical realism can and should engage with the political and how this will effect meaningful change.
The Language of Design: Theory and Computation articulates the theory that there is a language of design. This theory claims that any language of design consists of a set of symbols, a set of relations between the symbols, features that key the expressiveness of symbols, and a set of reality-producing information-processing behaviors acting on the language. Drawing upon insights from computational language processing, the language of design is modeled computationally through latent semantic analysis (LSA), lexical chain analysis (LCA), and sentiment analysis (SA). The statistical co-occurrence of semantics (LSA), semantic relations (LCA), and semantic modifiers (SA) in design text is used to illustrate how the reality-producing effect of language is itself an enactment of design. This insight leads to a new understanding of the connections between creative behaviors such as design and their linguistic properties. The computation of the language of design makes it possible to take direct measurements of creative behaviors which are distributed across social spaces and mediated through language. The book demonstrates how machine understanding of design texts, based on computation over the language of design, yields practical applications for design management such as modeling teamwork, characterizing the formation of a design concept, and understanding design rationale. The Language of Design: Theory and Computation is a unique text for postgraduates and researchers studying design theory and management, and allied disciplines such as artificial intelligence, organizational behavior, and human factors and ergonomics.
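Of the three techniques, LSA is the easiest to sketch: a truncated SVD of a term-document matrix projects documents into a low-rank latent space where co-occurrence patterns, rather than exact word overlap, drive similarity. The toy counts below are invented for illustration and are not data from the book:

```python
import numpy as np

# Toy term-document matrix: rows = terms, columns = design documents.
# Counts are invented for illustration.
X = np.array([
    [2, 1, 0, 0],   # "concept"
    [1, 2, 0, 1],   # "sketch"
    [0, 0, 3, 1],   # "user"
    [0, 1, 2, 2],   # "requirement"
], dtype=float)

# LSA: keep the top-k singular components.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T   # one k-dimensional vector per document

def cosine(a, b):
    """Cosine similarity between two latent document vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Documents 0 and 1 share vocabulary, so their latent vectors are close;
# document 2 uses a different vocabulary block.
print(cosine(doc_vecs[0], doc_vecs[1]))
```

LCA and SA would be layered on top of such vectors: chains link semantically related terms across a text, and sentiment scores act as modifiers on them.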
This book sheds new light on corpus-assisted translation pedagogy, an intersection of three distinct but cognate disciplines: corpus linguistics, translation and pedagogy. By taking an innovative and empirical approach to translation teaching, the study utilizes mixed methods, including translation experiments, surveys and in-depth focus groups. The results demonstrate the unique advantages of using corpora for translation teaching purposes, while at the same time calling attention to possible pitfalls. This book enriches our understanding of corpus application in the setting of translation between Chinese and English, two languages which are distinctly different from one another. Readers will also discover new horizons in this burgeoning and interdisciplinary field of research. The book appeals to a broad readership: from scholars and researchers interested in translation technology as a way to widen the scope of translation studies, to translation trainers in search of effective teaching approaches, to a growing number of cross-disciplinary postgraduate students longing to improve their translation skills and competence.
Computers offer new perspectives in the study of language, allowing us to see phenomena that previously remained obscure because of the limitations of our vantage points. It is not uncommon for computers to be likened to the telescope, or microscope, in this respect. In this pioneering computer-assisted study of translation, Dorothy Kenny suggests another image, that of the kaleidoscope: playful changes of perspective using corpus-processing software allow textual patterns to come into focus and then recede again as others take their place. And against the background of repeated patterns in a corpus, creative uses of language gain a particular prominence. In Lexis and Creativity in Translation, Kenny monitors the translation of creative source-text word forms and collocations uncovered in a specially constructed German-English parallel corpus of literary texts. Using an abundance of examples, she reveals evidence of both normalization and ingenious creativity in translation. Her discussion of lexical creativity draws on insights from traditional morphology, structural semantics and, most notably, neo-Firthian corpus linguistics, suggesting that rumours of the demise of linguistics in translation studies are greatly exaggerated. Lexis and Creativity in Translation is essential reading for anyone interested in corpus linguistics and its impact so far on translation studies. The book also offers theoretical and practical guidance for researchers who wish to conduct their own corpus-based investigations of translation. No previous knowledge of German, corpus linguistics or computing is assumed.
This book covers theoretical work, applications, approaches, and techniques for computational models of information and its presentation by language (artificial, human, or natural in other ways). Computational and technological developments that incorporate natural language are proliferating. Adequate coverage encounters difficult problems related to ambiguities and dependency on context and agents (humans or computational systems). The goal is to promote computational systems of intelligent natural language processing and related models of computation, language, thought, mental states, reasoning, and other cognitive processes.