Solving linguistic problems not infrequently reduces to carrying out tasks that are computationally complex and therefore require automation. In such situations, the difference between having and not having computational tools to handle the tasks is not a matter of economy of time and effort, but may amount to the difference between finding and not finding a solution at all. The book is an introduction to machine-aided linguistic discovery, a novel research area, arguing for the fruitfulness of the computational approach by presenting a basic conceptual apparatus and several intelligent discovery programmes. One of the systems models the fundamental Saussurean notion of system; thus, almost a century after the introduction of this concept and of structuralism in general, linguists are for the first time able to handle this recurring, computationally complex task adequately. Another system models the problem of searching for Greenbergian language universals and is capable of stating its discoveries in an intelligible form, viz. a comprehensive English-language text, thus constituting the first computer programme to generate a whole scientific article. Yet another system detects potential inconsistencies in genetic language classifications. The programmes are applied, with noteworthy results, to substantial problems from diverse linguistic disciplines such as structural semantics, phonology, typology and historical linguistics.
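To give a concrete, deliberately simplified flavour of the kind of discovery task such programmes automate, the sketch below mines exceptionless implicational universals (in the Greenbergian sense) from a toy language-feature table; the data, feature names, and method are invented for illustration and are not taken from the book.

```python
# Purely illustrative sketch, not the book's programmes: find exceptionless
# implicational universals ("every language with feature A also has feature B")
# in a toy table of invented languages and typological features.

LANGUAGES = {
    "lang1": {"VO", "prepositions", "noun-adjective"},
    "lang2": {"VO", "prepositions"},
    "lang3": {"OV", "postpositions", "adjective-noun"},
    "lang4": {"OV", "postpositions"},
}

def implicational_universals(languages):
    """Return pairs (a, b) where every language having a also has b."""
    features = set().union(*languages.values())
    found = []
    for a in features:
        havers = [fs for fs in languages.values() if a in fs]
        for b in features - {a}:
            if havers and all(b in fs for fs in havers):
                found.append((a, b))
    return found

for a, b in sorted(implicational_universals(LANGUAGES)):
    print(f"If a language has {a}, then it has {b}.")
```

On this toy data the sketch recovers, for example, "If a language has VO, then it has prepositions", the shape of claim a Greenbergian universal takes.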
An increasing number of contributions have appeared in recent years on the subject of Audiovisual Translation (AVT), particularly in relation to dubbing and subtitling. The broad scope of this branch of Translation Studies is challenging because it brings together diverse disciplines, including film studies, translatology, semiotics, linguistics, applied linguistics, cognitive psychology, technology and ICT. This volume addresses issues relating to AVT research and didactics. The first section is dedicated to theoretical aspects in order to stimulate further debate and encourage progress in research-informed teaching. The second section focuses on a less developed area of research in the field of AVT: its potential use in foreign language pedagogy. This collection of articles is intended to create a discourse on new directions in AVT and foreign language learning. The book begins with reflections on wider methodological issues, advances to a proposed model of analysis for colloquial speech, touches on more 'niche' aspects of AVT (e.g. surtitling), progresses to didactic applications in foreign language pedagogy and learning at both linguistic and cultural levels, and concludes with a practical proposal for the use of AVT in foreign language classes. An interview with a professional subtitler draws the volume to a close.
This volume, composed mainly of papers given at the 1999 conferences of the Forum for German Language Studies (FGLS) at Kent and the Conference of University Teachers of German (CUTG) at Keele, is devoted to differential yet synergetic treatments of the German language. It includes corpus-lexicographical, computational, rigorously phonological, historical/dialectal, comparative, semiotic, acquisitional and pedagogical contributions. In all, the volume presents a variety of approaches, from the rigorously 'pure' and formal to the applied, often feeding off each other to focus on various aspects of the German language.
This book presents a theoretical study of aspect in Chinese, covering both situation and viewpoint aspect. Unlike previous studies, which have largely classified linguistic units into different situation types, this study defines a set of conceptually universal ontological event types, on the basis of which different languages employ various linguistic devices to describe such events. In doing so, it focuses on a particular component of events, namely viewpoint aspect, and includes a wealth of examples showing how such ontological events are realized in Chinese. In addition, the study discusses how Chinese modal verbs and adverbs affect the distribution of viewpoint aspects associated with certain situation types. In turn, the book demonstrates how the proposed linguistic theory can be used in a computational context: simply identifying events in terms of verbs and their arguments is insufficient for real-world tasks such as understanding factivity and the logical/temporal relations between events. The proposed framework offers the possibility of analyzing events in Chinese text and yielding deep semantic information.
This book addresses the research, analysis, and description of the methods and processes that are used in the annotation and processing of language corpora in advanced, semi-advanced, and non-advanced languages. It provides the background information and empirical data needed to understand the nature and depth of problems related to corpus annotation and text processing and shows readers how the linguistic elements found in texts are analyzed and applied to develop language technology systems and devices. As such, it offers valuable insights for researchers, educators, and students of linguistics and language technology.
There is hardly any aspect of verbal communication that has not been investigated using the analytical tools developed by corpus linguists. This is especially true in the case of English, which commands a vast international research community, and corpora are becoming increasingly specialised, as they account for areas of language use shaped by specific sociolectal (register, genre, variety) and speaker (gender, profession, status) variables. Corpus analysis is driven by a common interest in 'linguistic evidence', viewed as a source of insights into language phenomena or of lexical, semantic and contrastive data for subsequent applications. Among the latter, pedagogical settings are highly prominent, as corpora can be used to monitor classroom output, raise learner awareness and inform teaching materials. The eighteen chapters in this volume focus on contexts where English is employed by specialists in the professions or academia and debate some of the challenges arising from the complex relationship between linguistic theory, data-mining tools and statistical methods.
This book introduces formal semantics techniques for a natural language processing audience. Methods discussed involve: (i) the denotational techniques used in model-theoretic semantics, which make it possible to determine whether a linguistic expression is true or false with respect to some model of the way things happen to be; and (ii) stages of interpretation, i.e., ways to arrive at meanings by evaluating and converting source linguistic expressions, possibly with respect to contexts, into output (logical) forms that could be used with (i). The book demonstrates that the methods allow wide coverage without compromising the quality of semantic analysis. Access to unrestricted, robust and accurate semantic analysis is widely regarded as an essential component for improving natural language processing tasks, such as: recognizing textual entailment, information extraction, summarization, automatic reply, and machine translation.
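As a minimal sketch of the denotational idea in (i), determining truth against an explicit model, consider the toy interpreter below; the model, predicates, and logical-form syntax are invented for illustration and do not reproduce the book's formalism.

```python
# Toy model-theoretic evaluation: a logical form, written as nested tuples,
# is checked for truth against an explicit model. Illustrative only.

# A model: a domain of entities plus interpretations of predicates.
MODEL = {
    "domain": {"fido", "felix"},
    "dog": {"fido"},
    "cat": {"felix"},
    "chases": {("fido", "felix")},
}

def substitute(formula, var, entity):
    """Replace variable var with entity throughout a formula."""
    if isinstance(formula, str):
        return entity if formula == var else formula
    return tuple(substitute(part, var, entity) for part in formula)

def evaluate(formula, model):
    """Evaluate a logical form against a model."""
    op = formula[0]
    if op == "some":                        # ("some", var, restrictor, body)
        _, var, restr, body = formula
        return any(evaluate(substitute(restr, var, e), model) and
                   evaluate(substitute(body, var, e), model)
                   for e in model["domain"])
    if op == "and":
        return all(evaluate(part, model) for part in formula[1:])
    args = formula[1:]                      # atomic, e.g. ("dog", "fido")
    denotation = model[op]
    return args[0] in denotation if len(args) == 1 else args in denotation

# "Some dog chases Felix": True, because the model supplies a witness.
print(evaluate(("some", "x", ("dog", "x"), ("chases", "x", "felix")), MODEL))
```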
Current language technology is dominated by approaches that either enumerate a large set of rules or rely on large amounts of manually labelled data. The creation of both is time-consuming and expensive, which is commonly thought to be the reason why automated natural language understanding has not yet made its way into "real-life" applications. This book sets an ambitious goal: to shift the development of language processing systems to a much more automated setting than in previous work. It defines a new approach: what if computers analysed large samples of language data on their own, identifying structural regularities and performing the abstractions and generalisations needed to better understand language in the process? The target audience is academics at all levels (undergraduate and graduate students, lecturers and professors) working in natural language processing and computational linguistics, as well as natural language engineers seeking to improve their systems.
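In that spirit, here is a deliberately tiny, hypothetical illustration of extracting one kind of structural regularity from raw text alone, ranking adjacent word pairs by pointwise mutual information so that recurrent combinations stand out against chance; it is a toy stand-in, not the book's method.

```python
# Unsupervised discovery of collocations from raw text via pointwise mutual
# information (PMI). Toy corpus and method, for illustration only.
import math
from collections import Counter

text = ("natural language processing turns raw language data into structure "
        "natural language data is abundant raw data is cheap").split()

unigrams = Counter(text)
bigrams = Counter(zip(text, text[1:]))
n = len(text)

def pmi(w1, w2):
    """log of p(w1, w2) / (p(w1) * p(w2)) over adjacent word pairs."""
    return math.log((bigrams[(w1, w2)] / (n - 1)) /
                    ((unigrams[w1] / n) * (unigrams[w2] / n)))

# Rank adjacent pairs by how much more often they co-occur than chance.
for pair in sorted(bigrams, key=lambda p: -pmi(*p))[:3]:
    print(pair, round(pmi(*pair), 2))
```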
Grammars of natural languages can be expressed as mathematical objects, similar to computer programs. Such a formal presentation of grammars facilitates mathematical reasoning with grammars (and the languages they denote), as well as computational implementation of grammar processors. This book presents one of the most commonly used grammatical formalisms, Unification Grammars, which underlies contemporary linguistic theories such as Lexical-Functional Grammar (LFG) and Head-driven Phrase Structure Grammar (HPSG). The book provides a robust and rigorous exposition of the formalism that is both mathematically well-founded and linguistically motivated. While the material is presented formally, and much of the text is mathematically oriented, a core chapter of the book addresses linguistic applications and the implementation of several linguistic insights in unification grammars. Dozens of examples and numerous exercises (many with solutions) illustrate key points. Graduate students and researchers in both computer science and linguistics will find this book a valuable resource.
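For a feel of the formalism's core operation, the following simplified sketch unifies feature structures represented as nested dictionaries; real unification grammars, including the book's formalism, also handle reentrancy (shared substructures) and variables, which this toy version omits.

```python
# Stripped-down feature-structure unification: merge two structures if they
# are compatible, fail otherwise. Illustrative sketch only.

def unify(fs1, fs2):
    """Unify two feature structures (nested dicts / atomic values).
    Returns the merged structure, or None on clash."""
    if fs1 == fs2:
        return fs1
    if isinstance(fs1, dict) and isinstance(fs2, dict):
        result = dict(fs1)
        for feature, value in fs2.items():
            if feature in result:
                merged = unify(result[feature], value)
                if merged is None:          # incompatible values: failure
                    return None
                result[feature] = merged
            else:
                result[feature] = value
        return result
    return None                             # atomic clash, e.g. 'sg' vs 'pl'

# Compatible structures combine their information, as in agreement:
print(unify({"agr": {"num": "sg"}}, {"agr": {"per": "3"}}))
# -> {'agr': {'num': 'sg', 'per': '3'}}

# A number clash makes unification fail:
print(unify({"agr": {"num": "sg"}}, {"agr": {"num": "pl"}}))  # -> None
```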
This book analyses diverse public discourses to investigate how wealth inequality has been portrayed in the British media from the time of the Second World War to the present day. Using a variety of corpus-assisted methods of discourse analysis, chapters present an historicized perspective on how the mass media have helped to make sharply increased wealth inequality seem perfectly normal. Print, radio and online media sources are interrogated using methodologies grounded in critical discourse analysis, critical stylistics and corpus linguistics in order to examine the influence of the media on the British electorate, who have passively consented to the emergence of an even less egalitarian Britain. Covering topics such as Second World War propaganda, the 'Change4Life' anti-obesity campaign and newspaper, parliamentary and TV news programme attitudes to poverty and austerity, this book will be of value to all those interested in the mass media's contribution to the entrenched inequality in modern Britain.
When something is in focus, light falls on it from different angles. The lexicon can be viewed from different sides. Six views are represented in this volume: a cognitivist view of vagueness and lexicalization, a psycholinguistic view of lexical
Based on years of instruction and field expertise, this volume offers the tools needed to understand the scientific, computational, and technological aspects of speech processing. The book emphasizes mathematical abstraction, the dynamics of the speech process, and the engineering optimization practices that promote effective problem solving in this area of research, and it draws on many years of the authors' own research on speech processing. Speech Processing builds the analytical skills needed to meet future scientific and technological challenges in the field and considers the complex transition from human speech processing to computer speech processing.
This book presents a method of linking the ordered structure of the cosmos with human thoughts: the theory of language holography. In the view presented here, the cosmos is in harmony with the human body and language, and human thoughts are holographic with the cosmos at the level of language. In a word, the holographic relation is the bridge by means of which Guanlian Qian connects the cosmos, humans, and language, a vitally important contribution to linguistic and philosophical studies that cannot be ignored. The book has two main focus areas: outer language holography and inner language holography. These two areas constitute the core of the dynamic and holistic view put forward in the theory of language holography. The book's main properties can be summarized as follows. First and foremost, it was created in toto by a Chinese scholar devoted to pragmatics, theoretical linguistics, and philosophy of language. Secondly, it was accepted by a top Chinese publisher and reprinted the following year, reflecting its value and appeal. Thirdly, in terms of writing style, the book is characterized by succinctness and logic; as a result, it reads fluidly and smoothly, without redundancies, which is not common in linguistic or even philosophical works. Lastly, as stated by the author in the introduction, "Creation is the development of previous capacities, but it is also the generation of new ones"; this book can be said to put that concept into practice. Overall, the book offers a unique resource to readers around the world who want to know more about the truly original and innovative studies of language in Chinese academia.
The idea that the expression of radical beliefs is a predictor of future acts of political violence has been a central tenet of counter-extremism over the last two decades. Not only has this imposed a duty upon doctors, lecturers and teachers to inform on the radical beliefs of their patients and students but, as this book argues, it is also a fundamentally flawed concept. Informed by his own experience with the UK's Prevent programme while teaching in a Muslim community, Rob Faure Walker explores the linguistic emergence of 'extremism' in political discourse and the potentially damaging generative effect of this language. Taking a new approach which combines critical discourse analysis with critical realism, this book shows how the fear of being labelled an 'extremist' has resulted in counter-terrorism strategies which actually undermine moderating mechanisms in a democracy. Analysing the generative mechanisms by which the language of counter-extremism might actually promote violence, Faure Walker explains how understanding the potentially oppressive properties of language can help us transcend them. The result is an immanent critique of the most pernicious aspects of the global War on Terror, those that are embedded in our everyday language and political discourse. Drawing on the author's own successful lobbying against counter-extremism, this book presents a model for how discourse analysis and critical realism can and should engage with the political in order to effect meaningful change.
The Language of Design: Theory and Computation articulates the theory that there is a language of design. This theory claims that any language of design consists of a set of symbols, a set of relations between the symbols, features that key the expressiveness of symbols, and a set of reality-producing information-processing behaviors acting on the language. Drawing upon insights from computational language processing, the language of design is modeled computationally through latent semantic analysis (LSA), lexical chain analysis (LCA), and sentiment analysis (SA). The statistical co-occurrence of semantics (LSA), semantic relations (LCA), and semantic modifiers (SA) in design text is used to illustrate how the reality-producing effect of language is itself an enactment of design. This insight leads to a new understanding of the connections between creative behaviors such as design and their linguistic properties. The computation of the language of design makes it possible to make direct measurements of creative behaviors which are distributed across social spaces and mediated through language. The book demonstrates how machine understanding of design texts, based on computation over the language of design, yields practical applications for design management, such as modeling teamwork, characterizing the formation of a design concept, and understanding design rationale. The Language of Design: Theory and Computation is a unique text for postgraduates and researchers studying design theory and management, and allied disciplines such as artificial intelligence, organizational behavior, and human factors and ergonomics.
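As a hedged illustration of one technique named above, latent semantic analysis, the sketch below (using scikit-learn, with invented snippets of design-team text) factors word co-occurrence statistics into a low-dimensional space in which documents about the same design concept land close together; it illustrates LSA in general, not the book's specific models of design language.

```python
# Latent semantic analysis over toy "design texts": TF-IDF vectors are
# reduced by truncated SVD, then compared by cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

design_notes = [                    # invented snippets of design-team text
    "the handle concept favours a curved grip",
    "a curved grip improves the handle ergonomics",
    "the display layout uses a grid of icons",
    "icons on a grid simplify the display",
]

vectors = TfidfVectorizer().fit_transform(design_notes)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(vectors)

# Similarity of the first note to the other three: notes about the same
# design concept (the handle) score higher than notes about the display.
print(cosine_similarity(lsa[:1], lsa[1:]))
```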
What is a language? What do scientific grammars tell us about the structure of individual languages and human language in general? What kind of science is linguistics? These and other questions are the subject of Ryan M. Nefdt's Language, Science, and Structure. Linguistics presents a unique and challenging subject matter for the philosophy of science. As a special science, its formalisation and naturalisation inspired what many consider to be a scientific revolution in the study of mind and language. Yet radical internal theory change, multiple competing frameworks, and issues of modelling and realism have largely gone unaddressed in the field. Nefdt develops a structural realist perspective on the philosophy of linguistics which aims to confront the aforementioned topics in new ways while expanding the outlook toward new scientific connections and novel philosophical insights. On this view, languages are real patterns which emerge from complex biological systems. Nefdt's exploration of this novel view will be especially valuable to those working in formal and computational linguistics, cognitive science, and the philosophies of science, mathematics, and language.
This book sheds new light on corpus-assisted translation pedagogy, an intersection of three distinct but cognate disciplines: corpus linguistics, translation and pedagogy. Taking an innovative and empirical approach to translation teaching, the study uses mixed methods, including translation experiments, surveys and in-depth focus groups. The results demonstrate the unique advantages of using corpora for translation teaching while also calling attention to possible pitfalls. The book enriches our understanding of corpus application in the setting of translation between Chinese and English, two markedly different languages. Readers will also discover new horizons in this burgeoning and interdisciplinary field of research. The book appeals to a broad readership: scholars and researchers interested in translation technology and in widening the scope of translation studies, translation trainers in search of effective teaching approaches, and a growing number of cross-disciplinary postgraduate students keen to improve their translation skills and competence.
Computers offer new perspectives in the study of language, allowing us to see phenomena that previously remained obscure because of the limitations of our vantage points. It is not uncommon for computers to be likened to the telescope, or microscope, in this respect. In this pioneering computer-assisted study of translation, Dorothy Kenny suggests another image, that of the kaleidoscope: playful changes of perspective using corpus-processing software allow textual patterns to come into focus and then recede again as others take their place. And against the background of repeated patterns in a corpus, creative uses of language gain a particular prominence. In Lexis and Creativity in Translation, Kenny monitors the translation of creative source-text word forms and collocations uncovered in a specially constructed German-English parallel corpus of literary texts. Using an abundance of examples, she reveals evidence of both normalization and ingenious creativity in translation. Her discussion of lexical creativity draws on insights from traditional morphology, structural semantics and, most notably, neo-Firthian corpus linguistics, suggesting that rumours of the demise of linguistics in translation studies are greatly exaggerated. Lexis and Creativity in Translation is essential reading for anyone interested in corpus linguistics and its impact so far on translation studies. The book also offers theoretical and practical guidance for researchers who wish to conduct their own corpus-based investigations of translation. No previous knowledge of German, corpus linguistics or computing is assumed.
This book covers theoretical work, applications, approaches, and techniques for computational models of information and its presentation by language (artificial, human, or otherwise natural). Computational and technological developments that incorporate natural language are proliferating, and adequate coverage must confront difficult problems of ambiguity and of dependency on context and agents (human or computational). The goal is to promote computational systems for intelligent natural language processing and related models of computation, language, thought, mental states, reasoning, and other cognitive processes.
This book deals with two fundamental issues in the semiotics of the image. The first is the relationship between image and observer: how does one look at an image? To answer this question, the book sets out to transpose the theory of enunciation formulated in linguistics to the visual field. It also aims to clarify the gains made in contemporary visual semiotics relative to the semiology of Roland Barthes and Émile Benveniste. The second issue is the relation between the forces, forms and materiality of images. How do different physical mediums (pictorial, photographic and digital) influence visual forms? How does materiality affect the generativity of forms? On the forces within images, the book draws on the philosophical thought of Gilles Deleuze and René Thom as well as Aby Warburg's Atlas Mnemosyne experiment. The theories discussed are tested on a variety of corpora, including both paintings and photographs, drawn from traditional as well as contemporary sources across a variety of social sectors (the arts and the sciences). Finally, semiotic methodology is contrasted with the computational analysis of large collections of images (Big Data), such as the "Media Visualization" analyses proposed by Lev Manovich and Cultural Analytics in computer science, in order to assess the impact of automatic analysis of visual forms on Digital Art History and, more generally, on the image sciences.
This handbook is a comprehensive practical resource on corpus linguistics. It features a range of basic and advanced approaches, methods and techniques, from corpus compilation principles to quantitative data analyses. The Handbook is organized into six parts. Parts I to III discuss key issues and the know-how related to corpus design, methods and corpus types. Parts IV and V offer a user-friendly introduction to the quantitative analysis of corpus data: for each statistical technique discussed, chapters provide a practical guide with R and come with supplementary online material. Part VI focuses on how to write a corpus-linguistic paper and how to meta-analyze corpus-linguistic research. The volume can serve as a course book as well as for individual study, and it will be essential reading for students of corpus linguistics as well as experienced researchers who want to expand their knowledge of the field.
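To give a flavour of the statistics such chapters walk through (the handbook's own guides use R; Python is used here purely for illustration), below is a toy implementation of a standard corpus measure, log-likelihood keyness, which asks whether a word is over-represented in a study corpus relative to a reference corpus; the counts are hypothetical.

```python
# Dunning-style log-likelihood (G2) keyness for one word across two corpora.
# Toy example with invented counts.
import math

def log_likelihood(freq_study, size_study, freq_ref, size_ref):
    """G2 statistic: observed vs. expected frequencies in two corpora."""
    total = freq_study + freq_ref
    expected_study = size_study * total / (size_study + size_ref)
    expected_ref = size_ref * total / (size_study + size_ref)
    g2 = 0.0
    for observed, expected in ((freq_study, expected_study),
                               (freq_ref, expected_ref)):
        if observed > 0:
            g2 += observed * math.log(observed / expected)
    return 2 * g2

# Hypothetical counts: a word occurs 120 times in a 50,000-token study
# corpus versus 40 times in a 100,000-token reference corpus.
# G2 values above 3.84 indicate significance at p < 0.05 (chi-square, 1 df).
print(round(log_likelihood(120, 50_000, 40, 100_000), 2))
```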