Solving linguistic problems often reduces to carrying out tasks that are computationally complex and therefore require automation. In such situations, the difference between having and not having computational tools to handle the tasks is not merely a matter of saving time and effort, but may amount to the difference between finding a solution and not finding one at all. The book is an introduction to machine-aided linguistic discovery, a novel research area, arguing for the fruitfulness of the computational approach by presenting a basic conceptual apparatus and several intelligent discovery programmes. One of the systems models the fundamental Saussurean notion of system, so that, almost a century after the introduction of this concept and of structuralism in general, linguists are for the first time able to handle this recurring, computationally complex task adequately. Another system models the problem of searching for Greenbergian language universals and is capable of stating its discoveries in an intelligible form, viz. a comprehensive English-language text, thus constituting the first computer program to generate a whole scientific article. Yet another system detects potential inconsistencies in genetic language classifications. The programmes are applied, with noteworthy results, to substantial problems from diverse linguistic disciplines such as structural semantics, phonology, typology and historical linguistics.
"The Yearbook of Corpus Linguistics and Pragmatics" addresses the interface between the two disciplines and offers a platform to scholars who combine both methodologies to present rigorous and interdisciplinary findings about language in real use. Corpus linguistics and Pragmatics have traditionally represented two paths of scientific thought, parallel but often mutually exclusive and excluding. Corpus Linguistics can offer a meticulous methodology based on mathematics and statistics, while Pragmatics is characterized by its effort in the interpretation of intended meaning in real language. This series will give readers insight into how pragmatics can be used to explain real corpus data and also, how corpora can illustrate pragmatic intuitions. The present volume, "Yearbook of Corpus Linguistics and Pragmatics 2014: New Empirical and Theoretical Paradigms in Corpus Pragmatics, " proposes innovative research models in the liaison between pragmatics and corpus linguistics to explain language in current cultural and social contexts.
Current language technology is dominated by approaches that either enumerate a large set of rules or rely on large amounts of manually labelled data. The creation of both is time-consuming and expensive, which is commonly thought to be the reason why automated natural language understanding has still not made its way into "real-life" applications. This book sets an ambitious goal: to shift the development of language processing systems to a much more automated setting than in previous work. A new approach is defined: what if computers analysed large samples of language data on their own, identifying structural regularities that perform the necessary abstractions and generalisations in order to better understand language in the process? The target audience is academics at all levels (undergraduate and graduate students, lecturers and professors) working in the fields of natural language processing and computational linguistics, as well as natural language engineers who are seeking to improve their systems.
Grammars of natural languages can be expressed as mathematical objects, similar to computer programs. Such a formal presentation of grammars facilitates mathematical reasoning with grammars (and the languages they denote), as well as computational implementation of grammar processors. This book presents one of the most commonly used grammatical formalisms, Unification Grammars, which underlies contemporary linguistic theories such as Lexical-Functional Grammar (LFG) and Head-driven Phrase Structure Grammar (HPSG). The book provides a robust and rigorous exposition of the formalism that is both mathematically well-founded and linguistically motivated. While the material is presented formally, and much of the text is mathematically oriented, a core chapter of the book addresses linguistic applications and the implementation of several linguistic insights in unification grammars. Dozens of examples and numerous exercises (many with solutions) illustrate key points. Graduate students and researchers in both computer science and linguistics will find this book a valuable resource.
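The core operation of the formalism described above is the unification of feature structures. As a rough illustration only (a sketch under assumed simplifications, not the book's own presentation), the following Python fragment unifies feature structures represented as nested dictionaries; it ignores reentrancy and typed feature structures, which the formalism proper treats rigorously.

```python
# Minimal sketch of feature-structure unification (illustrative assumption,
# not code from the book): structures are plain nested dicts, atomic values
# must match exactly, and nested structures are unified recursively.
# Reentrancy (structure sharing) is deliberately not handled here.

def unify(fs1, fs2):
    """Return the unification of two feature structures, or None on failure."""
    if isinstance(fs1, dict) and isinstance(fs2, dict):
        result = dict(fs1)
        for feature, value in fs2.items():
            if feature in result:
                unified = unify(result[feature], value)
                if unified is None:
                    return None          # conflicting values: unification fails
                result[feature] = unified
            else:
                result[feature] = value  # feature present only in fs2
        return result
    return fs1 if fs1 == fs2 else None   # atomic values must be identical

# Example: subject-verb agreement.
verb = {"cat": "V", "agr": {"num": "sg", "per": 3}}
subj = {"agr": {"num": "sg"}}
print(unify(verb, subj))                     # {'cat': 'V', 'agr': {'num': 'sg', 'per': 3}}
print(unify(verb, {"agr": {"num": "pl"}}))   # None: number clash
```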
This book analyses diverse public discourses to investigate how wealth inequality has been portrayed in the British media from the time of the Second World War to the present day. Using a variety of corpus-assisted methods of discourse analysis, chapters present an historicized perspective on how the mass media have helped to make sharply increased wealth inequality seem perfectly normal. Print, radio and online media sources are interrogated using methodologies grounded in critical discourse analysis, critical stylistics and corpus linguistics in order to examine the influence of the media on the British electorate, who have passively consented to the emergence of an even less egalitarian Britain. Covering topics such as Second World War propaganda, the 'Change4Life' anti-obesity campaign and newspaper, parliamentary and TV news programme attitudes to poverty and austerity, this book will be of value to all those interested in the mass media's contribution to the entrenched inequality in modern Britain.
Based on years of instruction and field expertise, this volume offers the necessary tools to understand all scientific, computational, and technological aspects of speech processing. The book emphasizes mathematical abstraction, the dynamics of the speech process, and the engineering optimization practices that promote effective problem solving in this area of research, and covers many years of the authors' personal research on speech processing. Speech Processing builds valuable analytical skills for meeting future challenges in scientific and technological advances in the field and considers the complex transition from human speech processing to computer speech processing.
Rapid advances in computing have enabled the integration of corpora into language teaching and learning, yet in China corpus methods have not yet been widely adopted. Corpus Linguistics in Chinese Contexts aims to advance the state of the art in the use of corpora in applied linguistics and contribute to the expertise in corpus use in China.
The Language of Design: Theory and Computation articulates the theory that there is a language of design. This theory claims that any language of design consists of a set of symbols, a set of relations between the symbols, features that key the expressiveness of the symbols, and a set of reality-producing information-processing behaviors acting on the language. Drawing upon insights from computational language processing, the language of design is modeled computationally through latent semantic analysis (LSA), lexical chain analysis (LCA), and sentiment analysis (SA). The statistical co-occurrence of semantics (LSA), semantic relations (LCA), and semantic modifiers (SA) in design texts is used to illustrate how the reality-producing effect of language is itself an enactment of design. This insight leads to a new understanding of the connections between creative behaviors such as design and their linguistic properties. The computation of the language of design makes it possible to take direct measurements of creative behaviors which are distributed across social spaces and mediated through language. The book demonstrates how machine understanding of design texts based on computation over the language of design yields practical applications for design management, such as modeling teamwork, characterizing the formation of a design concept, and understanding design rationale. The Language of Design: Theory and Computation is a unique text for postgraduates and researchers studying design theory and management, and allied disciplines such as artificial intelligence, organizational behavior, and human factors and ergonomics.
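By way of illustration only (a hedged sketch, not the book's implementation), latent semantic analysis of the kind mentioned above can be approximated in Python with scikit-learn: TF-IDF weighting followed by a truncated SVD projects documents into a low-dimensional latent space in which co-occurrence patterns, rather than exact word overlap, drive similarity. The toy design texts below are invented for the example.

```python
# Illustrative LSA sketch (assumed toy data, not taken from the book):
# TF-IDF term weighting + truncated SVD over a few short "design" texts.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

design_texts = [  # hypothetical stand-ins for design documents
    "the team sketched several concepts for the chair frame",
    "early concept sketches explored alternative frame materials",
    "user testing revealed problems with the seat cushion comfort",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(design_texts)        # documents x terms matrix

lsa = TruncatedSVD(n_components=2, random_state=0)
doc_vectors = lsa.fit_transform(X)           # documents in the latent space

print(cosine_similarity(doc_vectors))        # pairwise semantic similarity
```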
Computers offer new perspectives in the study of language, allowing us to see phenomena that previously remained obscure because of the limitations of our vantage points. It is not uncommon for computers to be likened to the telescope, or microscope, in this respect. In this pioneering computer-assisted study of translation, Dorothy Kenny suggests another image, that of the kaleidoscope: playful changes of perspective using corpus-processing software allow textual patterns to come into focus and then recede again as others take their place. And against the background of repeated patterns in a corpus, creative uses of language gain a particular prominence. In Lexis and Creativity in Translation, Kenny monitors the translation of creative source-text word forms and collocations uncovered in a specially constructed German-English parallel corpus of literary texts. Using an abundance of examples, she reveals evidence of both normalization and ingenious creativity in translation. Her discussion of lexical creativity draws on insights from traditional morphology, structural semantics and, most notably, neo-Firthian corpus linguistics, suggesting that rumours of the demise of linguistics in translation studies are greatly exaggerated. Lexis and Creativity in Translation is essential reading for anyone interested in corpus linguistics and its impact so far on translation studies. The book also offers theoretical and practical guidance for researchers who wish to conduct their own corpus-based investigations of translation. No previous knowledge of German, corpus linguistics or computing is assumed.
This handbook is the first to explore the growing field of experimental semantics and pragmatics. In the past 20 years, experimental data has become a major source of evidence for building theories of language meaning and use, encompassing a wide range of topics and methods. Following an introduction from the editors, the chapters in this volume offer an up-to-date account of research in the field spanning 31 different topics, including scalar implicatures, presuppositions, counterfactuals, quantification, metaphor, prosody, and politeness, as well as exploring how and why a particular experimental method is suitable for addressing a given theoretical debate. The volume's forward-looking approach also seeks to actively identify questions and methods that could be fruitfully combined in future experimental research. Written in a clear and accessible style, this handbook will appeal to students and scholars from advanced undergraduate level upwards in a range of fields, including semantics and pragmatics, philosophy of language, psycholinguistics, computational linguistics, cognitive science, and neuroscience.
This volume unpacks an intriguing challenge for the field of media research: combining media research with the study of complex networks. Bringing together research on the small-world idea and digital culture, it questions the assumption that we are separated from any other person on the planet by just a few steps, and that this distance decreases within digital social networks. The book argues that the role of languages is decisive for understanding how people connect, and it looks at the consequences this has for the ways knowledge spreads digitally. This volume offers a first conceptual venue for analysing emerging phenomena at the innovative intersection of media and complex network research.
This book serves as a starting point for Semantic Web (SW) students and researchers interested in discovering what Natural Language Processing (NLP) has to offer. NLP can effectively help uncover the large portions of data held as unstructured text in natural language, thus augmenting the real content of the Semantic Web in a significant and lasting way. The book covers the basics of NLP, with a focus on Natural Language Understanding (NLU), referring to semantic processing, information extraction and knowledge acquisition, which are seen as the key links between the SW and NLP communities. Major emphasis is placed on mining sentences in search of entities and relations. In the course of this "quest", challenges will be encountered for various text analysis tasks, including part-of-speech tagging, parsing, semantic disambiguation, named entity recognition and relation extraction. Standard algorithms associated with these tasks are presented to provide an understanding of the fundamental concepts. Furthermore, the importance of experimental design and result analysis is emphasized, and accordingly, most chapters include small experiments on corpus data with quantitative and qualitative analysis of the results. This book is divided into four parts. Part I "Searching for Entities in Text" is dedicated to the search for entities in textual data. Next, Part II "Working with Corpora" investigates corpora as valuable resources for NLP work. In turn, Part III "Semantic Grounding and Relatedness" focuses on the process of linking surface forms found in text to entities in resources. Finally, Part IV "Knowledge Acquisition" delves into the world of relations and relation extraction. The book also includes three appendices: "A Look into the Semantic Web" gives a brief overview of the Semantic Web and is intended to bring readers less familiar with the Semantic Web up to speed, so that they too can fully benefit from the material of this book. "NLP Tools and Platforms" provides information about NLP platforms and tools, while "Relation Lists" gathers lists of relations under different categories, showing how relations can be varied and serve different purposes. And finally, the book includes a glossary of over 200 terms commonly used in NLP. The book offers a valuable resource for graduate students specializing in SW technologies and professionals looking for new tools to improve the applicability of SW techniques in everyday life - or, in short, everyone looking to learn about NLP in order to expand his or her horizons. It provides a wealth of information for readers new to both fields, helping them understand the underlying principles and the challenges they may encounter.
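As a brief, hedged illustration of two of the text analysis tasks listed above (not an example from the book), the following Python snippet uses spaCy for part-of-speech tagging and named entity recognition; it assumes the small English model has been installed with `python -m spacy download en_core_web_sm`.

```python
# Illustrative sketch of POS tagging and named entity recognition with spaCy
# (an assumed toolchain, not the book's own code or platform choice).

import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Tim Berners-Lee proposed the Semantic Web while working at CERN.")

for token in doc:                  # part-of-speech tagging
    print(token.text, token.pos_)

for ent in doc.ents:               # named entity recognition
    print(ent.text, ent.label_)    # e.g. 'Tim Berners-Lee' PERSON, 'CERN' ORG
```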
Metadata such as the hashtag is an important dimension of social media communication. Despite its central role in practices such as curating, tagging, and searching content, there has been little research into how meanings are made with social metadata. This book considers how hashtags have expanded their reach from an information-locating resource to an interpersonal resource for coordinating social relationships and expressing solidarity, affinity, and affiliation. It adopts a social semiotic perspective to investigate the communicative functions of hashtags in relation to both language and images. The book is a follow-up to Zappavigna's 2012 model of ambient affiliation, providing an extended analytical framework for exploring how affiliation occurs, bond by bond, in online discourse. It focuses in particular on the communing function of hashtags in metacommentary and ridicule, using recent Twitter discourse about US President Donald Trump as a case study. It is essential reading for researchers as well as undergraduates studying social media on any academic course.
"Conversation in Context" examines real-life speech data from the British National Corpus to show how language is used in natural conversation. The monograph describes the composition, annotation and transcription of the corpus, as well as providing a discussion of the methodology used in corpus analysis. The book uses a situational framework for conversation and argues that conversation is adapted to constraints set by the situation and to speaker needs arising from these constraints. Such a contextual view reveals a greater complexity to conversation construction than could have been anticipated without the use of corpus-based methods. This book will be of interest to academics researching corpus linguistics, discourse analysis and sociolinguistics.
**Shortlisted for the 2021 BAAL Book Prize for an outstanding book in the field of Applied Linguistics** Situated at the interface of corpus linguistics and health communication, Corpus, Discourse and Mental Health provides insights into the linguistic practices of members of three online support communities as they describe their experiences of living with and managing different mental health problems, including anorexia nervosa, depression and diabulimia. In examining contemporary health communication data, the book combines quantitative corpus linguistic methods with qualitative discourse analysis that draws upon recent theoretical insights from critical health sociology. Using this mixed-methods approach, the analysis identifies patterns and consistencies in the language used by people experiencing psychological distress and their role in realising varying representations of mental illness, diagnosis and treatment. Far from finding neutral accounts of suffering and treating illness, the corpus analysis illustrates that these interactions are suffused with moral and ideological tensions, as sufferers seek to collectively negotiate responsibility for the onset and treatment of recalcitrant mental health problems. Integrating corpus linguistics, critical discourse analysis and health sociology, this book showcases the capacity of linguistic analysis for understanding mental health discourse, as well as critically exploring the potential of corpus linguistics to offer an evidence-based approach to health communication research.
Language and Politics in Post-Soviet Russia critically examines the uses of language in post-Soviet media and political texts. Drawing on theories from a range of fields, including critical discourse studies, metaphor analysis and media studies, as well as recent developments in corpus linguistics, the book investigates the changing discursive landscape of the political decade between 1998 and 2007. Not yet applied to the linguistic and political situation in post-Soviet Russia, the framework of corpus-assisted discourse analysis offers rich potential for a systematic and critical interrogation of the discursive practices that characterized Russian politics shortly before and during President Vladimir Putin's first two terms in office. The corpus-based and contextually grounded analyses of loanwords and metaphors allow the author to reveal changes and continuities in the subtle interplay between language and politics in post-Soviet Russia.
This textbook gives a systematized and compact summary of the most essential types of modern models for languages and computation, together with their properties and applications. Most of these models properly reflect and formalize current computational methods based on parallelism, distribution and cooperation, which are covered in this book; as a result, it allows the reader to develop, study, and improve these methods very effectively. The textbook also represents the first systematic treatment of modern language models for computation and covers all essential theoretical topics concerning them. From a practical viewpoint, it describes various concepts, methods, algorithms, techniques, and software units based upon these models, and, building on them, it describes several applications in biology, linguistics, and computer science. Advanced-level students studying computer science, mathematics, linguistics and biology will find this textbook a valuable resource. Theoreticians, practitioners and researchers working in today's theory of computation and its applications will also find this book essential as a reference.
This is the third, newly revised and extended edition of this successful book (which has already been translated into three languages). Like the previous editions, it is entirely based on the programming language and environment R and is still thoroughly hands-on (with thousands of lines of heavily annotated code for all computations and plots). However, this edition has been updated based on many workshops/bootcamps taught by the author all over the world over the past few years. It has been didactically streamlined in its exposition; it adds two new chapters - one on mixed-effects modeling, the other on classification and regression trees as well as random forests; and it features new discussions of curvature, orthogonal and other contrasts, interactions, collinearity, the effects and emmeans packages, autocorrelation/runs, more material on programming, writing statistical functions, and simulations, as well as many practical tips based on 10 years of teaching with these materials.
An increasing number of contributions have appeared in recent years on the subject of Audiovisual Translation (AVT), particularly in relation to dubbing and subtitling. The broad scope of this branch of Translation Studies is challenging because it brings together diverse disciplines, including film studies, translatology, semiotics, linguistics, applied linguistics, cognitive psychology, technology and ICT. This volume addresses issues relating to AVT research and didactics. The first section is dedicated to theoretical aspects in order to stimulate further debate and encourage progress in research-informed teaching. The second section focuses on a less developed area of research in the field of AVT: its potential use in foreign language pedagogy. This collection of articles is intended to create a discourse on new directions in AVT and foreign language learning. The book begins with reflections on wider methodological issues, advances to a proposed model of analysis for colloquial speech, touches on more 'niche' aspects of AVT (e.g. surtitling), progresses to didactic applications in foreign language pedagogy and learning at both linguistic and cultural levels, and concludes with a practical proposal for the use of AVT in foreign language classes. An interview with a professional subtitler draws the volume to a close.
Empirical translation studies is a rapidly evolving research area. This volume, written by world-leading researchers, demonstrates the integration of two new research paradigms: socially oriented and data-driven approaches to empirical translation studies. These two models expand current translation studies and stimulate debate around how the development of quantitative research methods, integrated with advances in translation technologies, could significantly increase the research capacities of translation studies. Highly engaging, the volume pioneers the development of socially oriented, innovative research methods to enhance the current research capacities of theoretical (descriptive) translation studies in order to tackle real-life research issues, such as environmental protection and multicultural health promotion. Illustrative case studies bring insight into advanced research methodologies for designing, developing and analysing large-scale digital databases for multilingual and/or translation research.
The idea that the expression of radical beliefs is a predictor of future acts of political violence has been a central tenet of counter-extremism over the last two decades. Not only has this imposed a duty upon doctors, lecturers and teachers to inform on the radical beliefs of their patients and students but, as this book argues, it is also a fundamentally flawed concept. Informed by his own experience with the UK's Prevent programme while teaching in a Muslim community, Rob Faure Walker explores the linguistic emergence of 'extremism' in political discourse and the potentially damaging generative effect of this language. Taking a new approach which combines critical discourse analysis with critical realism, this book shows how the fear of being labelled an 'extremist' has resulted in counter-terrorism strategies which actually undermine moderating mechanisms in a democracy. Analysing the generative mechanisms by which the language of counter-extremism might actually promote violence, Faure Walker explains how understanding the potentially oppressive properties of language can help us transcend them. The result is an immanent critique of the most pernicious aspects of the global War on Terror, those that are embedded in our everyday language and political discourse. Drawing on the author's own successful lobbying activities against counter-extremism, this book presents a model for how discourse analysis and critical realism can and should engage with the political, and how this can effect meaningful change.