Users of natural languages have many word orders with which to encode the same truth-conditional meaning. They choose contextually appropriate strings from these many ways with little conscious effort and with effective communicative results. Previous computational models of when English speakers produce non-canonical word orders, like topicalization, left-dislocation, and clefts, fail either by overgenerating these statistically rare forms or by undergenerating them. The primary goal of this book is to present a better model of when speakers choose to produce certain non-canonical word orders by incorporating the effects of discourse context and speaker goals on syntactic choice. The theoretical model is then used as a basis for building a probabilistic classifier that can select the most human-like word order based on the surrounding discourse context. The model of discourse context used is a methodological advance from both a theoretical and an engineering perspective. It is built up from individual linguistic features, ones more easily and reliably annotated than a full discourse or rhetorical structure for a text. This book makes extensive use of previously unexamined naturally occurring corpus data of non-canonical word order in English, both to illustrate the points of the theoretical model and to train the statistical model.
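A classifier of this general kind can be approximated with standard tools. The following is a minimal sketch, assuming hypothetical binary discourse features and a logistic-regression model; the feature names, labels, and toy training data are illustrative placeholders, not taken from the book.

```python
# Minimal sketch of a probabilistic word-order classifier in the spirit of
# the model described above. Feature names and training data are invented
# placeholders, not drawn from the book's corpus.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Each training instance describes a discourse context with easily
# annotated features; the label is the word order a speaker actually chose.
train_features = [
    {"preposed_is_given": True,  "poset_link_to_context": True,  "subject_is_pronoun": True},
    {"preposed_is_given": False, "poset_link_to_context": False, "subject_is_pronoun": True},
    {"preposed_is_given": True,  "poset_link_to_context": False, "subject_is_pronoun": False},
    {"preposed_is_given": False, "poset_link_to_context": False, "subject_is_pronoun": False},
]
train_labels = ["topicalization", "canonical", "left-dislocation", "canonical"]

vectorizer = DictVectorizer()
X = vectorizer.fit_transform(train_features)
model = LogisticRegression(max_iter=1000)
model.fit(X, train_labels)

# For a new discourse context, rank the candidate word orders by predicted
# probability and select the most human-like one.
context = {"preposed_is_given": True, "poset_link_to_context": True, "subject_is_pronoun": False}
probs = model.predict_proba(vectorizer.transform([context]))[0]
for label, p in sorted(zip(model.classes_, probs), key=lambda t: -t[1]):
    print(f"{label}: {p:.2f}")
```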
When something is in focus, light falls on it from different angles. The lexicon can be viewed from different sides. Six views are represented in this volume: a cognitivist view of vagueness and lexicalization, a psycholinguistic view of lexical ...
Computers offer new perspectives in the study of language, allowing us to see phenomena that previously remained obscure because of the limitations of our vantage points. It is not uncommon for computers to be likened to the telescope, or microscope, in this respect. In this pioneering computer-assisted study of translation, Dorothy Kenny suggests another image, that of the kaleidoscope: playful changes of perspective using corpus-processing software allow textual patterns to come into focus and then recede again as others take their place. And against the background of repeated patterns in a corpus, creative uses of language gain a particular prominence. In Lexis and Creativity in Translation, Kenny monitors the translation of creative source-text word forms and collocations uncovered in a specially constructed German-English parallel corpus of literary texts. Using an abundance of examples, she reveals evidence of both normalization and ingenious creativity in translation. Her discussion of lexical creativity draws on insights from traditional morphology, structural semantics and, most notably, neo-Firthian corpus linguistics, suggesting that rumours of the demise of linguistics in translation studies are greatly exaggerated. Lexis and Creativity in Translation is essential reading for anyone interested in corpus linguistics and its impact so far on translation studies. The book also offers theoretical and practical guidance for researchers who wish to conduct their own corpus-based investigations of translation. No previous knowledge of German, corpus linguistics or computing is assumed.
The techniques of natural language processing (NLP) have been widely applied in machine translation and automated message understanding, but have only recently been utilized in second language teaching. This book offers both an argument for and a critical examination of this new application, examining how systems may be designed to exploit the power of NLP, accommodate its limitations, and minimize its risks. This volume marks the first collection of work in the U.S. and Canada that incorporates advanced human language technologies into language tutoring systems, covering languages as diverse as Arabic, Spanish, Japanese, and English.
"A Journey Through Cultures" addresses one of the hottest topics in contemporary HCI: cultural diversity amongst users. For a number of years the HCI community has been investigating alternatives to enhance the design of cross-cultural systems. Most contributions to date have followed either a 'design for each' or a 'design for all' strategy. "A Journey Through Cultures "takes a very different approach. Proponents of CVM - the Cultural Viewpoint Metaphors perspective - the authors invite HCI practitioners to think of how to expose and communicate the idea of cultural diversity. A detailed case study is included which assesses the metaphors' potential in cross-cultural design and evaluation. The results show that cultural viewpoint metaphors have strong epistemic power, leveraged by a combination of theoretic foundations coming from Anthropology, Semiotics and the authors' own work in HCI and Semiotic Engineering. Luciana Salgado, Carla Leitao and Clarisse de Souza are members of SERG, the Semiotic Engineering Research Group at the Departamento de Informatica of Rio de Janeiro's Pontifical Catholic University (PUC-Rio)."
Contemporary corpus linguists use a wide variety of methods to study discourse patterns. This volume provides a systematic comparison of various methodological approaches in corpus linguistics through a series of parallel empirical studies that use a single corpus dataset to answer the same overarching research question. Ten contributing experts each use a different method to address the same broadly framed research question: In what ways does language use in online Q+A forum responses differ across four world English varieties (India, Philippines, United Kingdom, and United States)? Contributions are based on analysis of the same 400,000-word corpus from online Q+A forums, and contributors employ methodologies including corpus-based discourse analysis, audience perceptions, Multi-Dimensional analysis, pragmatic analysis, and keyword analysis. In their introductory and concluding chapters, the volume editors compare and contrast the findings from each method and assess the degree to which 'triangulating' multiple approaches may provide a more nuanced understanding of a research question, with the aim of identifying a set of complementary approaches which could address one another's analytical blind spots. Baker and Egbert also consider the importance of issues such as researcher subjectivity, type of annotation, the limitations and affordances of different corpus tools, the relative strengths of qualitative and quantitative approaches, and the value of considering data or information beyond the corpus. Rather than attempting to find the 'best' approach, the volume focuses on how different corpus linguistic methodologies may complement one another, and it raises suggestions for further methodological studies which use triangulation to enrich corpus-related research.
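Of the methods listed, keyword analysis lends itself to a compact illustration. Below is a minimal sketch using the log-likelihood keyness statistic routinely used in corpus linguistics; the word frequencies and corpus sizes are invented placeholders, not figures from the Q+A corpus.

```python
# Minimal sketch of keyword (keyness) analysis via the log-likelihood
# statistic commonly used in corpus linguistics. The frequency counts and
# corpus sizes below are invented, not taken from the Q+A forum corpus.
import math

def log_likelihood(freq_a, total_a, freq_b, total_b):
    """Keyness of a word with freq_a hits in corpus A (total_a tokens)
    versus freq_b hits in corpus B (total_b tokens)."""
    expected_a = total_a * (freq_a + freq_b) / (total_a + total_b)
    expected_b = total_b * (freq_a + freq_b) / (total_a + total_b)
    ll = 0.0
    if freq_a > 0:
        ll += freq_a * math.log(freq_a / expected_a)
    if freq_b > 0:
        ll += freq_b * math.log(freq_b / expected_b)
    return 2 * ll

# Toy frequency lists for two varieties, each from a 100,000-token sample.
corpus_a = {"kindly": 40, "please": 120, "hence": 15}
corpus_b = {"kindly": 5,  "please": 110, "hence": 14}
for word in corpus_a:
    ll = log_likelihood(corpus_a[word], 100_000, corpus_b.get(word, 0), 100_000)
    print(f"{word}: LL = {ll:.1f}")  # LL > 6.63 is significant at p < 0.01
```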
This book focuses mainly on logical approaches to computational linguistics, but also discusses integrations with other approaches, presenting both classic and newly emerging theories and applications. Decades of research on theoretical work and practical applications have demonstrated that computational linguistics is a distinctively interdisciplinary area. There is convincing evidence that computational approaches to linguistics can benefit from research on the nature of human language, including from the perspective of its evolution. This book addresses various topics in computational theories of human language, covering grammar, syntax, and semantics. The common thread running through the research presented is the role of computer science, mathematical logic and other subjects of mathematics in computational linguistics and natural language processing (NLP). Promoting intelligent approaches to artificial intelligence (AI) and NLP, the book is intended for researchers and graduate students in the field.
This collection of papers and abstracts stems from the third meeting in the series of Sperlonga workshops on Cognitive Models of Speech Processing. It presents current research on the structure and organization of the mental lexicon, and on the processes that access that lexicon. The volume starts with discussion of issues in acquisition and consideration of questions such as 'What is the relationship between vocabulary growth and the acquisition of syntax?' and 'How does prosodic information, concerning the melodies and rhythms of the language, influence the processes of lexical and syntactic acquisition?'. From acquisition, the papers move on to consider the manner in which contemporary models of spoken word recognition and production can map onto neural models of the recognition and production processes. The issue of exactly what is recognised, and when, is dealt with next: the empirical findings suggest that the function of something to which a word refers is accessed with a different time-course to the form of that something. This has considerable implications for the nature and content of lexical representations. Equally important are the findings from studies of disordered lexical processing, and two papers in this volume address the implications of these disorders for models of lexical representation and process (borrowing from both empirical data and computational modelling). The final paper explores whether neural networks can successfully model certain lexical phenomena that have elsewhere been assumed to require rule-based processes.
Multi-Dimensional Analysis: Research Methods and Current Issues provides a comprehensive guide both to the statistical methods in Multi-Dimensional Analysis (MDA) and its key elements, such as corpus building, tagging, and tools. The major goal is to explain the steps involved in the method so that readers may better understand this complex research framework and conduct MD research on their own. Multi-Dimensional Analysis is a method that allows the researcher to describe different registers (textual varieties defined by their social use) such as academic settings, regional discourse, social media, movies, and pop songs. Through multivariate statistical techniques, MDA identifies complementary correlation groupings of dozens of variables, including variables which belong both to the grammatical and semantic domains. Such groupings are then associated with situational variables of texts like information density, orality, and narrativity to determine linguistic constructs known as dimensions of variation, which provide a scale for the comparison of a large number of texts and registers. This book is a comprehensive research guide to MDA.
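The statistical core of MDA is a factor analysis over normalized per-text counts of lexico-grammatical features. A minimal sketch follows, assuming a precomputed feature-count matrix (rows are texts, columns are features); the feature names and randomly generated counts are placeholders standing in for output from a real tagger.

```python
# Minimal sketch of the statistical step in Multi-Dimensional Analysis:
# factor analysis over per-text counts of linguistic features. In real MD
# studies the counts come from a tagger and are normalized per 1,000 words;
# here random placeholder data stands in for a tagged corpus.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# 200 texts x 6 features; the feature names are illustrative only.
feature_names = ["noun", "past_verb", "pron_1st", "passive", "nominalization", "contraction"]
counts = rng.poisson(lam=[60, 30, 20, 10, 15, 8], size=(200, 6)).astype(float)

# Standardize, then extract co-occurrence groupings ("dimensions").
X = StandardScaler().fit_transform(counts)
fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(X)       # each text's score on each dimension
loadings = fa.components_          # how strongly each feature defines a dimension

for dim, row in enumerate(loadings, start=1):
    top = sorted(zip(feature_names, row), key=lambda t: -abs(t[1]))[:3]
    print(f"Dimension {dim}:", ", ".join(f"{n} ({w:+.2f})" for n, w in top))
print("First text's dimension scores:", scores[0].round(2))
```

In a full MD study the dimensions would then be interpreted against situational variables (information density, orality, narrativity) to place texts and registers on scales of variation.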
The Language of ICT:
* explores the nature of the electronic word and presents the new types of text in which it is found
* examines the impact of the rapid technological change we are living through
* analyses different texts, including email and answerphone messages, webpages, faxes, computer games and articles about IT
* provides detailed guidance on downloading material from the web, gives URLs to visit, and includes a dedicated webpage
* includes a comprehensive glossary of terms.
This book is a description of some of the most recent advances in text classification as part of a concerted effort to achieve computer understanding of human language. In particular, it addresses state-of-the-art developments in the computation of higher-level linguistic features, ranging from etymology to grammar and syntax for the practical task of text classification according to genres, registers and subject domains. Serving as a bridge between computational methods and sophisticated linguistic analysis, this book will be of particular interest to academics and students of computational linguistics as well as professionals in natural language engineering.
Now in its second edition, this volume provides an up-to-date, accessible, yet authoritative introduction to feedback on second language writing for upper undergraduate and postgraduate students, teachers and researchers in TESOL, applied linguistics, composition studies and English for academic purposes (EAP). Chapters written by leading experts emphasise the potential that feedback has for helping to create a supportive teaching environment, for conveying and modelling ideas about good writing, for developing the ways students talk about writing, and for mediating the relationship between students' wider cultural and social worlds and their growing familiarity with new literacy practices. In addition to updated chapters from the first edition, this edition includes new chapters focusing on developing areas of feedback research, including student engagement and participation with feedback, the links between SLA and feedback research, automated computer feedback, and students' use of internet resources and social media as feedback resources.
This study analyzes passive sentences in English and Portuguese which result from a post-semantic transformation applied when a noun which does not play the semantic role of actor is chosen as syntactic subject. Choice between a passive and its non-passive or active counterpart reflects differences in the distribution of information in the sentence as regards the relative importance of the latter's constituents for communication. Such distribution is analyzed in terms of Prague school theory, especially that involving the notions of communicative dynamism and the distribution of theme and rheme. The book concludes with a contrastive analysis of English and Portuguese passive sentence patterns which serves as the basis for observations on the teaching of Portuguese passives to native speakers of English.
Stress and accent are central, organizing features of grammar, but their precise nature continues to be a source of mystery and wonder. These issues come to the forefront in acquisition, where the tension between the abstract mental representations and the concrete physical manifestations of stress and accent is deeply reflected. Understanding the nature of the representations of stress and accent patterns, and understanding how stress and accent patterns are learned, informs all aspects of linguistic theory and language acquisition. These two themes - representation and acquisition - form the organizational backbone of this book. Each is addressed along different dimensions of stress and accent, including the position of an accent or stress within various prosodic domains and the acoustic dimensions along which the pronunciation of stress and accent may vary. The research presented in the book is multidisciplinary, encompassing theoretical linguistics, speech science, and computational and experimental research.
The research described in this book shows that conversation analysis can effectively model dialogue. Specifically, this work shows that the multidisciplinary field of communicative ICALL may greatly benefit from including Conversation Analysis. As a consequence, this research makes several contributions to the related research disciplines, such as conversation analysis, second-language acquisition, computer-mediated communication, artificial intelligence, and dialogue systems. The book will be of value for researchers and engineers in the areas of computational linguistics, intelligent assistants, and conversational interfaces.
Semantic fields are lexically coherent - the words they contain co-occur in texts. In this book the authors introduce and define semantic domains, a computational model for lexical semantics inspired by the theory of semantic fields. Semantic domains allow us to exploit domain features for texts, terms and concepts, and they can significantly boost the performance of natural-language processing systems. Semantic domains can be derived from existing lexical resources or can be acquired from corpora in an unsupervised manner. They also have the property of interlinguality, and they can be used to relate terms in different languages in multilingual application scenarios. The authors give a comprehensive explanation of the computational model, with detailed chapters on semantic domains, domain models, and applications of the technique in text categorization, word sense disambiguation, and cross-language text categorization. This book is suitable for researchers and graduate students in computational linguistics.
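The core idea admits a compact illustration: represent both domains and texts as vectors over a shared vocabulary and score a text's relevance to each domain by cosine similarity. In the minimal sketch below, the tiny word lists standing in for domain models are invented placeholders; real domain models would be derived from a lexical resource or induced from corpora, as the book describes.

```python
# Minimal sketch of domain-based text categorization: a domain model maps
# words to weighted domains, and a text is scored against each domain
# vector by cosine similarity. The tiny word lists are invented
# placeholders for real, corpus- or resource-derived domain models.
import math
from collections import Counter

domain_models = {
    "medicine": Counter({"patient": 1.0, "doctor": 1.0, "dose": 0.8, "clinic": 0.8}),
    "sport":    Counter({"match": 1.0, "player": 1.0, "goal": 0.9, "coach": 0.7}),
}

def cosine(u, v):
    shared = set(u) & set(v)
    dot = sum(u[w] * v[w] for w in shared)
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def categorize(text):
    words = Counter(text.lower().split())
    return max(domain_models, key=lambda d: cosine(words, domain_models[d]))

print(categorize("the doctor adjusted the dose for the patient"))  # medicine
print(categorize("the coach praised the player after the match"))  # sport
```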
Rapid advances in computing have enabled the integration of corpora into language teaching and learning, yet in China corpus methods have not yet been widely adopted. Corpus Linguistics in Chinese Contexts aims to advance the state of the art in the use of corpora in applied linguistics and contribute to the expertise in corpus use in China.
"The Yearbook of Corpus Linguistics and Pragmatics" addresses the interface between the two disciplines and offers a platform to scholars who combine both methodologies to present rigorous and interdisciplinary findings about language in real use. Corpus linguistics and Pragmatics have traditionally represented two paths of scientific thought, parallel but often mutually exclusive and excluding. Corpus Linguistics can offer a meticulous methodology based on mathematics and statistics, while Pragmatics is characterized by its effort in the interpretation of intended meaning in real language. This series will give readers insight into how pragmatics can be used to explain real corpus data and also, how corpora can illustrate pragmatic intuitions. The present volume, "Yearbook of Corpus Linguistics and Pragmatics 2014: New Empirical and Theoretical Paradigms in Corpus Pragmatics, " proposes innovative research models in the liaison between pragmatics and corpus linguistics to explain language in current cultural and social contexts.
Current language technology is dominated by approaches that either enumerate a large set of rules or rely on large amounts of manually labelled data. The creation of both is time-consuming and expensive, which is commonly thought to be the reason why automated natural language understanding has not yet made its way into "real-life" applications. This book sets an ambitious goal: to shift the development of language processing systems to a much more automated setting than in previous work. A new approach is defined: what if computers analysed large samples of language data on their own, identifying structural regularities that perform the abstractions and generalisations needed to better understand language? The target audience comprises academics at all levels (undergraduate and graduate students, lecturers and professors) working in the fields of natural language processing and computational linguistics, as well as natural language engineers who are seeking to improve their systems.
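The flavour of approach the book argues for can be illustrated with a classic unsupervised technique: clustering words by their distributional contexts, with no rules or labels supplied. The sketch below uses a toy corpus and an arbitrary cluster count, both placeholders rather than the book's own method.

```python
# Minimal sketch of unsupervised structure discovery: words are represented
# by the contexts they occur in, and clustering over those context vectors
# recovers rough word classes without labelled data or hand-written rules.
# The toy corpus stands in for the large samples the book has in mind.
from collections import defaultdict
import numpy as np
from sklearn.cluster import KMeans

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "a cat ate the fish . a dog ate the bone .").split()

# Build word-by-context co-occurrence vectors (context = neighbouring word).
vocab = sorted(set(corpus))
index = {w: i for i, w in enumerate(vocab)}
vectors = np.zeros((len(vocab), len(vocab)))
for i, word in enumerate(corpus):
    for j in (i - 1, i + 1):
        if 0 <= j < len(corpus):
            vectors[index[word], index[corpus[j]]] += 1

# Cluster words by distributional similarity; rough classes such as nouns,
# verbs, and determiners tend to fall together.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(vectors)
clusters = defaultdict(list)
for word, lab in zip(vocab, labels):
    clusters[lab].append(word)
for lab, words in sorted(clusters.items()):
    print(lab, words)
```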
This book analyses diverse public discourses to investigate how wealth inequality has been portrayed in the British media from the time of the Second World War to the present day. Using a variety of corpus-assisted methods of discourse analysis, chapters present an historicized perspective on how the mass media have helped to make sharply increased wealth inequality seem perfectly normal. Print, radio and online media sources are interrogated using methodologies grounded in critical discourse analysis, critical stylistics and corpus linguistics in order to examine the influence of the media on the British electorate, who have passively consented to the emergence of an even less egalitarian Britain. Covering topics such as Second World War propaganda, the 'Change4Life' anti-obesity campaign and newspaper, parliamentary and TV news programme attitudes to poverty and austerity, this book will be of value to all those interested in the mass media's contribution to the entrenched inequality in modern Britain.
The relation between ontologies and language is currently at the forefront of natural language processing (NLP). Ontologies, as widely used models in semantic technologies, have much in common with the lexicon. A lexicon organizes words as a conventional inventory of concepts, while an ontology formalizes concepts and their logical relations. A shared lexicon is the prerequisite for knowledge-sharing through language, and a shared ontology is the prerequisite for knowledge-sharing through information technology. In building models of language, computational linguists must be able to accurately map the relations between words and the concepts that they can be linked to. This book focuses on the technology involved in enabling integration between lexical resources and semantic technologies. It will be of interest to researchers and graduate students in NLP, computational linguistics, and knowledge engineering, as well as in semantics, psycholinguistics, lexicology and morphology/syntax.
Based on years of instruction and field expertise, this volume offers the necessary tools to understand all scientific, computational, and technological aspects of speech processing. The book emphasizes mathematical abstraction, the dynamics of the speech process, and the engineering optimization practices that promote effective problem solving in this area of research, and it draws on many years of the authors' personal research on speech processing. Speech Processing helps readers build the analytical skills needed to meet future scientific and technological challenges in the field, and considers the complex transition from human speech processing to computer speech processing.
The Language of Design: Theory and Computation articulates the theory that there is a language of design. This theory claims that any language of design consists of a set of symbols, a set of relations between the symbols, features that key the expressiveness of symbols, and a set of reality-producing information-processing behaviors acting on the language. Drawing upon insights from computational language processing, the language of design is modeled computationally through latent semantic analysis (LSA), lexical chain analysis (LCA), and sentiment analysis (SA). The statistical co-occurrence of semantics (LSA), semantic relations (LCA), and semantic modifiers (SA) in design text is used to illustrate how the reality-producing effect of language is itself an enactment of design. This insight leads to a new understanding of the connections between creative behaviors such as design and their linguistic properties. The computation of the language of design makes it possible to take direct measurements of creative behaviors which are distributed across social spaces and mediated through language. The book demonstrates how machine understanding of design texts based on computation over the language of design yields practical applications for design management, such as modeling teamwork, characterizing the formation of a design concept, and understanding design rationale. The Language of Design: Theory and Computation is a unique text for postgraduates and researchers studying design theory and management, and allied disciplines such as artificial intelligence, organizational behavior, and human factors and ergonomics.
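Of the three techniques named, latent semantic analysis is the easiest to sketch: a truncated SVD of a term-document matrix places texts in a low-dimensional semantic space where similarity can be measured. The "design rationale" snippets below are invented examples, not data from the book.

```python
# Minimal sketch of latent semantic analysis (LSA), one of the three
# techniques named above: truncated SVD over a tf-idf term-document matrix
# projects design texts into a low-dimensional semantic space. The example
# design-rationale snippets are invented, not taken from the book.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "the handle should be ergonomic and comfortable to grip",
    "a comfortable grip requires an ergonomic handle shape",
    "the budget constrains the choice of materials",
]

tfidf = TfidfVectorizer().fit_transform(documents)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# Documents about the same design concern end up close in LSA space.
print(cosine_similarity(lsa[:1], lsa[1:]))  # doc 0 vs docs 1 and 2
```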