This volume explores intercultural communication in specialist fields and its realisations in language for specific purposes. Special attention is given to legal, commercial, political and institutional discourse used in particular workplaces, analysed from an intercultural perspective. The contributions explore to what extent intercultural pressure leads to particular discourse patternings and lexico-grammatical/phonological realisations, and also the extent to which textual re-encoding and recontextualisation alter the pragmatic value of the texts taken into consideration.
This volume reflects the results of a workshop on the investigation of specialized discourse in a diachronic perspective, held within the 15th European Symposium on Language for Special Purposes ('New Trends in Specialized Discourse', Bergamo 2005). The articles deal with developments from the late medieval period to the present day, and the book encompasses studies in which the long-established tradition of domain-specific English is highlighted. The fields of contributions range from scientific to legal to political and business discourse. Special attention is given to argumentation, in an attempt to assess the time-depth of typical rhetorical strategies. Some methodological innovations are introduced in corpus linguistics. Numerous contributions bring new materials to scholarly discussion, as recently released or in-progress 'second-generation' corpora are used as data. Recent changes in present-day legal and scientific writing are also discussed as they witness fast adaptation to new requirements, due to the advent and growing familiarity of new technologies, international law and changes in academia.
Corpus-based studies of diachronic English have been thriving over the last three decades to such an extent that the validity of corpora in the enrichment of historical linguistic research is now undeniable. The present book is a collection of papers illustrating the state of the art in corpus-based research on diachronic English, by means of case-study expositions, software presentations, and theoretical discussions on the topic. The majority of these papers were delivered at the
This book presents the first computer program, called KINSHIP, automating the task of componential analysis of kinship vocabularies. KINSHIP accepts as input the kin terms of a language with their attendant kin types and can produce all alternative componential models of a kinship system, including the most parsimonious one, using the minimum number of dimensions and components in a kin term definition. A further simplicity constraint ensures the coordination between kin term definitions. Inspecting previous practices of the method of componential analysis reveals two basic problems in published models: (1) the commonly occurring inconsistency of componential models (violating necessity or sufficiency conditions of kin term definitions), (2) the huge number of alternative componential models. The application of KINSHIP with its simplicity constraints successfully solves both these problems. The utility of the program is illustrated on complete data sets from more than a dozen languages from Indo-European and non-Indo-European origin.
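The parsimony criterion described above can be illustrated with a small sketch. All data and dimension names below are invented for illustration; KINSHIP itself works from raw kin types and applies further coordination constraints between kin term definitions that are not modelled here.

```python
from itertools import combinations

# Hypothetical toy data: six English kin terms coded over candidate
# dimensions. "consanguineal" is deliberately redundant (same value
# everywhere), so a parsimonious model should exclude it.
KIN_TERMS = {
    "father":   {"consanguineal": "yes", "generation": "+1", "lineal": "yes", "sex": "m"},
    "mother":   {"consanguineal": "yes", "generation": "+1", "lineal": "yes", "sex": "f"},
    "uncle":    {"consanguineal": "yes", "generation": "+1", "lineal": "no",  "sex": "m"},
    "aunt":     {"consanguineal": "yes", "generation": "+1", "lineal": "no",  "sex": "f"},
    "son":      {"consanguineal": "yes", "generation": "-1", "lineal": "yes", "sex": "m"},
    "daughter": {"consanguineal": "yes", "generation": "-1", "lineal": "yes", "sex": "f"},
}

def minimal_dimensions(terms):
    """Return the smallest set of dimensions whose value bundles still
    distinguish every kin term, i.e. a sufficient set of definitions
    using the minimum number of dimensions."""
    dims = sorted(next(iter(terms.values())))
    for size in range(1, len(dims) + 1):
        for subset in combinations(dims, size):
            bundles = [tuple(t[d] for d in subset) for t in terms.values()]
            if len(set(bundles)) == len(bundles):  # every definition is unique
                return subset
    return tuple(dims)

print(minimal_dimensions(KIN_TERMS))
```

The brute-force search returns the three informative dimensions and drops the redundant one, mirroring the simplicity constraint the blurb describes.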
This book presents established and state-of-the-art methods in Language Technology (including text mining, corpus linguistics, computational linguistics, and natural language processing), and demonstrates how they can be applied by humanities scholars working with textual data. The landscape of humanities research has recently changed thanks to the proliferation of big data and large textual collections such as Google Books, Early English Books Online, and Project Gutenberg. These resources have yet to be fully explored by new generations of scholars, and the authors argue that Language Technology has a key role to play in the exploration of large-scale textual data. The authors use a series of illustrative examples from various humanistic disciplines (mainly but not exclusively from History, Classics, and Literary Studies) to demonstrate basic and more complex use-case scenarios. This book will be useful to graduate students and researchers in humanistic disciplines working with textual data, including History, Modern Languages, Literary studies, Classics, and Linguistics. This is also a very useful book for anyone teaching or learning Digital Humanities and interested in the basic concepts from computational linguistics, corpus linguistics, and natural language processing.
This volume, composed mainly of papers given at the 1999 conferences of the Forum for German Language Studies (FGLS) at Kent and the Conference of University Teachers of German (CUTG) at Keele, is devoted to differential yet synergetic treatments of the German language. It includes corpus-lexicographical, computational, rigorously phonological, historical/dialectal, comparative, semiotic, acquisitional and pedagogical contributions. In all, a variety of approaches from the rigorously 'pure' and formal to the applied, often feeding off each other to focus on various aspects of the German language.
This work combines interdisciplinary knowledge and experience from research fields of psychology, linguistics, audio-processing, machine learning, and computer science. The work systematically explores a novel research topic devoted to automated modeling of personality expression from speech. For this aim, it introduces a novel personality assessment questionnaire and presents the results of extensive labeling sessions to annotate the speech data with personality assessments. It provides estimates of the Big 5 personality traits, i.e. openness, conscientiousness, extroversion, agreeableness, and neuroticism. Based on a database built on the questionnaire, the book presents models to tell apart different personality types or classes from speech automatically.
Users of natural languages have many word orders with which to encode the same truth-conditional meaning. They choose contextually appropriate strings from these many ways with little conscious effort and with effective communicative results. Previous computational models of when English speakers produce non-canonical word orders, like topicalization, left-dislocation, and clefts, fail-either by overgenerating these statistically rare forms or by undergenerating. The primary goal of this book is to present a better model of when speakers choose to produce certain non-canonical word orders by incorporating the effects of discourse context and speaker goals on syntactic choice. The theoretical model is then used as a basis for building a probabilistic classifier that can select the most human-like word order based on the surrounding discourse context. The model of discourse context used is a methodological advance both from a theoretical and an engineering perspective. It is built up from individual linguistic features, ones more easily and reliably annotated than the direct annotation of a discourse or rhetorical structure for a text. This book makes extensive use of previously unexamined naturally occurring corpus data of non-canonical word order in English, both to illustrate the points of the theoretical model and to train the statistical model.
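The idea of a classifier that selects a word order from discourse-context features can be sketched as a tiny linear scorer. The feature names and weights below are made up for illustration; the book trains its probabilistic classifier on annotated corpus data rather than hand-set weights.

```python
# Invented weights: each candidate word order scores a set of binary
# discourse features; higher score wins. A trained log-linear model
# would estimate these weights from corpus annotations.
WEIGHTS = {
    "canonical":      {"bias": 2.0,  "referent_given": 0.0, "contrast": -1.0},
    "topicalization": {"bias": -2.0, "referent_given": 1.5, "contrast": 2.5},
}

def choose_order(features):
    """Pick the word order with the highest linear score over the
    active discourse-context features."""
    def score(order):
        w = WEIGHTS[order]
        return w["bias"] + sum(w[f] for f in features)
    return max(WEIGHTS, key=score)

print(choose_order([]))                              # no discourse pressure
print(choose_order(["referent_given", "contrast"]))  # context favours fronting
```

With no active features the canonical order wins on its bias alone, which is one way to capture the statistical rarity of non-canonical forms.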
When something is in focus, light falls on it from different angles. The lexicon can be viewed from different sides. Six views are represented in this volume: a cognitivist view of vagueness and lexicalization, a psycholinguistic view of lexical
Computers offer new perspectives in the study of language, allowing us to see phenomena that previously remained obscure because of the limitations of our vantage points. It is not uncommon for computers to be likened to the telescope, or microscope, in this respect. In this pioneering computer-assisted study of translation, Dorothy Kenny suggests another image, that of the kaleidoscope: playful changes of perspective using corpus-processing software allow textual patterns to come into focus and then recede again as others take their place. And against the background of repeated patterns in a corpus, creative uses of language gain a particular prominence. In Lexis and Creativity in Translation, Kenny monitors the translation of creative source-text word forms and collocations uncovered in a specially constructed German-English parallel corpus of literary texts. Using an abundance of examples, she reveals evidence of both normalization and ingenious creativity in translation. Her discussion of lexical creativity draws on insights from traditional morphology, structural semantics and, most notably, neo-Firthian corpus linguistics, suggesting that rumours of the demise of linguistics in translation studies are greatly exaggerated. Lexis and Creativity in Translation is essential reading for anyone interested in corpus linguistics and its impact so far on translation studies. The book also offers theoretical and practical guidance for researchers who wish to conduct their own corpus-based investigations of translation. No previous knowledge of German, corpus linguistics or computing is assumed.
The techniques of natural language processing (NLP) have been widely applied in machine translation and automated message understanding, but have only recently been utilized in second language teaching. This book offers both an argument for and a critical examination of this new application, with an examination of how systems may be designed to exploit the power of NLP, accommodate its limitations, and minimize its risks. This volume marks the first collection of work in the U.S. and Canada that incorporates advanced human language technologies into language tutoring systems, covering languages as diverse as Arabic, Spanish, Japanese, and English.
"A Journey Through Cultures" addresses one of the hottest topics in contemporary HCI: cultural diversity amongst users. For a number of years the HCI community has been investigating alternatives to enhance the design of cross-cultural systems. Most contributions to date have followed either a 'design for each' or a 'design for all' strategy. "A Journey Through Cultures" takes a very different approach. As proponents of CVM - the Cultural Viewpoint Metaphors perspective - the authors invite HCI practitioners to think of how to expose and communicate the idea of cultural diversity. A detailed case study is included which assesses the metaphors' potential in cross-cultural design and evaluation. The results show that cultural viewpoint metaphors have strong epistemic power, leveraged by a combination of theoretical foundations from Anthropology and Semiotics and the authors' own work in HCI and Semiotic Engineering. Luciana Salgado, Carla Leitao and Clarisse de Souza are members of SERG, the Semiotic Engineering Research Group at the Departamento de Informatica of Rio de Janeiro's Pontifical Catholic University (PUC-Rio).
Contemporary corpus linguists use a wide variety of methods to study discourse patterns. This volume provides a systematic comparison of various methodological approaches in corpus linguistics through a series of parallel empirical studies that use a single corpus dataset to answer the same overarching research question. Ten contributing experts each use a different method to address the same broadly framed research question: In what ways does language use in online Q+A forum responses differ across four world English varieties (India, Philippines, United Kingdom, and United States)? Contributions are based on analysis of the same 400,000-word corpus from online Q+A forums, and contributors employ methodologies including corpus-based discourse analysis, audience perceptions, Multi-Dimensional analysis, pragmatic analysis, and keyword analysis. In their introductory and concluding chapters, the volume editors compare and contrast the findings from each method and assess the degree to which 'triangulating' multiple approaches may provide a more nuanced understanding of a research question, with the aim of identifying a set of complementary approaches which could arguably take into account analytical blind spots. Baker and Egbert also consider the importance of issues such as researcher subjectivity, type of annotation, the limitations and affordances of different corpus tools, the relative strengths of qualitative and quantitative approaches, and the value of considering data or information beyond the corpus. Rather than attempting to find the 'best' approach, the focus of the volume is on how different corpus linguistic methodologies may complement one another, and it offers suggestions for further methodological studies which use triangulation to enrich corpus-related research.
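One of the methodologies named above, keyword analysis, is conventionally done by ranking words with Dunning's log-likelihood (G2) statistic against a reference corpus. A minimal sketch follows; the two toy "corpora" are invented and far smaller than any real dataset.

```python
import math
from collections import Counter

def keywords(target, reference):
    """Rank words of the target corpus by Dunning's log-likelihood (G2),
    the statistic commonly used for keyword analysis."""
    t, r = Counter(target), Counter(reference)
    nt, nr = sum(t.values()), sum(r.values())
    scores = {}
    for w in set(t) | set(r):
        a, b = t[w], r[w]
        # expected counts under the null of equal relative frequency
        ea = nt * (a + b) / (nt + nr)
        eb = nr * (a + b) / (nt + nr)
        g2 = 0.0
        if a: g2 += 2 * a * math.log(a / ea)
        if b: g2 += 2 * b * math.log(b / eb)
        scores[w] = g2
    return sorted(scores, key=scores.get, reverse=True)

# Invented toy example: 'innit' occurs only in the target corpus.
target = "innit innit yeah the the the".split()
reference = "the the the the yeah ok".split()
print(keywords(target, reference)[0])  # -> innit
```

Words that are unusually frequent in the target relative to the reference float to the top of the ranking, which is the core of a keyword list.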
This book focuses mainly on logical approaches to computational linguistics, but also discusses integrations with other approaches, presenting both classic and newly emerging theories and applications. Decades of research on theoretical work and practical applications have demonstrated that computational linguistics is a distinctively interdisciplinary area. There is convincing evidence that computational approaches to linguistics can benefit from research on the nature of human language, including from the perspective of its evolution. This book addresses various topics in computational theories of human language, covering grammar, syntax, and semantics. The common thread running through the research presented is the role of computer science, mathematical logic and other subjects of mathematics in computational linguistics and natural language processing (NLP). Promoting intelligent approaches to artificial intelligence (AI) and NLP, the book is intended for researchers and graduate students in the field.
This collection of papers and abstracts stems from the third meeting in the series of Sperlonga workshops on Cognitive Models of Speech Processing. It presents current research on the structure and organization of the mental lexicon, and on the processes that access that lexicon. The volume starts with discussion of issues in acquisition and consideration of questions such as, 'What is the relationship between vocabulary growth and the acquisition of syntax?', and, 'How does prosodic information, concerning the melodies and rhythms of the language, influence the processes of lexical and syntactic acquisition?'. From acquisition, the papers move on to consider the manner in which contemporary models of spoken word recognition and production can map onto neural models of the recognition and production processes. The issue of exactly what is recognised, and when, is dealt with next - the empirical findings suggest that the function of something to which a word refers is accessed with a different time-course to the form of that something. This has considerable implications for the nature, and content, of lexical representations. Equally important are the findings from the studies of disordered lexical processing, and two papers in this volume address the implications of these disorders for models of lexical representation and process (borrowing from both empirical data and computational modelling). The final paper explores whether neural networks can successfully model certain lexical phenomena that have elsewhere been assumed to require rule-based processes.
Multi-Dimensional Analysis: Research Methods and Current Issues provides a comprehensive guide both to the statistical methods in Multi-Dimensional Analysis (MDA) and its key elements, such as corpus building, tagging, and tools. The major goal is to explain the steps involved in the method so that readers may better understand this complex research framework and conduct MD research on their own. Multi-Dimensional Analysis is a method that allows the researcher to describe different registers (textual varieties defined by their social use) such as academic settings, regional discourse, social media, movies, and pop songs. Through multivariate statistical techniques, MDA identifies complementary correlation groupings of dozens of variables, including variables which belong both to the grammatical and semantic domains. Such groupings are then associated with situational variables of texts like information density, orality, and narrativity to determine linguistic constructs known as dimensions of variation, which provide a scale for the comparison of a large number of texts and registers. This book is a comprehensive research guide to MDA.
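The dimension-scoring step of MDA can be sketched in a few lines: feature counts are standardized across texts, and a text's score on a dimension is the sum of z-scores for positively loading features minus those for negatively loading ones. The texts, counts, and feature groupings below are invented; in real MDA the groupings come out of the multivariate (factor) analysis, not stipulation.

```python
import statistics

# Hypothetical per-1,000-word feature counts for four registers.
texts = {
    "conversation": {"private_verbs": 35, "contractions": 50, "nouns": 180},
    "letters":      {"private_verbs": 25, "contractions": 30, "nouns": 200},
    "press":        {"private_verbs": 10, "contractions": 5,  "nouns": 300},
    "academic":     {"private_verbs": 5,  "contractions": 2,  "nouns": 320},
}
POSITIVE = ["private_verbs", "contractions"]  # 'involved' features (assumed loading)
NEGATIVE = ["nouns"]                          # 'informational' features (assumed loading)

def zscores(feature):
    """Standardize one feature's counts across all texts."""
    vals = [t[feature] for t in texts.values()]
    mu, sd = statistics.mean(vals), statistics.pstdev(vals)
    return {name: (t[feature] - mu) / sd for name, t in texts.items()}

def dimension_score(name):
    """Sum of z-scores for positive-loading features minus those
    loading negatively, yielding one scale point per text."""
    return (sum(zscores(f)[name] for f in POSITIVE)
            - sum(zscores(f)[name] for f in NEGATIVE))

scores = {name: round(dimension_score(name), 2) for name in texts}
```

The resulting scores place the four registers on a single comparative scale, with conversation at the 'involved' pole and academic prose at the 'informational' pole.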
The Language of ICT:
* explores the nature of the electronic word and presents the new types of text in which it is found
* examines the impact of the rapid technological change we are living through
* analyses different texts, including email and answerphone messages, webpages, faxes, computer games and articles about IT
* provides detailed guidance on downloading material from the web, gives URLs to visit, and includes a dedicated webpage
* includes a comprehensive glossary of terms.
This book is a description of some of the most recent advances in text classification as part of a concerted effort to achieve computer understanding of human language. In particular, it addresses state-of-the-art developments in the computation of higher-level linguistic features, ranging from etymology to grammar and syntax for the practical task of text classification according to genres, registers and subject domains. Serving as a bridge between computational methods and sophisticated linguistic analysis, this book will be of particular interest to academics and students of computational linguistics as well as professionals in natural language engineering.
This study analyzes passive sentences in English and Portuguese which result from a post-semantic transformation applied when a noun, which does not play the semantic role of actor, is chosen as syntactic subject. Choice between a passive and its non-passive or active counterpart reflects differences in the distribution of information in the sentence as regards the relative importance of the latter's constituents for communication. Such distribution is analyzed in terms of Prague school theory, especially that involving the notions of communicative dynamism and the distribution of theme and rheme. The book concludes with a contrastive analysis of English and Portuguese passive sentence patterns which serves as the basis for observations on the teaching of Portuguese passives to native speakers of English.
Now in its second edition, this volume provides an up-to-date, accessible, yet authoritative introduction to feedback on second language writing for upper undergraduate and postgraduate students, teachers and researchers in TESOL, applied linguistics, composition studies and English for academic purposes (EAP). Chapters written by leading experts emphasise the potential that feedback has for helping to create a supportive teaching environment, for conveying and modelling ideas about good writing, for developing the ways students talk about writing, and for mediating the relationship between students' wider cultural and social worlds and their growing familiarity with new literacy practices. In addition to updated chapters from the first edition, this edition includes new chapters which focus on new and developing areas of feedback research, including student engagement and participation with feedback, the links between SLA and feedback research, automated computer feedback, and the use by students of internet resources and social media as feedback resources.
Stress and accent are central, organizing features of grammar, but their precise nature continues to be a source of mystery and wonder. These issues come to the forefront in acquisition, where the tension between the abstract mental representations and the concrete physical manifestations of stress and accent is deeply reflected. Understanding the nature of the representations of stress and accent patterns, and understanding how stress and accent patterns are learned, informs all aspects of linguistic theory and language acquisition. These two themes - representation and acquisition - form the organizational backbone of this book. Each is addressed along different dimensions of stress and accent, including the position of an accent or stress within various prosodic domains and the acoustic dimensions along which the pronunciation of stress and accent may vary. The research presented in the book is multidisciplinary, encompassing theoretical linguistics, speech science, and computational and experimental research.
The research described in this book shows that conversation analysis can effectively model dialogue. Specifically, this work shows that the multidisciplinary field of communicative ICALL may greatly benefit from including Conversation Analysis. As a consequence, this research makes several contributions to the related research disciplines, such as conversation analysis, second-language acquisition, computer-mediated communication, artificial intelligence, and dialogue systems. The book will be of value for researchers and engineers in the areas of computational linguistics, intelligent assistants, and conversational interfaces.
Semantic fields are lexically coherent - the words they contain co-occur in texts. In this book the authors introduce and define semantic domains, a computational model for lexical semantics inspired by the theory of semantic fields. Semantic domains allow us to exploit domain features for texts, terms and concepts, and they can significantly boost the performance of natural-language processing systems. Semantic domains can be derived from existing lexical resources or can be acquired from corpora in an unsupervised manner. They also have the property of interlinguality, and they can be used to relate terms in different languages in multilingual application scenarios. The authors give a comprehensive explanation of the computational model, with detailed chapters on semantic domains, domain models, and applications of the technique in text categorization, word sense disambiguation, and cross-language text categorization. This book is suitable for researchers and graduate students in computational linguistics.
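The text-categorization use of semantic domains can be sketched as follows: represent each term by its relevance to each domain, sum those weights over a text's words, and assign the text to the highest-scoring domain. The domain model below is invented for illustration; in the book such weights are induced from lexical resources or acquired from corpora in an unsupervised manner.

```python
# Toy domain model: each term carries a relevance weight per domain.
# All terms, domains, and weights are invented for this sketch.
DOMAIN_MODEL = {
    "bank":     {"economy": 0.9, "geography": 0.4},
    "interest": {"economy": 0.8},
    "river":    {"geography": 0.9},
    "loan":     {"economy": 0.9},
}

def domain_vector(words):
    """Sum per-domain relevance of the text's words; terms missing
    from the model contribute nothing."""
    vec = {}
    for w in words:
        for dom, weight in DOMAIN_MODEL.get(w, {}).items():
            vec[dom] = vec.get(dom, 0.0) + weight
    return vec

def categorize(text):
    """Assign the text to its highest-scoring domain, if any."""
    vec = domain_vector(text.lower().split())
    return max(vec, key=vec.get) if vec else None

print(categorize("the bank approved the loan"))  # -> economy
print(categorize("the river bank"))              # -> geography
```

Note how the ambiguous term "bank" is pulled toward different domains by its context, which is also the intuition behind the domain-driven approach to word sense disambiguation mentioned in the blurb.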
"The Yearbook of Corpus Linguistics and Pragmatics" addresses the interface between the two disciplines and offers a platform to scholars who combine both methodologies to present rigorous and interdisciplinary findings about language in real use. Corpus Linguistics and Pragmatics have traditionally represented two paths of scientific thought, parallel but often mutually exclusive and excluding. Corpus Linguistics can offer a meticulous methodology based on mathematics and statistics, while Pragmatics is characterized by its effort in the interpretation of intended meaning in real language. This series will give readers insight into how pragmatics can be used to explain real corpus data and also how corpora can illustrate pragmatic intuitions. The present volume, "Yearbook of Corpus Linguistics and Pragmatics 2014: New Empirical and Theoretical Paradigms in Corpus Pragmatics," proposes innovative research models in the liaison between pragmatics and corpus linguistics to explain language in current cultural and social contexts.