Welcome to Loot.co.za!
Books > Language & Literature > Language & linguistics > Computational linguistics
What led Shakespeare to write his most cryptic poem, 'The Phoenix and Turtle'? Could the Phoenix represent Queen Elizabeth, on the verge of death as Shakespeare wrote? Is the Earl of Essex, recently executed for treason, the Turtledove lover of the Phoenix? Questions such as these dominate scholarship of both Shakespeare's poem and the book in which it first appeared: Robert Chester's enigmatic collection of verse, Love's Martyr (1601), where Shakespeare's allegory sits next to erotic love lyrics by Ben Jonson, George Chapman and John Marston, as well as work by the much lesser-known Chester. Don Rodrigues critiques and revises traditional computational attribution studies by integrating the insights of queer theory into a study of Love's Martyr. A book deeply engaged in current debates in computational literary studies, it is particularly attuned to questions of non-normativity, deviation and departures from style when assessing stylistic patterns. Gathering insights from decades of computational and traditional analyses, it presents, most radically, data that supports the once-outlandish theory that Shakespeare may have had a significant hand in editing works signed by Chester. At the same time, this book insists on the fundamentally collaborative nature of production in Love's Martyr. Developing a compelling account of how collaborative textual production could work among early modern writers, Shakespeare's Queer Analytics is a much-needed methodological intervention in computational attribution studies. It articulates what Rodrigues describes as 'queer analytics': an approach to literary analysis that joins the non-normative close reading of queer theory to the distant attention of computational literary studies - highlighting patterns that traditional readings often overlook or ignore.
The book features recent attempts to construct corpora for specific purposes - e.g. multifactorial Dutch (parallel), Geasy Easy Language Corpus (intralingual), HK LegCo interpreting corpus - and showcases sophisticated and innovative corpus analysis methods. It proposes new approaches to address classical themes - i.e. translation pedagogy, translation norms and equivalence, principles of translation - and brings interdisciplinary perspectives - e.g. contrastive linguistics, cognition and metaphor studies - to cast new light on these themes. It is a timely reference for researchers and postgraduate students who are interested in the applications of corpus technology to solving translation and interpreting problems.
This book is about machine translation (MT) and the classic problems associated with this language technology. It examines the causes of these problems and, for linguistic, rule-based systems, attributes the cause to language's ambiguity and complexity and their interplay in logic-driven processes. For non-linguistic, data-driven systems, the book attributes translation shortcomings to the very lack of linguistics. It then proposes a demonstrable way to relieve these drawbacks in the shape of a working translation model (Logos Model) that has taken its inspiration from key assumptions about psycholinguistic and neurolinguistic function. The book suggests that this brain-based mechanism is effective precisely because it bridges both linguistically driven and data-driven methodologies. It shows how simulation of this cerebral mechanism has freed this one MT model from the all-important, classic problem of complexity when coping with the ambiguities of language. Logos Model accomplishes this by a data-driven process that does not sacrifice linguistic knowledge, but that, like the brain, integrates linguistics within a data-driven process. As a consequence, the book suggests that the brain-like mechanism embedded in this model has the potential to contribute to further advances in machine translation in all its technological instantiations.
The aim of this book is to advocate and promote network models of linguistic systems that are both based on thorough mathematical models and substantiated in terms of linguistics. In this way, the book contributes first steps towards establishing a statistical network theory as a theoretical basis of linguistic network analysis at the border of the natural sciences and the humanities. This book addresses researchers who want to get familiar with theoretical developments, computational models and their empirical evaluation in the field of complex linguistic networks. It is intended for all those who are interested in statistical models of linguistic systems from the point of view of network research. This includes all relevant areas of linguistics, ranging from phonological, morphological and lexical networks on the one hand to syntactic, semantic and pragmatic networks on the other. In this sense, the volume concerns readers from many disciplines such as physics, linguistics, computer science and information science. It may also be of interest to the upcoming area of systems biology, with which the chapters collected here share the view on systems from the point of view of network analysis.
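As a minimal illustration of the kind of object such linguistic network analyses operate on, the sketch below builds a word co-occurrence network from raw text. The toy corpus, the window size, and the function names are illustrative assumptions, not drawn from the book.

```python
from collections import defaultdict

def cooccurrence_network(sentences, window=2):
    """Build an undirected word co-occurrence network.

    Nodes are word types; an edge links two words that appear
    within `window` positions of each other, weighted by count.
    """
    edges = defaultdict(int)
    for sentence in sentences:
        tokens = sentence.lower().split()
        for i, w1 in enumerate(tokens):
            for w2 in tokens[i + 1 : i + window + 1]:
                if w1 != w2:
                    # Sort the pair so (a, b) and (b, a) share one edge.
                    edges[tuple(sorted((w1, w2)))] += 1
    return dict(edges)

def degree(edges, word):
    """Number of distinct neighbours of `word` in the network."""
    return sum(1 for pair in edges if word in pair)

# Toy corpus (illustrative only).
corpus = ["the cat sat on the mat", "the dog sat on the rug"]
net = cooccurrence_network(corpus)
```

On networks built this way from real corpora, statistical measures such as degree distributions and clustering coefficients become the objects of study.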
This book encompasses a collection of topics covering recent advances that are important to the Arabic language in areas of natural language processing, speech and image analysis. This book presents state-of-the-art reviews and fundamentals as well as applications and recent innovations. The book chapters by top researchers present basic concepts and challenges for the Arabic language in linguistic processing, handwritten recognition, document analysis, text classification and speech processing. In addition, it reports on selected applications in sentiment analysis, annotation, text summarization, speech and font analysis, word recognition and spotting and question answering. Moreover, it highlights and introduces some novel applications in vital areas for the Arabic language. The book is therefore a useful resource for young researchers who are interested in the Arabic language and are still developing their fundamentals and skills in this area. It is also of interest to scientists who wish to keep track of the most recent research directions and advances in this area.
This book deals with two fundamental issues in the semiotics of the image. The first is the relationship between image and observer: how does one look at an image? To answer this question, this book sets out to transpose the theory of enunciation formulated in linguistics over to the visual field. It also aims to clarify the gains made in contemporary visual semiotics relative to the semiology of Roland Barthes and Emile Benveniste. The second issue addressed is the relation between the forces, forms and materiality of the images. How do different physical mediums (pictorial, photographic and digital) influence visual forms? How does materiality affect the generativity of forms? Regarding the forces within images, the book draws on the philosophical thought of Gilles Deleuze and Rene Thom as well as on the experiment of Aby Warburg's Atlas Mnemosyne. The theories discussed in the book are tested on a variety of corpora, including both paintings and photographs, taken from traditional as well as contemporary sources in a variety of social sectors (arts and sciences). Finally, semiotic methodology is contrasted with the computational analysis of large collections of images (Big Data), such as the "Media Visualization" analyses proposed by Lev Manovich and Cultural Analytics, in order to evaluate the impact of automatic analysis of visual forms on Digital Art History and, more generally, on the image sciences.
This readable introductory textbook presents a concise survey of corpus linguistics. The first section of the book introduces the key concepts in corpus linguistics and provides a brief history of the discipline. The second section expands the study of language and shows how corpus linguistics can advance our study of words and meaning, the benefits of studying corpora, and how meaning can best be conceptualised. Explaining corpus linguistics in easy-to-understand terms, and including a glossary and suggestions for further reading, this book will be useful to students trying to get a grasp on this subject.
This book presents the concept of the double hierarchy linguistic term set and its extensions, which can deal with dynamic and complex decision-making problems. With the rapid development of science and technology and the acceleration of information updating, the complexity of decision-making problems has become increasingly obvious. This book provides a comprehensive and systematic introduction to the latest research in the field, including measurement methods, consistency methods, group consensus and large-scale group consensus decision-making methods, as well as their practical applications. Intended for engineers, technicians, and researchers in the fields of computer linguistics, operations research, information science, management science and engineering, it also serves as a textbook for postgraduate and senior undergraduate university students.
This book investigates various aspects of Computer Assisted Language Learning (CALL) that address the challenges arising due to increasing learner and teacher mobility. The chapters deal with two broad areas, i.e. mobile technology for teacher and translator education and technology for mobile language learning. The authors allow for insights into how mobile learning activities can be used in educational settings by providing research on classroom practice. This book aims at helping readers gain a better understanding of the function and implementation of mobile technologies in local classroom contexts to support mobility, professional development, and language and culture learning.
This groundbreaking book offers a new and compelling perspective on the structure of human language. The fundamental issue it addresses is the proper balance between syntax and semantics, between structure and derivation, and between rule systems and lexicon. It argues that the balance struck by mainstream generative grammar is wrong. It puts forward a new basis for syntactic theory, drawing on a wide range of frameworks, and charts new directions for research. In the past four decades, theories of syntactic structure have become more abstract, and syntactic derivations have become ever more complex. Peter Culicover and Ray Jackendoff trace this development through the history of contemporary syntactic theory, showing how much it has been driven by theory-internal rather than empirical considerations. They develop an alternative that is responsive to linguistic, cognitive, computational, and biological concerns. Simpler Syntax is addressed to linguists of all persuasions. It will also be of central interest to those concerned with language in psychology, human biology, evolution, computational science, and artificial intelligence.
Addresses a central problem in cognitive science, concerning the learning procedures through which humans acquire and represent natural language. Brings together world-leading scholars from a range of disciplines, including computational linguistics, psychology, behavioural science, and mathematical linguistics. Will appeal to researchers in computational and mathematical linguistics, psychology and behavioural science, AI and NLP. Represents a wide spectrum of perspectives.
This book presents recent advances by leading researchers in computational modelling of language acquisition. The contributors have been drawn from departments of linguistics, cognitive science, psychology, and computer science. They show what light can be thrown on fundamental problems when powerful computational techniques are combined with real data. The book considers the extent to which linguistic structure is readily available in the environment, the degree to which language learning is inductive or deductive, and the power of different modelling formalisms for different problems and approaches. It will appeal to linguists, psychologists, cognitive scientists working in language acquisition, and to those involved in computational modelling in linguistic and behavioural science.
As natural language processing spans many different disciplines, it is sometimes difficult to understand the contributions and the challenges that each of them presents. This book explores the special relationship between natural language processing and cognitive science, and the contribution of computer science to these two fields. It is based on research papers presented at the international workshops on Natural Language Processing and Cognitive Science (NLPCS), a series launched in 2004 in an effort to bring together natural language researchers, computer scientists, and cognitive and linguistic scientists to collaborate and advance research in natural language processing. The chapters cover areas related to language understanding, language generation, word association, word sense disambiguation, word predictability, text production and authorship attribution. This book will be relevant to students and researchers interested in the interdisciplinary nature of language processing.
This book investigates the language of Polish-English bilingual children raised in the United Kingdom and their Polish monolingual counterparts. It exemplifies the lexico-grammatical knowledge of both groups and uses corpus-based grammatical inference in order to establish the source of the impediment in the minority language of the bilingual group. The author applies the methodology of corpus linguistics and narrative analysis to study the language of young bilinguals. He presupposes the caveat that a child-type competence exists and can be contrasted with an adult-type competence. He uses a variety of corpus frequency measures to compare the specific stylometric features of bilingual child narratives with those of their monolingual counterparts. The book focuses on how bilingual and monolingual language differs in areas such as the lexicon, morphosyntax, and semantics.
This volume explores multiple dimensions of openness in ICT-enhanced education. The chapters, contributed by researchers and academic teachers, present a number of exemplary solutions in the area. They involve the use of open source software, innovative technologies, teaching/learning methods and techniques, as well as examine potential benefits for both teachers' and students' cognitive, behavioural and metacognitive development.
Users of natural languages have many word orders with which to encode the same truth-conditional meaning. They choose contextually appropriate strings from these many ways with little conscious effort and with effective communicative results. Previous computational models of when English speakers produce non-canonical word orders, like topicalization, left-dislocation and clefts, fail. The primary goal of this book is to present a better model of when speakers choose to produce certain non-canonical word orders by incorporating the effects of discourse context and speaker goals on syntactic choice. This book makes extensive use of previously unexamined naturally occurring corpus data of non-canonical word order in English, both to illustrate the points of the theoretical model and to train the statistical model.
Contemporary corpus linguists use a wide variety of methods to study discourse patterns. This volume provides a systematic comparison of various methodological approaches in corpus linguistics through a series of parallel empirical studies that use a single corpus dataset to answer the same overarching research question. Ten contributing experts each use a different method to address the same broadly framed research question: In what ways does language use in online Q+A forum responses differ across four world English varieties (India, Philippines, United Kingdom, and United States)? Contributions are based on analysis of the same 400,000-word corpus from online Q+A forums, and contributors employ methodologies including corpus-based discourse analysis, audience perceptions, Multi-Dimensional analysis, pragmatic analysis, and keyword analysis. In their introductory and concluding chapters, the volume editors compare and contrast the findings from each method and assess the degree to which 'triangulating' multiple approaches may provide a more nuanced understanding of a research question, with the aim of identifying a set of complementary approaches which could arguably take into account analytical blind spots. Baker and Egbert also consider the importance of issues such as researcher subjectivity, type of annotation, the limitations and affordances of different corpus tools, the relative strengths of qualitative and quantitative approaches, and the value of considering data or information beyond the corpus. Rather than attempting to find the 'best' approach, the focus of the volume is on how different corpus linguistic methodologies may complement one another, and it offers suggestions for further methodological studies which use triangulation to enrich corpus-related research.
This book contains a selection of articles on new developments in translation and interpreting studies. It offers a wealth of new and innovative approaches to the didactics of translation and interpreting that may well change the way in which translators and interpreters are trained. They include such issues of current debate as assessment methods and criteria, assessment of competences, graduate employability, placements, skills labs, the perceived skills gap between training and profession, the teaching of terminology, and curriculum design. The authors are experts in their fields from renowned universities in Europe, Africa and North-America. The book will be an indispensable help for trainers and researchers, but may also be of interest to translators and interpreters.
The content of this textbook is organized as a theory of language for the construction of talking robots. The main topic is the mechanism of natural language communication in both the speaker and the hearer. In the third edition the author has modernized the text, leaving the overview of traditional, theoretical, and computational linguistics, analytic philosophy of language, and mathematical complexity theory with their historical backgrounds intact. The format of the empirical analyses of English and German syntax and semantics has been adapted to current practice; and Chaps. 22-24 have been rewritten to focus more sharply on the construction of a talking robot.
Handbook of Artificial Intelligence in Biomedical Engineering focuses on recent AI technologies and applications that provide some very promising solutions and enhanced technology in the biomedical field. Recent advancements in computational techniques, such as machine learning, Internet of Things (IoT), and big data, accelerate the deployment of biomedical devices in various healthcare applications. This volume explores how artificial intelligence (AI) can be applied to these expert systems by mimicking the human expert's knowledge in order to predict and monitor health status in real time. The accuracy of AI systems increases dramatically through machine learning, digitized medical data acquisition, wireless medical data communication, and computing infrastructure; these AI approaches help to solve complex issues in the biomedical industry and play a vital role in future healthcare applications. The volume takes a multidisciplinary perspective of employing these new applications in biomedical engineering, exploring the combination of engineering principles with biological knowledge that contributes to the development of revolutionary and life-saving concepts.
Inheritance, which has its origins in the field of artificial intelligence, is a framework focusing on shared properties. When applied to inflectional morphology, it enables useful generalizations within and across paradigms. The inheritance tree format serves as an alternative to traditional paradigms and provides a visual representation of the structure of the language's morphology. This mapping also enables cross-linguistic morphological comparison. In this book, the nominal inflectional morphology of Old High German, Latin, Early New High German, and Koine Greek is analyzed using inheritance trees. Morphological data is drawn from parallel texts in each language; the trees may be used as a translation aid to readers of the source texts as an accompaniment to or substitute for traditional paradigms. The trees shed light on the structural similarities and differences among the four languages.
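The idea of inheriting shared inflectional properties can be sketched with ordinary class inheritance: a subclass states only what deviates from its parent and inherits the rest. The Latin endings below are standard second-declension singular forms, but the class design itself is an illustrative assumption, not the book's formalism.

```python
class Noun:
    """Root of the hierarchy: properties shared by all paradigms."""
    endings = {}

    def __init__(self, stem):
        self.stem = stem

    def form(self, case):
        # The ending is looked up in the most specific class that defines it.
        return self.stem + self.endings[case]

class SecondDeclension(Noun):
    """Latin second-declension masculine endings (singular)."""
    endings = {"nom": "us", "acc": "um", "gen": "i", "dat": "o", "abl": "o"}

class SecondDeclensionNeuter(SecondDeclension):
    """Neuter nouns inherit everything but override nom/acc."""
    endings = {**SecondDeclension.endings, "nom": "um", "acc": "um"}

dominus = SecondDeclension("domin")       # 'lord'
bellum = SecondDeclensionNeuter("bell")   # 'war'
```

The neuter class restates only the two cells where it diverges; the genitive, dative, and ablative come for free from the parent, which is the kind of within-paradigm generalization the inheritance-tree format makes visible.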
"Empirical Methods in Language Studies" presents 22 papers employing a broad range of empirical methods in the analysis of various aspects of language and communication. The individual texts offer contributions to the description of conceptual strategies, syntax, semantics, non-verbal communication, language learning, discourse, and literature.
This book delineates a range of linguistic features that characterise the reading texts used at the B2 (Independent User) and C1 (Proficient User) levels of the Greek State Certificate of English Language Proficiency exams in order to help define text difficulty per level of competence. In addition, it examines whether specific reader variables influence test takers' perceptions of reading comprehension difficulty. The end product is a Text Classification Profile per level of competence and a formula for automatically estimating text difficulty and assigning levels to texts consistently and reliably in accordance with the purposes of the exam and its candidature-specific characteristics.
The 1990s saw a paradigm change in the use of corpus-driven methods in NLP. In the field of multilingual NLP (such as machine translation and terminology mining) this implied the use of parallel corpora. However, parallel resources are relatively scarce: many more texts are produced daily by native speakers of any given language than are translated. This situation resulted in a natural drive towards the use of comparable corpora, i.e. non-parallel texts in the same domain or genre. Nevertheless, this research direction has not produced a single authoritative source suitable for researchers and students coming to the field. This volume provides a reference source, identifying the state of the art in the field as well as future trends. The book is intended for specialists and students in natural language processing, machine translation and computer-assisted translation.
This volume contains papers which reflect current discussions in the study of speech actions. The collection was inspired by the papers presented at Meaning, Context and Cognition, the first international conference integrating cognitive linguistics and pragmatics initiated by the Department of English Language and Applied Linguistics of the University of Lodz (Poland) in 2011 and held annually. The necessarily heterogeneous field of research into speech actions is approached by the contributors from various perspectives and with focus on different types of data. The papers have been grouped into four sections which subsequently emphasise theoretical linguistics issues, lexical pragmatics, speech act-theoretic problems, and cognitive processes.