This book brings together heterogeneous insights (from languages and literature, history, music, media and communications, computer science and information studies) that previously went their separate ways, unifying them under a single framework for the purpose of preserving a unique heritage: the language. In a growing society like ours, the description and documentation of human and scientific evidence and resources are improving. However, cost-effective solutions for such resources exist for Western languages but have yet to flourish for African tone languages. By situating its discussions within a universe of discourse sufficient to engender cross-border interactions within the African context, this book addresses the challenges of the adaptive processes required to unify resources and support the development of modern solutions for the African domain.
This book presents methods and approaches used to identify the true author of a doubtful document or text excerpt. It provides a broad introduction to text categorization problems grounded in stylistic features (such as authorship attribution, profiling the psychological traits of an author, and detecting fake news). Specifically, machine learning models are presented in detail as valuable tools for verifying hypotheses or revealing significant patterns hidden in datasets. Stylometry is a multidisciplinary field combining linguistics with statistics and computer science. The content is divided into three parts. The first, consisting of the first three chapters, offers a general introduction to stylometry, its potential applications and limitations, and introduces the running example used to illustrate the concepts discussed throughout the remainder of the book. The four chapters of the second part are devoted more to computer science, with a focus on machine learning models; their main aim is to explain how such models can solve stylometric problems. Several general strategies used to identify, extract, select, and represent stylistic markers are explained. As deep learning represents an active field of research, information on neural network models and word embeddings applied to stylometry is provided, as well as a general introduction to the deep learning approach to solving stylometric questions. In turn, the third part illustrates the application of the previously discussed approaches to real cases: an authorship attribution problem seeking to discover the secret hand behind the nom de plume Elena Ferrante, an Italian writer known worldwide for the My Brilliant Friend saga; author profiling to identify whether a set of tweets was generated by a bot or a human being and, in the latter case, whether by a man or a woman; and an exploration of stylistic variation over time using US political speeches covering a period of ca. 230 years. A solutions-based approach is adopted throughout the book, and explanations are supported by examples written in R. To complement the main content and the discussions of stylometric models and techniques, the examples and datasets are freely available on the author's GitHub site.
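The core pipeline described here (extract stylistic markers, then let a model attribute authorship) can be sketched in a few lines. The book's examples are in R; the following Python sketch, with an invented marker list and toy data, only illustrates the idea of comparing function-word frequency profiles:

```python
from collections import Counter

# A small, fixed inventory of function words: classic stylistic markers,
# since their rates are largely topic-independent and hard to fake.
MARKERS = ["the", "of", "and", "to", "in", "that", "it", "is", "was", "with"]

def feature_vector(text):
    """Relative frequency of each marker word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = len(words) or 1
    return [counts[w] / total for w in MARKERS]

def nearest_author(candidates, disputed):
    """Attribute the disputed text to the candidate whose frequency profile
    is closest (Euclidean distance) -- a bare-bones stand-in for the
    machine learning models the book covers."""
    target = feature_vector(disputed)
    def dist(author):
        profile = feature_vector(candidates[author])
        return sum((a - b) ** 2 for a, b in zip(profile, target)) ** 0.5
    return min(candidates, key=dist)

# Toy corpora (hypothetical): known writings of two authors.
known = {
    "A": "the cat sat on the mat and the dog slept in the sun and it was warm",
    "B": "to be or not to be that is the question that troubles the mind",
}
print(nearest_author(known, "the bird flew over the wall and the cat watched it in silence"))  # → A
```

Real stylometric work uses hundreds of markers and proper classifiers (e.g. Burrows's Delta or the models discussed in the book); the nearest-profile rule here is only the simplest possible stand-in.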
This book collects research contributions concerning quantitative approaches to characterizing originality and universality in language. The target audience comprises researchers and experts in the field, but the book may also be beneficial for graduate students. Creativity might be considered a morphogenetic process combining universal features with originality. While quantitative methods applied to text and music reveal universal features of language and music, originality is a highly appreciated feature of authors, composers, and performers. In this framework, the different methods of the traditional problems of authorship attribution and document classification provide important insights into how to quantify the unique features of authors, composers, and styles. Such unique features contrast with, and are restricted by, universal signatures, such as scaling laws in word-frequency distributions, entropy measures, and long-range correlations, among others. This interplay between innovation and universality is also an essential ingredient of methods for automatic text generation. Innovation in language becomes relevant when it is imitated and spreads to other speakers and musicians. Modern digital databases provide new opportunities to characterize and model the creation and evolution of linguistic innovations on historical time scales, a particularly important example of the more general problem of the spreading of innovations in complex social systems. This multidisciplinary book brings together scientists from a variety of backgrounds interested in the quantitative analysis of variation (synchronic and diachronic) in language and music. The aim is to obtain a deeper understanding of how originality emerges, how it can be quantified, and how it propagates.
Computers are essential for the functioning of our society. Despite the incredible power of existing computers, computing technology is progressing beyond today's conventional models, and Quantum Computing (QC) is surfacing as a promising disruptive technology. Built on the principles of quantum mechanics, QC can run algorithms that are not trivial to run on digital computers. QC systems are being developed for the discovery of new materials and drugs, and for improved methods of encoding information for secure communication over the Internet. Unprecedented new uses for this technology are bound to emerge from ongoing research. The development of conventional digital computing technology for the arts and humanities has been progressing in tandem with the evolution of computers since the 1950s, and today computers are absolutely essential for the arts and humanities. Future developments in QC are therefore likely to affect the way in which artists create and perform, and how research in the humanities is conducted. This book presents a comprehensive collection of chapters by pioneers of emerging interdisciplinary research at the crossroads of quantum computing and the arts and humanities, from philosophy and the social sciences to the visual arts and music. Prof. Eduardo Reck Miranda is a composer and Professor of Computer Music at Plymouth University, UK, where he is director of the Interdisciplinary Centre for Computer Music Research (ICCMR). His previous publications include the Springer titles Handbook of Artificial Intelligence for Music, Guide to Unconventional Computing for Music, Guide to Brain-Computer Music Interfacing and Guide to Computing for Expressive Music Performance.
Computational Analysis and Understanding of Natural Languages: Principles, Methods and Applications, Volume 38, the latest release in this series, provides a cohesive and integrated exposition of these advances and their associated applications. New chapters cover Linguistics: Core Concepts and Principles; Grammars; Open-Source Libraries; Application Frameworks; Workflow Systems; Mathematical Essentials; Probability; Inference and Prediction Methods; Random Processes; Bayesian Methods; Machine Learning; Artificial Neural Networks for Natural Language Processing; Information Retrieval; Language Core Tasks; Language Understanding Applications; and more. The synergistic confluence of linguistics, statistics, big data, and high-performance computing is the underlying force behind the recent and dramatic advances in analyzing and understanding natural languages, making this series all the more important.
This book applies formal language and automata theory in the context of Tibetan computational linguistics; further, it constructs a Tibetan-spelling formal grammar system that generates a Tibetan-spelling formal language group, and an automata group that can recognize the language group. In addition, it investigates the application technologies of Tibetan-spelling formal language and automata. Given its creative and original approach, the book offers a valuable reference guide for researchers, teachers and graduate students in the field of computational linguistics.
This is the first book to investigate the field of phraseology from a learner corpus perspective, bringing together studies at the cutting edge of corpus-based research into phraseology and language learners. The chapters include learner-corpus-based studies of phraseological units in varieties of learner language, differentiated in terms of task and/or learner variables and compared with each other or with one or more reference corpora; mixed-methods studies that combine learner corpus data with more experimental data types (e.g. eye-tracking); and instruction-oriented studies that show how learner-corpus-based insights can be used to inform second language (L2) teaching and testing. The detailed analysis of a wide range of multiword units (collocations, lexical bundles, lexico-grammatical patterns) and extensive learner corpus data provide the reader with a comprehensive theoretical, methodological and applied perspective on L2 use in a wide range of situations. The knowledge gained from these learner corpus studies has major implications for L2 theory and practice and will help to inform pedagogical assessment and practice.
*Innovative examination of augmentation technologies in terms of technical, social, and ethical considerations *Usable as a supplemental text for a variety of courses, and also of interest to researchers and professionals in fields including technical communication, digital communication, UX design, information technology, informatics, human factors, artificial intelligence, ethics, philosophy of technology, and sociology of technology *First major work to combine technological, ethical, social, and rhetorical perspectives on human augmentation *Additional cases and research material available in the authors' Fabric of Digital Life research database at https://fabricofdigitallife.com/
The book covers theoretical work, approaches, applications, and techniques for computational models of information, language, and reasoning. Computational and technological developments that incorporate natural language are proliferating, yet adequate coverage of natural language in artificial intelligence runs into difficulties that demand specialized computational approaches and algorithms. Many of these difficulties are due to ambiguities in natural language and the dependency of interpretations on contexts and agents. Classical approaches proceed with relevant updates, and new developments emerge in the theories of formal and natural languages, computational models of information and reasoning, and related computerized applications. The book's focus is on the computational processing of human language and of relevant medium languages, which can be theoretically formal or intended for the programming and specification of computational systems. The goal is to promote intelligent natural language processing, along with models of computation, language, reasoning, and other cognitive processes.
This book brings together selected revised papers representing a multidisciplinary approach to language, music, and gesture, as well as their interaction. Among the many multidisciplinary and comparative studies of the structure and organization of language and music, this book broadens the scope by including gesture in the analyzed spectrum. A unique feature of the collection is that the papers, compiled in one volume, allow readers to see similarities and differences between gesture as an element of non-verbal communication and gesture as the main element of dance. The data on the perception and comprehension of speech, music, and dance, with regard both to their functioning in natural situations and to their reflection in various forms of the performing arts, further enhance the analysis and make this collection extremely useful for those interested in human cognitive abilities and performing skills. The book begins with a philosophical overview of recent neurophysiological studies reflecting the complexity of higher cognitive functions, which references the idea of the baroque style in art being neither linear nor stable. The following papers are organized into five sections. The papers of the section "Language-Music-Gesture as Semiotic Systems" discuss the symbolic and semiotic aspects of language, music, and gesture, including from the perspective of their notation. These are followed by the issues of "Language-Music-Gesture Onstage" and their interaction within the idea of the "World as a Text." The papers of "Teaching Language and Music" present new teaching methods that take into account the interaction of all the cognitive systems examined. The papers of the last two sections focus on issues related primarily to language: the section "Verbalization of Music and Gesture" considers the problem of describing musical text and non-verbal behavior with language, and the papers in the final section, "Emotions in Linguistics and AI Communication Systems", analyze the ways of expressing emotions in speech and the problems of organizing emotional communication with computer agents.
*The most comprehensive, up-to-date, student-friendly guide to translation tools and technologies *Translation tools and technologies are an essential component of any translator training programme, following European Masters in Translation framework guidelines *Unlike the competition, this textbook offers comprehensive and accessible explanations of how to use current translation tools, illustrated by examples using a wide range of languages and linked to task-oriented, self-study training materials
Computational semantics is the art and science of computing meaning in natural language. The meaning of a sentence is derived from the meanings of the individual words in it, and this process can be made so precise that it can be implemented on a computer. Designed for students of linguistics, computer science, logic and philosophy, this comprehensive text shows how to compute meaning using the functional programming language Haskell. It deals with both denotational meaning (where meaning comes from knowing the conditions of truth in situations), and operational meaning (where meaning is an instruction for performing cognitive action). Including a discussion of recent developments in logic, it will be invaluable to linguistics students wanting to apply logic to their studies, logic students wishing to learn how their subject can be applied to linguistics, and functional programmers interested in natural language processing as a new application area.
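The compositional idea described here (word meanings as functions, sentence meaning computed by function application) can be illustrated outside Haskell as well. Below is a minimal Python sketch of a denotational fragment; the tiny model and vocabulary are invented purely for illustration:

```python
# Word meanings as functions; sentence meaning falls out of function application.
# A toy model: a domain of entities plus the facts that hold in it.
entities = {"john", "mary", "rex"}
boys = {"john"}
sleepers = {"john", "rex"}

# Common nouns and intransitive verbs denote predicates (entity -> bool).
boy = lambda x: x in boys
sleeps = lambda x: x in sleepers

# Determiners denote relations between two predicates (restrictor, scope).
every = lambda restr: lambda scope: all(scope(x) for x in entities if restr(x))
some = lambda restr: lambda scope: any(scope(x) for x in entities if restr(x))

# "every boy sleeps" = every(boy)(sleeps): a truth value in this model.
print(every(boy)(sleeps))                      # → True
print(some(lambda x: x in {"mary"})(sleeps))   # → False
```

This mirrors the denotational side of the book's programme (truth conditions in a model); the operational side, where meanings are instructions for cognitive action, would require richer machinery than this sketch shows.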
This book presents a method of linking the ordered structure of the cosmos with human thought: the theory of language holography. In the view presented here, the cosmos is in harmony with the human body and language, and human thoughts are holographic with the cosmos at the level of language. In short, the holographic relation is the bridge by means of which Guanlian Qian connects the cosmos, humans, and language, a vitally important contribution to linguistic and philosophical studies that cannot be ignored. The book has two main focus areas: outer language holography and inner language holography. These two areas constitute the core of the dynamic and holistic view put forward in the theory of language holography. The book's main properties can be summarized as follows. First, it is a book created in toto by a Chinese scholar devoted to pragmatics, theoretical linguistics, and the philosophy of language. Second, the book was accepted by a top Chinese publisher and republished the following year, reflecting its value and appeal. Third, in terms of writing style, the book is characterized by succinctness and logic; as a result, it reads fluidly and smoothly, without the redundancies that are not uncommon in linguistic or even philosophical works. Lastly, as stated by the author in the introduction, "Creation is the development of previous capacities, but it is also the generation of new ones"; this book can be said to put that concept into practice. Overall, the book offers a unique resource to readers around the world who want to know more about truly original and innovative studies of language in Chinese academia.
As natural language processing spans many different disciplines, it is sometimes difficult to understand the contributions and challenges that each of them presents. This book explores the special relationship between natural language processing and cognitive science, and the contribution of computer science to these two fields. It is based on recent research papers submitted to the international workshop series on Natural Language Processing and Cognitive Science (NLPCS), launched in 2004 in an effort to bring together natural language researchers, computer scientists, and cognitive and linguistic scientists to collaborate and advance research in natural language processing. The chapters cover areas related to language understanding, language generation, word association, word sense disambiguation, word predictability, text production and authorship attribution. This book will be relevant to students and researchers interested in the interdisciplinary nature of language processing.
The relation between ontologies and language is currently at the forefront of natural language processing (NLP). Ontologies, as widely used models in semantic technologies, have much in common with the lexicon. A lexicon organizes words as a conventional inventory of concepts, while an ontology formalizes concepts and their logical relations. A shared lexicon is the prerequisite for knowledge-sharing through language, and a shared ontology is the prerequisite for knowledge-sharing through information technology. In building models of language, computational linguists must be able to accurately map the relations between words and the concepts that they can be linked to. This book focuses on the technology involved in enabling integration between lexical resources and semantic technologies. It will be of interest to researchers and graduate students in NLP, computational linguistics, and knowledge engineering, as well as in semantics, psycholinguistics, lexicology and morphology/syntax.
Complex systems in nature and society make use of information for the development of their internal organization and the control of their functional mechanisms. Alongside technical aspects of storing, transmitting and processing information, the various semantic aspects of information, such as meaning, sense, reference and function, play a decisive part in the analysis of such systems. With the aim of fostering a better understanding of semantic systems from an evolutionary and multidisciplinary perspective, this volume collects contributions by philosophers and natural scientists, linguists, information and computer scientists. They do not follow a single research paradigm; rather they shed, in a complementary way, new light upon some of the most important aspects of the evolution of semantic systems. Evolution of Semantic Systems is intended for researchers in philosophy, computer science, and the natural sciences who work on the analysis or development of semantic systems, ontologies, or similar complex information structures. In the eleven chapters, they will find a broad discussion of topics ranging from underlying universal principles to representation and processing aspects to paradigmatic examples.
This book brings together work on Turkish natural language and speech processing over the last 25 years, covering numerous fundamental tasks ranging from morphological processing and language modeling, to full-fledged deep parsing and machine translation, as well as computational resources developed along the way to enable most of this work. Owing to its complex morphology and free constituent order, Turkish has proved to be a fascinating language for natural language and speech processing research and applications. After an overview of the aspects of Turkish that make it challenging for natural language and speech processing tasks, this book discusses in detail the main tasks and applications of Turkish natural language and speech processing. A compendium of the work on Turkish natural language and speech processing, it is a valuable reference for new researchers considering computational work on Turkish, as well as a one-stop resource for commercial and research institutions planning to develop applications for Turkish. It also serves as a blueprint for similar work on other Turkic languages such as Azeri, Turkmen and Uzbek.
The papers compiled in the present volume reflect the key theme of the most recent Duo Colloquium sessions: contextuality. The psychological notion of context has been central to translation research for decades, and it has evolved along with the development of translational thought, translation types and tools. Contextuality can be understood at any level, from the geopolitical to the textual, and embraced by both academic and professional considerations of translational and interpreting phenomena. The volume is centred on context, contexts and/or decontextualisation in translation and interpreting theory and practice, viewed from a variety of disciplinary, interdisciplinary and transdisciplinary perspectives.
This volume showcases original, agenda-setting studies in the field of learner corpus research, covering both spoken and written production. The studies have important applications for classroom pedagogy. The volume brings readers up to date with new written and spoken learner corpora, often examining previously under-examined variables in learner corpus investigations. It also demonstrates innovative applications of learner corpus findings, addressing issues such as the effect of task, the effect of learner variables and the nature of learner language. The volume is of significant interest to researchers working in corpus linguistics, learner corpus research, second language acquisition and English for Academic and Specific Purposes, as well as to practitioners interested in the application of the findings in language teaching and assessment.
"Recent decades of studies have been human-centred while zooming in on cognition, verbal choices and performance. (...) [and] have provided interesting results, but which often veer towards quantity rather than quality findings. The new reality, however, requires new directions that move towards a humanism that is rooted in holism, stressing that a living organism needs to refocus in order to see the self as a part of a vast ecosystem." Dr Izabela Dixon, Koszalin University of Technology, Poland
"This volume is a collection of eight chapters by different authors focusing on ecolinguistics. It is preceded by a preface (...) underlin[ing] the presence of ecolinguistics as a newly-born linguistic theory and practice, something that explains the mosaic of content and method in the various chapters, with a more coherent approach being the aim for future research." Prof. Harald Ulland, Bergen University, Norway
This book is the first linguistic study to combine corpus linguistics (CL) and critical discourse analysis (CDA) in comparing the media representations of Macau's gaming industry in English-language newspapers published in Mainland China, Hong Kong, and Macau. An analytical framework based on the notion of the extended units of meaning of a lexical item (Sinclair, 2004) is adopted to examine the ideological stances regarding Macau's gaming industry in the three newspapers by comparing their patterns of co-selection of shared and unique words and phraseologies. The book's findings confirm that the news media in these three territories differ in their ideological stances. Moreover, the book offers readers a fresh perspective on Macau by exploring how the region and its gaming industry are represented in three corpora of news articles, providing unique insights into the similarities and differences among the three territories. Further, the research suggests that the methods adopted in this book can be replicated to examine and compare news and political discourses in a variety of contexts. Accordingly, the book represents a valuable resource not only for students majoring in linguistics, media studies, communication, journalism and related subjects, but also for researchers in the fields of corpus linguistics and critical discourse analysis.
This concise volume offers an accessible introduction to state-of-the-art artificial intelligence (AI) language models, providing a platform for their use in textual interpretation across the humanities and social sciences. The book outlines the affordances of new technologies for textual analysis, which has historically employed established approaches within the humanities. Neuman, Danesi, and Vilenchik argue that these different forms of analysis are indeed complementary, demonstrating the ways in which AI-based perspectives echo similar theoretical and methodological currents in traditional approaches while also offering new directions for research. The volume showcases examples from a wide range of texts, including novels, television shows, and films to illustrate the ways in which the latest AI technologies can be used for "dialoguing" with textual characters and examining textual meaning coherence. Illuminating the potential of AI language models to both enhance and extend research on the interpretation of texts, this book will appeal to scholars interested in cognitive approaches to the humanities in such fields as literary studies, discourse analysis, media studies, film studies, psychology, and artificial intelligence.
Now in its second edition, Text Analysis with R provides a practical introduction to computational text analysis using the open-source programming language R. R is an extremely popular programming language used throughout the sciences; thanks to its accessibility, it is now used increasingly in other research areas as well. In this volume, readers begin working with text immediately, and each chapter examines a new technique or process, giving readers broad exposure to core R procedures and a fundamental understanding of the possibilities of computational text analysis at both the micro and the macro scale. Each chapter builds on its predecessor as readers move from small-scale "microanalysis" of single texts to large-scale "macroanalysis" of text corpora, and each concludes with a set of practice exercises that reinforce and expand upon the chapter lessons. The book's focus is on making the technical palatable, useful, and immediately gratifying. Text Analysis with R is written with students and scholars of literature in mind but will be applicable to other humanists and social scientists wishing to extend their methodological toolkit to include quantitative and computational approaches to the study of text. Computation provides access to information in text that readers simply cannot gather using traditional qualitative methods of close reading and human synthesis. This new edition features two new chapters: one that introduces dplyr and tidyr in the context of parsing and analyzing dramatic texts to extract speaker and receiver data, and one on sentiment analysis using the syuzhet package. It is also filled with updated material in every chapter to integrate new developments in the field, current practices in R style, and the use of more efficient algorithms.
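The small-scale "microanalysis" such a book starts from (tokenize a text, count word frequencies) takes only a few lines in any language. The book itself works in R; the Python sketch below, using an invented snippet of text, shows the same first step:

```python
from collections import Counter
import re

# A toy text standing in for a novel loaded from a file.
text = """Call me Ishmael. Some years ago - never mind how long precisely -
having little or no money in my purse, I thought I would sail about a little."""

# Tokenize into lowercase word tokens, then rank by frequency --
# the kind of single-text microanalysis such a course begins with.
tokens = re.findall(r"[a-z']+", text.lower())
freqs = Counter(tokens)
print(freqs.most_common(3))
```

From here, macroanalysis is mostly a matter of repeating the same counts over a whole corpus of files and comparing the resulting frequency tables.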