Welcome to Loot.co.za!
Books > Language & Literature > Language & linguistics > Computational linguistics
The idea that the expression of radical beliefs is a predictor of future acts of political violence has been a central tenet of counter-extremism over the last two decades. Not only has this imposed a duty upon doctors, lecturers and teachers to inform on the radical beliefs of their patients and students but, as this book argues, it is also a fundamentally flawed concept. Informed by his own experience with the UK's Prevent programme while teaching in a Muslim community, Rob Faure Walker explores the linguistic emergence of 'extremism' in political discourse and the potentially damaging generative effect of this language. Taking a new approach which combines critical discourse analysis with critical realism, this book shows how the fear of being labelled an 'extremist' has resulted in counter-terrorism strategies which actually undermine moderating mechanisms in a democracy. Analysing the generative mechanisms by which the language of counter-extremism might actually promote violence, Faure Walker explains how understanding the potentially oppressive properties of language can help us transcend them. The result is an immanent critique of the most pernicious aspects of the global War on Terror, those that are embedded in our everyday language and political discourse. Drawing on the author's own successful lobbying activities against counter-extremism, this book presents a model for how discourse analysis and critical realism can and should engage with the political in order to effect meaningful change.
This book constitutes the proceedings of the 17th China National Conference on Computational Linguistics, CCL 2018, and the 6th International Symposium on Natural Language Processing Based on Naturally Annotated Big Data, NLP-NABD 2018, held in Changsha, China, in October 2018. The 33 full papers presented in this volume were carefully reviewed and selected from 84 submissions. They are organized in topical sections named: Semantics; machine translation; knowledge graph and information extraction; linguistic resource annotation and evaluation; information retrieval and question answering; text classification and summarization; social computing and sentiment analysis; and NLP applications.
The two-volume set LNCS 10761 and 10762 constitutes the revised selected papers from the CICLing 2017 conference, which took place in Budapest, Hungary, in April 2017. The total of 90 papers presented in the two volumes was carefully reviewed and selected from numerous submissions. In addition, the proceedings contain 4 invited papers. The papers are organized in the following topical sections: Part I: general; morphology and text segmentation; syntax and parsing; word sense disambiguation; reference and coreference resolution; named entity recognition; semantics and text similarity; information extraction; speech recognition; applications to linguistics and the humanities. Part II: sentiment analysis; opinion mining; author profiling and authorship attribution; social network analysis; machine translation; text summarization; information retrieval and text classification; practical applications.
This book presents techniques for audio search, aimed at retrieving information from massive speech databases by using audio query words. The authors examine different features, techniques and evaluation measures attempted by researchers around the world. The topics covered also include available databases, software/tools, patents/copyrights, and different platforms for benchmarking. The content is relevant for developers, academics, and students.
Explanations for sound change have traditionally focused on identifying the inception of change, that is, the identification of perturbations of the speech signal, conditioned by physiological constraints on articulatory and/or auditory mechanisms, which affect the way speech sounds are analyzed by the listener. While this emphasis on identifying the nature of intrinsic variation in speech has provided important insights into the origins of widely attested cross-linguistic sound changes, the nature of phonologization - the transition from intrinsic phonetic variation to extrinsic phonological encoding - remains largely unexplored. This volume showcases the current state of the art in phonologization research, bringing together work by leading scholars in sound change research from different disciplinary and scholarly traditions. The authors investigate the progression of sound change from the perspectives of speech perception, speech production, phonology, sociolinguistics, language acquisition, psycholinguistics, computer science, statistics, and social and cognitive psychology. The book highlights the fruitfulness of collaborative efforts among phonologists and specialists from neighbouring disciplines in seeking unified theoretical explanations for the origins of sound patterns in language, as well as improved syntheses of synchronic and diachronic phonology.
This book provides an overview of a recent and flexible approach to speech synthesis design used to develop the first statistical parametric speech synthesizer for Ibibio, a West African tonal language. The design avoids the inflexibility encountered when modeling tonal features of the language and can be used for other tonal African languages. Mobile use and technological innovations in developing African nations have exploded. With mobile technology, many of the barriers caused by infrastructure issues have vanished. In order to address issues that are unique to African tonal languages, the book uses Ibibio as a model. The text reviews the language's speech characteristics, required for building the front-end components of the design, and proposes a finite state transducer (FST), useful for modelling the language's tonetactics. The statistical parametric approach discussed in the text implements the Hidden Markov Model (HMM) technique, with the goal of creating a generic structure that learns the model from the text itself, and uses the data-driven approach to input specification.
This book features contributions to the XVIIth International Conference "Linguistic and Cultural Studies: Traditions and Innovations" (LKTI 2017), providing insights into theory, research, scientific achievements, and best practices in the fields of pedagogics, linguistics, and language teaching and learning with a particular focus on Siberian perspectives and collaborations between academics from other Russian regions. Covering topics including curriculum development, designing and delivering courses and vocational training, the book is intended for academics working at all levels of education striving to improve educational environments in their context - school, tertiary education and continuous professional development.
Stress and accent are central, organizing features of grammar, but their precise nature continues to be a source of mystery and wonder. These issues come to the forefront in acquisition, where the tension between the abstract mental representations and the concrete physical manifestations of stress and accent is deeply reflected. Understanding the nature of the representations of stress and accent patterns, and understanding how stress and accent patterns are learned, informs all aspects of linguistic theory and language acquisition. These two themes - representation and acquisition - form the organizational backbone of this book. Each is addressed along different dimensions of stress and accent, including the position of an accent or stress within various prosodic domains and the acoustic dimensions along which the pronunciation of stress and accent may vary. The research presented in the book is multidisciplinary, encompassing theoretical linguistics, speech science, and computational and experimental research.
This book constitutes the refereed proceedings of the 13th International Conference on Computational Processing of the Portuguese Language, PROPOR 2018, held in Canela, RS, Brazil, in September 2018. The 42 full papers, 3 short papers and 4 other papers presented in this volume were carefully reviewed and selected from 92 submissions. The papers are organized in topical sections named: Corpus Linguistics, Information Extraction, Language Applications, Language Resources, Sentiment Analysis and Opinion Mining, Speech Processing, and Syntax and Parsing.
This book constitutes the refereed proceedings of the 15th International Conference of the Pacific Association for Computational Linguistics, PACLING 2017, held in Yangon, Myanmar, in August 2017. The 28 revised full papers presented were carefully reviewed and selected from 50 submissions. The papers are organized in topical sections on semantics and semantic analysis; statistical machine translation; corpora and corpus-based language processing; syntax and syntactic analysis; document classification; information extraction and text mining; text summarization; text and message understanding; automatic speech recognition; spoken language and dialogue; speech pathology; speech analysis.
This book constitutes the thoroughly refereed post-workshop proceedings of the 18th Chinese Lexical Semantics Workshop, CLSW 2017, held in Leshan, China, in May 2017. The 48 full papers and 5 short papers included in this volume were carefully reviewed and selected from 176 submissions. They are organized in the following topical sections: lexical semantics; applications of natural language processing; lexical resources; and corpus linguistics.
This book constitutes the refereed proceedings of the 11th International Conference, NooJ 2017, held in Kenitra and Rabat, Morocco, in May 2017. The 20 revised full papers presented in this volume were carefully reviewed and selected from 56 submissions. NooJ is a linguistic development environment that provides tools for linguists to construct linguistic resources that formalize a large gamut of linguistic phenomena: typography, orthography, lexicons for simple words, multiword units and discontinuous expressions, inflectional and derivational morphology, local, structural and transformational syntax, and semantics. The papers in this volume are organized in topical sections on vocabulary and morphology; syntactic analysis; natural language processing applications; NooJ's future.
The aim of this book is to advocate and promote network models of linguistic systems that are both based on thorough mathematical models and substantiated in terms of linguistics. In this way, the book contributes first steps towards establishing a statistical network theory as a theoretical basis of linguistic network analysis at the border of the natural sciences and the humanities. This book addresses researchers who want to become familiar with theoretical developments, computational models and their empirical evaluation in the field of complex linguistic networks. It is intended for all those who are interested in statistical models of linguistic systems from the point of view of network research. This includes all relevant areas of linguistics, ranging from phonological, morphological and lexical networks on the one hand to syntactic, semantic and pragmatic networks on the other. In this sense, the volume concerns readers from many disciplines such as physics, linguistics, computer science and information science. It may also be of interest for the upcoming area of systems biology, with which the chapters collected here share the view on systems from the point of view of network analysis.
This book introduces formal semantics techniques for a natural language processing audience. Methods discussed involve: (i) the denotational techniques used in model-theoretic semantics, which make it possible to determine whether a linguistic expression is true or false with respect to some model of the way things happen to be; and (ii) stages of interpretation, i.e., ways to arrive at meanings by evaluating and converting source linguistic expressions, possibly with respect to contexts, into output (logical) forms that could be used with (i). The book demonstrates that the methods allow wide coverage without compromising the quality of semantic analysis. Access to unrestricted, robust and accurate semantic analysis is widely regarded as an essential component for improving natural language processing tasks, such as: recognizing textual entailment, information extraction, summarization, automatic reply, and machine translation.
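The denotational idea described above can be made concrete in a few lines: a model pairs a domain of entities with an interpretation function mapping predicates to the sets they denote, and a sentence is true or false relative to that model. The following minimal sketch is purely illustrative; the model, entities, and predicate names are all invented, not drawn from the book.

```python
# A toy model for model-theoretic evaluation: predicate names map to the
# sets of entities they denote. All entities and predicates are hypothetical.
model = {
    "student": {"ann", "bob"},
    "sleeps": {"ann"},
}

def some(restrictor, scope):
    """True iff at least one entity is in both denotations ('some student sleeps')."""
    return bool(model[restrictor] & model[scope])

def every(restrictor, scope):
    """True iff the restrictor's denotation is a subset of the scope's ('every student sleeps')."""
    return model[restrictor] <= model[scope]

print(some("student", "sleeps"))   # True: "ann" is a student who sleeps
print(every("student", "sleeps"))  # False: "bob" is a student who does not sleep
```

Evaluating a logical form against such a model is step (ii) of the pipeline the blurb describes: source expressions are first converted into forms like `every("student", "sleeps")`, which the model then makes true or false.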
The contributions in this volume focus on the Bayesian interpretation of natural languages, which is widely used in areas of artificial intelligence, cognitive science, and computational linguistics. This is the first volume to take up topics in Bayesian Natural Language Interpretation and make proposals based on information theory, probability theory, and related fields. The methodologies offered here extend to the target semantic and pragmatic analyses of computational natural language interpretation. Bayesian approaches to natural language semantics and pragmatics are based on methods from signal processing and the causal Bayesian models pioneered especially by Pearl. In signal processing, the Bayesian method finds the most probable interpretation by finding the one that maximizes the product of the prior probability and the likelihood of the interpretation. It thus stresses the importance of a production model for interpretation, as in Grice's contributions to pragmatics or in interpretation by abduction.
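The recipe named above, choosing the interpretation that maximizes prior times likelihood, is just a maximum-a-posteriori (MAP) selection. A toy sketch, with interpretations and probabilities invented purely for illustration:

```python
# Toy MAP interpretation choice: pick the interpretation i maximizing
# P(i) * P(utterance | i). All values below are invented for illustration.
prior = {"literal": 0.7, "ironic": 0.3}        # P(interpretation)
likelihood = {"literal": 0.2, "ironic": 0.9}   # P(utterance | interpretation)

def map_interpretation(prior, likelihood):
    """Return the interpretation with the highest prior * likelihood product."""
    return max(prior, key=lambda i: prior[i] * likelihood[i])

print(map_interpretation(prior, likelihood))  # "ironic": 0.3*0.9 > 0.7*0.2
```

Here a low-prior reading wins because the utterance is much more likely under it, which is exactly why the blurb stresses the role of a production model (the likelihood term) in interpretation.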
This book offers the first detailed, comprehensible scientific presentation of Confabulation Theory, addressing a pressing scientific question: How does brain information processing, or cognition, work? With only elementary mathematics as a prerequisite, this book will prove accessible to technologists, scientists, and the educated public.
This book explores the direct relation of modern CALL (Computer-Assisted Language Learning) to aspects of natural language processing for theoretical and practical applications, and the worldwide demand for formal language education and training that focuses on restricted or specialized professional domains. It is unique in its broad-based, state-of-the-art coverage of current knowledge and research in the interrelated fields of computer-based learning and teaching and the processing of specialized linguistic domains. The articles in this book offer insights on or analyses of the current state and future directions of many recent key concepts regarding the application of computers to natural languages, such as authenticity, personalization, normalization, and evaluation. Other articles present fundamental research on major techniques, strategies and methodologies that are currently the focus of international language research projects, both of a theoretical and an applied nature.
The volume brings together papers emerging from the GlobE conference (University of Warsaw). The authors explore major topics in Discourse Studies, offering insights into the field's theoretical foundations and discussing the results of its empirical applications. The book integrates different lines of research in Discourse Studies as undertaken at academic centres Europe-wide and beyond. In this diversity, the editors identify certain dominant lines of study, including (new) media discourse, political discourse in the age of social/digital media, or professional discourse in globalized workplace contexts. At the same time, the volume shows that Discourse Studies not only investigate emerging language phenomena, but also critically reassess research issues formerly addressed.
This book is an excellent introduction to multiword expressions. It provides a unique, comprehensive and up-to-date overview of this exciting topic in computational linguistics. The first part describes the diversity and richness of multiword expressions, including many examples in several languages. These constructions are not only complex and arbitrary, but also much more frequent than one would guess, making them a real nightmare for natural language processing applications. The second part introduces a new generic framework for automatic acquisition of multiword expressions from texts. Furthermore, it describes the accompanying free software tool, the mwetoolkit, which comes in handy when looking for expressions in texts (regardless of the language). Evaluation is greatly emphasized, underlining the fact that results depend on parameters like corpus size, language, MWE type, etc. The last part contains solid experimental results and evaluates the mwetoolkit, demonstrating its usefulness for computer-assisted lexicography and machine translation. This is the first book to cover the whole pipeline of multiword expression acquisition in a single volume. It addresses the needs of students and researchers in computational and theoretical linguistics, cognitive sciences, artificial intelligence and computer science. Its good balance between computational and linguistic views makes it the perfect starting point for anyone interested in multiword expressions, language and text processing in general.
This book serves as a basic reference for those interested in the application of metaheuristics to speech enhancement. The major goal of the book is to explain the basic concepts of optimization methods and their use in heuristic optimization in speech enhancement to scientists, practicing engineers, and academic researchers in speech processing. The authors discuss why it has been a challenging problem for researchers to develop new enhancement algorithms that improve the quality and intelligibility of degraded speech. They apply powerful optimization methods to speech enhancement that can help to solve noise reduction problems. Readers will be able to understand the fundamentals of speech processing as well as the optimization techniques, and how speech enhancement algorithms are implemented by utilizing optimization methods, and will be given the tools to develop new algorithms. The authors also provide a comprehensive literature survey regarding the topic.
The book collects contributions from well-established researchers at the interface between language and cognition. It provides an overview of the latest insights into this interdisciplinary field from the perspectives of natural language processing, computer science, psycholinguistics and cognitive science. One of the pioneers in cognitive natural language processing is Michael Zock, to whom this volume is dedicated. The structure of the book reflects his main research interests: lexicon and lexical analysis, semantics, language and speech generation, reading and writing technologies, language resources and language engineering. The book is a valuable reference work and authoritative information source, giving an overview on the field and describing the state of the art as well as future developments. It is intended for researchers and advanced students interested in the subject.
To date, the relation between multilingualism and the Semantic Web has not yet received enough attention in the research community. One major challenge for the Semantic Web community is to develop architectures, frameworks and systems that can help in overcoming national and language barriers, facilitating equal access to information produced in different cultures and languages. As such, this volume aims at documenting the state-of-the-art with regard to the vision of a Multilingual Semantic Web, in which semantic information will be accessible in and across multiple languages. The Multilingual Semantic Web as envisioned in this volume will support the following functionalities: (1) responding to information needs in any language with regard to semantically structured data available on the Semantic Web and Linked Open Data (LOD) cloud, (2) verbalizing and accessing semantically structured data, ontologies or other conceptualizations in multiple languages, (3) harmonizing, integrating, aggregating, comparing and repurposing semantically structured data across languages and (4) aligning and reconciling ontologies or other conceptualizations across languages. The volume is divided into three main sections: Principles, Methods and Applications. The section on "Principles" discusses models, architectures and methodologies that enrich the current Semantic Web architecture with features necessary to handle multiple languages. The section on "Methods" describes algorithms and approaches for solving key issues related to the construction of the Multilingual Semantic Web. The section on "Applications" describes the use of Multilingual Semantic Web based approaches in the context of several application domains. 
This volume is essential reading for all academic and industrial researchers who want to embark on this new research field at the intersection of various research topics, including the Semantic Web, Linked Data, natural language processing, computational linguistics, terminology and information retrieval. It will also be of great interest to practitioners who are interested in re-examining their existing infrastructure and methodologies for handling multiple languages in Web applications or information retrieval systems.
The ubiquity of mobile devices has opened the way to extending learning environments far beyond the constraints of the traditional foreign language classroom. This book seeks to advance the knowledge about effective learning and teaching of English for Medical Purposes supported by mobile environments. The author investigates the effectiveness of the use of a mobile version of a flashcard spaced-repetition learning platform. In conclusion, she presents core principles of an educational solution that supports the ongoing and situated learning of English for Medical Purposes by designing a mobile spaced-repetition medical vocabulary tutor ("Mobile Medical English Companion").