This book provides linguists with a clear, critical, and comprehensive overview of theoretical and experimental work on information structure. Leading researchers survey the main theories of information structure in syntax, phonology, and semantics as well as perspectives from psycholinguistics and other relevant fields. Following the editors' introduction the book is divided into four parts. The first, on theories of and theoretical perspectives on information structure, includes chapters on focus, topic, and givenness. Part 2 covers a range of current issues in the field, including quantification, dislocation, and intonation, while Part 3 is concerned with experimental approaches to information structure, including language processing and acquisition. The final part contains a series of linguistic case studies drawn from a wide variety of the world's language families. This volume will be the standard guide to current work in information structure and a major point of departure for future research.
Argumentation mining is an application of natural language processing (NLP) that emerged a few years ago and has recently enjoyed considerable popularity, as demonstrated by a series of international workshops and by a rising number of publications at the major conferences and journals of the field. Its goals are to identify argumentation in text or dialogue; to construct representations of the constellation of claims, supporting and attacking moves (in different levels of detail); and to characterize the patterns of reasoning that appear to license the argumentation. Furthermore, recent work also addresses the difficult tasks of evaluating the persuasiveness and quality of arguments. Some of the linguistic genres that are being studied include legal text, student essays, political discourse and debate, newspaper editorials, scientific writing, and others. The book starts with a discussion of the linguistic perspective, characteristics of argumentative language, and their relationship to certain other notions such as subjectivity. Besides the connection to linguistics, argumentation has for a long time been a topic in Artificial Intelligence, where the focus is on devising adequate representations and reasoning formalisms that capture the properties of argumentative exchange. It is generally very difficult to connect the two realms of reasoning and text analysis, but we are convinced that it should be attempted in the long term, and therefore we also touch upon some fundamentals of reasoning approaches. Then the book turns to its focus, the computational side of mining argumentation in text. We first introduce a number of annotated corpora that have been used in the research. From the NLP perspective, argumentation mining shares subtasks with research fields such as subjectivity and sentiment analysis, semantic relation extraction, and discourse parsing. Therefore, many technical approaches are being borrowed from those (and other) fields. 
We break argumentation mining into a series of subtasks, starting with the preparatory steps of classifying text as argumentative (or not) and segmenting it into elementary units. Then, central steps are the automatic identification of claims, and finding statements that support or oppose the claim. For certain applications, it is also of interest to compute a full structure of an argumentative constellation of statements. Next, we discuss a few steps that try to 'dig deeper': to infer the underlying reasoning pattern for a textual argument, to reconstruct unstated premises (so-called 'enthymemes'), and to evaluate the quality of the argumentation. We also take a brief look at 'the other side' of mining, i.e., the generation or synthesis of argumentative text. The book finishes with a summary of the argumentation mining tasks, a sketch of potential applications, and a--necessarily subjective--outlook for the field.
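The subtask pipeline outlined above (segmenting text into elementary units, classifying them as argumentative, then assigning roles such as claim, support, or attack) can be sketched in a few lines. This is a deliberately naive illustration, not a method from the book: the cue-word lists and the `mine` pipeline below are invented placeholders standing in for the learned models the field actually uses.

```python
# Hypothetical, minimal sketch of the argumentation-mining pipeline
# described above. The cue-word heuristics are illustrative
# placeholders, not the techniques surveyed in the book.
import re

SUPPORT_CUES = {"because", "since", "moreover"}
ATTACK_CUES = {"however", "but", "although"}
CLAIM_CUES = {"should", "must", "therefore", "thus"}
ARGUMENT_CUES = SUPPORT_CUES | ATTACK_CUES | CLAIM_CUES

def tokens(unit):
    """Lowercased word set of one elementary unit."""
    return set(re.findall(r"[a-z]+", unit.lower()))

def segment(text):
    """Preparatory step: split text into elementary units (here: sentences)."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def is_argumentative(unit):
    """Preparatory step: classify a unit as argumentative or not."""
    return bool(tokens(unit) & ARGUMENT_CUES)

def classify_role(unit):
    """Central step: label an argumentative unit as claim, support, or attack."""
    t = tokens(unit)
    if t & ATTACK_CUES:
        return "attack"
    if t & SUPPORT_CUES:
        return "support"
    return "claim"

def mine(text):
    """Run the pipeline: segment, filter non-argumentative units, assign roles."""
    return [(u, classify_role(u)) for u in segment(text) if is_argumentative(u)]
```

Running `mine("We should ban X. It is harmful because it pollutes. However, some disagree.")` labels the three sentences claim, support, and attack respectively; real systems replace each heuristic with a trained classifier.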
Recent developments in artificial intelligence, especially neural network and deep learning technology, have led to rapidly improving performance in voice assistants such as Siri and Alexa. Over the next few years, capability will continue to improve and become increasingly personalised. Today's voice assistants will evolve into virtual personal assistants firmly embedded within our everyday lives. Told through the view of a fictitious personal assistant called Cyba, this book provides an accessible but detailed overview of how a conversational voice assistant works, especially how it understands spoken language, manages conversations, answers questions and generates responses. Cyba explains through examples and diagrams the neural network technology underlying speech recognition and synthesis, natural language understanding, knowledge representation, conversation management, language translation and chatbot technology. Cyba also explores the implications of this rapidly evolving technology for security, privacy and bias, and gives a glimpse of future developments. Cyba's website can be found at HeyCyba.com.
The topic of this book is the theoretical foundations of a theory, LSLT (Lexical Semantic Language Theory), and its implementation in the system for text analysis and understanding called GETARUN, developed at the University of Venice, Laboratory of Computational Linguistics, Department of Language Sciences. LSLT encompasses a psycholinguistic theory of the way the language faculty works; a grammatical theory of the way in which sentences are analysed and generated (for this we will be using Lexical-Functional Grammar); a semantic theory of the way in which meaning is encoded and expressed in utterances (for this we will be using Situation Semantics); and a parsing theory of the way in which components of the theory interact in a common architecture to produce the language representation needed to be eventually spoken aloud or interpreted by the phonetic/acoustic language interface. LSLT will then be put to use to show how discourse relations are mapped automatically from text using the tools available in the four sub-theories; in particular, we will focus on Causal Relations, showing how the various sub-theories contribute to addressing different types of causality.
This book provides a computational re-evaluation of the genealogical relations between the early Germanic families and of their diversification from their most recent common ancestor, Proto-Germanic. It also proposes a novel computational approach to the problem of linguistic diversification more broadly, using agent-based simulation of speech communities over time. This new method is presented alongside more traditional phylogenetic inference, and the respective results are compared and evaluated. Frederik Hartmann demonstrates that the traditional and novel methods each capture different aspects of this highly complex real-world process; crucially, the new computational approach proposed here offers a new way of investigating the wave-like properties of language relatedness that were previously less accessible. As well as validating the findings of earlier research, the results of this study also generate new insights and shed light on much-debated issues in the field. The conclusion is that the break-up of Germanic should be understood as a gradual disintegration process in which tree-like branching effects are rare.
In this brief, the authors discuss recently explored spectral (sub-segmental and pitch-synchronous) and prosodic (global and local features at word and syllable levels in different parts of the utterance) features for discerning emotions in a robust manner. The authors also delve into the complementary evidence obtained from excitation source, vocal tract system and prosodic features for the purpose of enhancing emotion recognition performance. Features based on speaking-rate characteristics are explored with the help of multi-stage and hybrid models for further improving emotion recognition performance. The proposed spectral and prosodic features are evaluated on a real-life emotional speech corpus.
In everyday communication, Europe's citizens, business partners and politicians are inevitably confronted with language barriers. Language technology has the potential to overcome these barriers and to provide innovative interfaces to technologies and knowledge. This document presents a Strategic Research Agenda for Multilingual Europe 2020. The agenda was prepared by META-NET, a European Network of Excellence. META-NET consists of 60 research centres in 34 countries, who cooperate with stakeholders from economy, government agencies, research organisations, non-governmental organisations, language communities and European universities. META-NET's vision is high-quality language technology for all European languages. "The research carried out in the area of language technology is of utmost importance for the consolidation of Portuguese as a language of global communication in the information society." - Dr. Pedro Passos Coelho (Prime-Minister of Portugal) "It is imperative that language technologies for Slovene are developed systematically if we want Slovene to flourish also in the future digital world." - Dr. Danilo Turk (President of the Republic of Slovenia) "For such small languages like Latvian keeping up with the ever increasing pace of time and technological development is crucial. The only way to ensure future existence of our language is to provide its users with equal opportunities as the users of larger languages enjoy. Therefore being on the forefront of modern technologies is our opportunity." - Valdis Dombrovskis (Prime Minister of Latvia) "Europe's inherent multilingualism and our scientific expertise are the perfect prerequisites for significantly advancing the challenge that language technology poses. META-NET opens up new opportunities for the development of ubiquitous multilingual technologies." - Prof. Dr. Annette Schavan (German Minister of Education and Research)
Metadata such as the hashtag is an important dimension of social media communication. Despite its important role in practices such as curating, tagging, and searching content, there has been little research into how meanings are made with social metadata. This book considers how hashtags have expanded their reach from an information-locating resource to an interpersonal resource for coordinating social relationships and expressing solidarity, affinity, and affiliation. It adopts a social semiotic perspective to investigate the communicative functions of hashtags in relation to both language and images. This book is a follow-up to Zappavigna's 2012 model of ambient affiliation, providing an extended analytical framework for exploring how affiliation occurs, bond by bond, in online discourse. It focuses in particular on the communing function of hashtags in metacommentary and ridicule, using recent Twitter discourse about US President Donald Trump as a case study. It is essential reading for researchers as well as undergraduates studying social media on any academic course.
This book presents a richly illustrated, hands-on discussion of one of the fastest growing fields in linguistics today. The authors address key methodological issues in corpus linguistics, such as collocations, keywords and the categorization of concordance lines. They show how these topics can be explored step-by-step with BNCweb, a user-friendly web-based tool that supports sophisticated analyses of the 100-million-word British National Corpus. Indeed, the BNC and BNCweb have been described by Geoffrey Leech as «an unparalleled combination of facilities for finding out about the English language of the present day» (Foreword). The book contains tasks and exercises, and is suitable for undergraduates, postgraduates and experienced corpus users alike.
This book is the first dedicated to linguistic parsing - the processing of natural language according to the rules of a formal grammar - in the Minimalist Program. While Minimalism has been at the forefront of generative grammar for several decades, it often remains inaccessible to computer scientists and others in adjacent fields. This volume makes connections with standard computational architectures, provides efficient implementations of some fundamental minimalist accounts of syntax, explores implementations of recent theoretical proposals, and examines correlations between posited structures and measures of neural activity during human language comprehension. These studies will appeal to graduate students and researchers in formal syntax, computational linguistics, psycholinguistics, and computer science.
This handbook offers a comprehensive overview of the field of Persian linguistics, discusses its development, and captures critical accounts of cutting edge research within its major subfields, as well as outlining current debates and suggesting productive lines of future research. Leading scholars in the major subfields of Persian linguistics examine a range of topics split into six thematic parts. Following a detailed introduction from the editors, the volume begins by placing Persian in its historical and typological context in Part I. Chapters in Part II examine topics relating to phonetics and phonology, while Part III looks at approaches to and features of Persian syntax. The fourth part of the volume explores morphology and lexicography, as well as the work of the Academy of Persian Language and Literature. Part V, language and people, covers topics such as language contact and teaching Persian as a foreign language, while the final part examines psycho-, neuro-, and computational linguistics. The volume will be an essential resource for all scholars with an interest in Persian language and linguistics.
This comprehensive reference work provides an overview of the concepts, methodologies, and applications in computational linguistics and natural language processing (NLP). * Features contributions by the top researchers in the field, reflecting the work that is driving the discipline forward * Includes an introduction to the major theoretical issues in these fields, as well as the central engineering applications that the work has produced * Presents the major developments in an accessible way, explaining the close connection between scientific understanding of the computational properties of natural language and the creation of effective language technologies * Serves as an invaluable state-of-the-art reference source for computational linguists and software engineers developing NLP applications in industrial research and development labs of software companies
This book provides a comprehensive account of the role of recursion in language in two distinct but interconnected ways. First, David J. Lobina examines how recursion applies at different levels within a full description of natural language. Specifically, he identifies and evaluates recursion as: a) a central property of the computational system underlying the faculty of language; b) a possible feature of the derivations yielded by this computational system; c) a global characteristic of the structures generated by the language faculty; and d) a probable factor in the parsing operations employed during the processing of recursive structures. Second, the volume orders these different levels into a tripartite explanatory framework. According to this framework, the investigation of any particular cognitive domain must begin by first outlining what sort of mechanical procedure underlies the relevant capacity (including what sort of structures it generates). Only then, the author argues, can we properly investigate its implementation, both at the level of abstract computations typical of competence-level analyses, and at the level of the real-time processing of behaviour.
This handbook compares the main analytic frameworks and methods of contemporary linguistics. It offers a unique overview of linguistic theory, revealing the common concerns of competing approaches. By showing their current and potential applications it provides the means by which linguists and others can judge what are the most useful models for the task in hand. Distinguished scholars from all over the world explain the rationale and aims of over thirty explanatory approaches to the description, analysis, and understanding of language. Each chapter considers the main goals of the model; the relation it proposes between lexicon, syntax, semantics, pragmatics, and phonology; the way it defines the interactions between cognition and grammar; what it counts as evidence; and how it explains linguistic change and structure. The Oxford Handbook of Linguistic Analysis offers an indispensable guide for everyone researching any aspect of language including those in linguistics, comparative philology, cognitive science, developmental psychology, computational science, and artificial intelligence. This second edition has been updated to include seven new chapters looking at linguistic units in language acquisition, conversation analysis, neurolinguistics, experimental phonetics, phonological analysis, experimental semantics, and distributional typology.
With a machine learning approach and less focus on linguistic details, this gentle introduction to natural language processing develops fundamental mathematical and deep learning models for NLP under a unified framework. NLP problems are systematically organised by their machine learning nature, including classification, sequence labelling, and sequence-to-sequence problems. Topics covered include statistical machine learning and deep learning models, text classification and structured prediction models, generative and discriminative models, supervised and unsupervised learning with latent variables, neural networks, and transition-based methods. Rich connections are drawn between concepts throughout the book, equipping students with the tools needed to establish a deep understanding of NLP solutions, adapt existing models, and confidently develop innovative models of their own. Featuring a host of examples, intuition, and end of chapter exercises, plus sample code available as an online resource, this textbook is an invaluable tool for the upper undergraduate and graduate student.
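The framing above, treating an NLP problem by its machine learning nature, can be made concrete with the simplest case, text classification. The toy Naive Bayes model below is a generic illustration of that framing, not code from the book; the tiny dataset and labels are invented for the example.

```python
# A minimal, self-contained sketch of text classification as a
# machine learning problem: a generative Naive Bayes model with
# add-one smoothing. Dataset and class names are invented.
import math
from collections import Counter, defaultdict

def train(examples):
    """examples: list of (text, label). Returns (priors, word counts, vocab)."""
    label_counts = Counter(label for _, label in examples)
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in examples:
        for w in text.lower().split():
            word_counts[label][w] += 1
            vocab.add(w)
    return label_counts, word_counts, vocab

def predict(model, text):
    """Pick the label maximising log P(label) + sum of log P(word | label)."""
    label_counts, word_counts, vocab = model
    total = sum(label_counts.values())
    best, best_lp = None, float("-inf")
    for label, n in label_counts.items():
        lp = math.log(n / total)  # log prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.lower().split():
            lp += math.log((word_counts[label][w] + 1) / denom)  # smoothed likelihood
        if lp > best_lp:
            best, best_lp = label, lp
    return best

model = train([
    ("great movie loved it", "pos"),
    ("terrible movie hated it", "neg"),
    ("loved the acting great fun", "pos"),
    ("hated the plot terrible pacing", "neg"),
])
```

With this model, `predict(model, "loved it great")` returns `"pos"`; sequence labelling and sequence-to-sequence problems generalise the same train/predict pattern to structured outputs.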
This book is about a new approach in the field of computational linguistics related to the idea of constructing n-grams in a non-linear manner, whereas the traditional approach uses data from the surface structure of texts, i.e., the linear structure. In this book, we propose and systematize the concept of syntactic n-grams, which allows syntactic information to be used within automatic text processing methods related to classification or clustering. It is a very interesting example of the application of linguistic information in automatic (computational) methods. Roughly speaking, the suggestion is to follow syntactic trees and construct n-grams based on paths in these trees. There are several types of non-linear n-grams; future work should determine which types of n-grams are more useful in which natural language processing (NLP) tasks. This book is intended for specialists in the field of computational linguistics. However, we made an effort to explain in a clear manner how to use n-grams; we provide a large number of examples, and therefore we believe that the book is also useful for graduate students who already have some background in the field.
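The core idea, following paths in a syntactic tree rather than the linear word order, can be sketched directly. The helper below and its toy dependency tree are invented for illustration; they are not the book's implementation, and real syntactic n-grams are built over parser output.

```python
# A minimal sketch of syntactic n-grams: collect n-grams along
# head-to-dependent paths in a tree instead of along the sentence's
# linear order. The toy dependency tree is an invented example.

def syntactic_ngrams(tree, root, n):
    """Collect all length-n word sequences along root-to-descendant paths."""
    ngrams = []
    def walk(node, path):
        path = path + [node]
        if len(path) >= n:
            ngrams.append(tuple(path[-n:]))
        for child in tree.get(node, []):
            walk(child, path)
    walk(root, [])
    return ngrams

# Dependency tree for "the cat chased a small mouse":
# chased -> {cat, mouse}; cat -> the; mouse -> {a, small}
tree = {
    "chased": ["cat", "mouse"],
    "cat": ["the"],
    "mouse": ["a", "small"],
}
```

Here `syntactic_ngrams(tree, "chased", 2)` yields pairs such as `("chased", "mouse")` and `("mouse", "small")`, while the linear bigram `("small", "mouse")` never appears, which is exactly the contrast between surface-structure and syntactic n-grams.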
This book is an advanced introduction to semantics that presents this crucial component of human language through the lens of the 'Meaning-Text' theory - an approach that treats linguistic knowledge as a huge inventory of correspondences between thought and speech. Formally, semantics is viewed as an organized set of rules that connect a representation of meaning (Semantic Representation) to a representation of the sentence (Deep-Syntactic Representation). The approach is particularly interesting for computer assisted language learning, natural language processing and computational lexicography, as our linguistic rules easily lend themselves to formalization and computer applications. The model combines abstract theoretical constructions with numerous linguistic descriptions, as well as multiple practice exercises that provide a solid hands-on approach to learning how to describe natural language semantics.
This white paper is part of a series that promotes knowledge about language technology and its potential. It addresses educators, journalists, politicians, language communities and others. The availability and use of language technology in Europe varies between languages. Consequently, the actions that are required to further support research and development of language technologies also differ for each language. The required actions depend on many factors, such as the complexity of a given language and the size of its community. META-NET, a Network of Excellence funded by the European Commission, has conducted an analysis of current language resources and technologies. This analysis focused on the 23 official European languages as well as other important national and regional languages in Europe. The results of this analysis suggest that there are many significant research gaps for each language. A more detailed expert analysis and assessment of the current situation will help maximise the impact of additional research and minimise any risks. META-NET consists of 54 research centres from 33 countries that are working with stakeholders from commercial businesses, government agencies, industry, research organisations, software companies, technology providers and European universities. Together, they are creating a common technology vision while developing a strategic research agenda that shows how language technology applications can address any research gaps by 2020.
This is a down-to-earth, 'how to do it' textbook on the making of dictionaries. Written by professional lexicographers with over seventy years' experience between them, the book presents a step-by-step course for the training of lexicographers in all settings, including publishing houses, colleges, and universities world-wide, and for the teaching of lexicography as an academic discipline. It takes readers through the processes of designing, collecting, and annotating a corpus of texts; shows how to analyse the data in order to extract the relevant information; and demonstrates how these findings are drawn together in the semantic, grammatical, and pedagogic components that make up an entry. The authors explain the relevance and application of recent linguistic theories, such as prototype theory and frame semantics, and describe the role of software in the manipulation of data and the compilation of entries. They provide practical exercises at every stage.
This important contribution to the Minimalist Program offers a comprehensive theory of locality and new insights into phrase structure and syntactic cartography. It unifies central components of the grammar and increases the symmetry in syntax. Its central hypothesis has broad empirical application and at the same time reinforces the central premise of minimalism that language is an optimal system.
Dynamical Grammar explores the consequences for language acquisition, language evolution, and linguistic theory of taking the underlying architecture of the language faculty to be that of a complex adaptive dynamical system. It contains the first results of a new and complex model of language acquisition which the authors have developed to measure how far language input is reflected in language output and thereby get a better idea of just how far the human language faculty is hard-wired.
This textbook approaches second language acquisition from the perspective of generative linguistics. Roumyana Slabakova reviews and discusses paradigms and findings from the last thirty years of research in the field, focussing in particular on how the second or additional language is represented in the mind and how it is used in communication. The adoption and analysis of a specific model of acquisition, the Bottleneck Hypothesis, provides a unifying perspective. The book assumes some non-technical knowledge of linguistics, but important concepts are clearly introduced and defined throughout, making it a valuable resource not only for undergraduate and graduate students of linguistics, but also for researchers in cognitive science and language teachers.
In this book, application-related studies for acoustic biomedical sensors are covered in depth. The book features an array of different biomedical signals, including acoustic biomedical signals as well as thermal, magnetic, and optical biomedical signals to support healthcare. It employs signal processing approaches, such as filtering, Fourier transform, spectral estimation, and wavelet transform. The book presents applications of acoustic biomedical sensors and bio-signal processing for prediction, detection, and monitoring of some diseases from phonocardiogram (PCG) signal analysis. Several challenges and future perspectives related to acoustic sensor applications are highlighted. This book supports the engineers, researchers, designers, and physicians in several interdisciplinary domains that support healthcare.
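One of the processing steps named above, spectral estimation via the Fourier transform, can be illustrated in isolation. The sketch below is not from the book: it applies a plain discrete Fourier transform to an invented synthetic tone to locate its dominant frequency, in the same spirit as finding characteristic frequency components in a PCG signal.

```python
# Illustrative sketch of spectral estimation: a plain DFT locates
# the dominant frequency of a synthetic signal. Signal parameters
# are invented for the example.
import cmath, math

def dft_magnitudes(signal):
    """Magnitude spectrum of a real signal via the discrete Fourier transform."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]  # keep the non-redundant half

def dominant_frequency(signal, sample_rate):
    """Frequency (in Hz) of the largest-magnitude bin, ignoring DC."""
    mags = dft_magnitudes(signal)
    k = max(range(1, len(mags)), key=mags.__getitem__)
    return k * sample_rate / len(signal)

# One second of a 25 Hz tone sampled at 200 Hz.
rate = 200
sig = [math.sin(2 * math.pi * 25 * t / rate) for t in range(rate)]
```

For this signal `dominant_frequency(sig, rate)` recovers 25.0 Hz; practical systems would use an FFT and windowing, but the estimation principle is the same.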
A landmark in linguistics and cognitive science. Ray Jackendoff proposes a new holistic theory of the relation between the sounds, structure, and meaning of language and their relation to mind and brain. Foundations of Language exhibits the most fundamental new thinking in linguistics since Noam Chomsky's Aspects of the Theory of Syntax in 1965 -- yet is readable, stylish, and accessible to a wide readership. Along the way it provides new insights on the evolution of language, thought, and communication.
This book explains how to create information extraction (IE) applications that are able to tap the vast amount of relevant information available in natural language sources: Internet pages, official documents such as laws and regulations, books and newspapers, and the social web. Readers are introduced to the problem of IE and its current challenges and limitations, supported with examples. The book discusses the need to fill the gap between documents, data, and people, and provides a broad overview of the technology supporting IE. The authors present a generic architecture for developing systems that are able to learn how to extract relevant information from natural language documents, and illustrate how to implement working systems using state-of-the-art and freely available software tools. The book also discusses concrete applications illustrating IE uses. * Provides an overview of state-of-the-art technology in information extraction (IE), discussing achievements and limitations for the software developer and providing references for specialized literature in the area * Presents a comprehensive list of freely available, high quality software for several subtasks of IE and for several natural languages * Describes a generic architecture that can learn how to extract information for a given application domain
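The simplest possible IE component, a pattern-based extractor that pulls typed values out of free text, can show what "extracting relevant information" means concretely. The field names and regular expressions below are invented for illustration; the learning-based architectures the book describes induce such patterns from annotated data rather than hard-coding them.

```python
# A minimal, hypothetical sketch of pattern-based information
# extraction: pull (field, value) pairs out of free text. The field
# names and patterns are invented placeholders.
import re

PATTERNS = {
    "date": r"\b\d{4}-\d{2}-\d{2}\b",
    "money": r"\$\d+(?:\.\d{2})?",
    "email": r"\b[\w.]+@[\w.]+\.\w+\b",
}

def extract(text):
    """Return every (field, matched string) pair found in the text."""
    return [(field, m) for field, pat in PATTERNS.items()
            for m in re.findall(pat, text)]
```

Applied to `"Invoice 2023-04-01: pay $99.50 to billing@example.com"`, `extract` yields a date, an amount, and an email address; a learned IE system plays the same role with patterns acquired from training documents instead of a fixed dictionary.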