This book presents a collection of papers on the issue of focus in its broadest sense. While commonly considered to be related to phenomena such as presupposition and anaphora, focusing is much more widespread, and it is this pervasiveness that this collection addresses. The volume explicitly aims to bring together theoretical, psychological, and descriptive approaches to focus, while maintaining the overall interest in how these notions apply to the larger problem of evolving a formal representation of the semantic aspects of linguistic content. The papers contributed to this volume have been reworked from a selection of original work presented at a conference held at Schloss Wolfsbrunnen in Germany in 1994.
This book provides linguists with a clear, critical, and comprehensive overview of theoretical and experimental work on information structure. Leading researchers survey the main theories of information structure in syntax, phonology, and semantics as well as perspectives from psycholinguistics and other relevant fields. Following the editors' introduction the book is divided into four parts. The first, on theories of and theoretical perspectives on information structure, includes chapters on focus, topic, and givenness. Part 2 covers a range of current issues in the field, including quantification, dislocation, and intonation, while Part 3 is concerned with experimental approaches to information structure, including language processing and acquisition. The final part contains a series of linguistic case studies drawn from a wide variety of the world's language families. This volume will be the standard guide to current work in information structure and a major point of departure for future research.
Originally published in 1997, this book is concerned with human language technology. This technology provides computers with the capability to handle spoken and written language. One major goal is to improve communication between humans and machines. If people can use their own language to access information, work with software applications, and control machinery, the greatest obstacle to the acceptance of new information technology is overcome. Another important goal is to facilitate communication among people. Machines can help to translate texts or spoken input from one human language to another. Programs that assist people in writing by checking orthography, grammar and style are constantly improving. This book was sponsored by Directorate General XIII of the European Union and the Information Science and Engineering Directorate of the National Science Foundation, USA.
Argumentation mining is an application of natural language processing (NLP) that emerged a few years ago and has recently enjoyed considerable popularity, as demonstrated by a series of international workshops and by a rising number of publications at the major conferences and journals of the field. Its goals are to identify argumentation in text or dialogue; to construct representations of the constellation of claims, supporting and attacking moves (at different levels of detail); and to characterize the patterns of reasoning that appear to license the argumentation. Furthermore, recent work addresses the difficult tasks of evaluating the persuasiveness and quality of arguments. Some of the linguistic genres being studied include legal text, student essays, political discourse and debate, newspaper editorials, scientific writing, and others. The book starts with a discussion of the linguistic perspective, the characteristics of argumentative language, and their relationship to certain other notions such as subjectivity. Besides the connection to linguistics, argumentation has long been a topic in Artificial Intelligence, where the focus is on devising adequate representations and reasoning formalisms that capture the properties of argumentative exchange. It is generally very difficult to connect the two realms of reasoning and text analysis, but we are convinced that it should be attempted in the long term, and therefore we also touch upon some fundamentals of reasoning approaches. Then the book turns to its focus, the computational side of mining argumentation in text. We first introduce a number of annotated corpora that have been used in the research. From the NLP perspective, argumentation mining shares subtasks with research fields such as subjectivity and sentiment analysis, semantic relation extraction, and discourse parsing. Therefore, many technical approaches are being borrowed from those (and other) fields. We break argumentation mining into a series of subtasks, starting with the preparatory steps of classifying text as argumentative (or not) and segmenting it into elementary units. The central steps are then the automatic identification of claims and the detection of statements that support or oppose them. For certain applications, it is also of interest to compute a full structure of an argumentative constellation of statements. Next, we discuss a few steps that try to 'dig deeper': to infer the underlying reasoning pattern for a textual argument, to reconstruct unstated premises (so-called 'enthymemes'), and to evaluate the quality of the argumentation. We also take a brief look at 'the other side' of mining, i.e., the generation or synthesis of argumentative text. The book finishes with a summary of the argumentation mining tasks, a sketch of potential applications, and a necessarily subjective outlook for the field.
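The subtask pipeline described in this blurb (segment text into elementary units, filter argumentative units, identify claims, and attach supporting or opposing statements) can be made concrete with a toy example. The Python sketch below is purely illustrative and is not taken from the book; the keyword cues and function names are assumptions standing in for the trained classifiers a real argumentation mining system would use.

```python
# Illustrative sketch of an argumentation mining subtask pipeline.
# Keyword heuristics are placeholders for real classifiers.
import re

SUPPORT_CUES = ("because", "since", "therefore")
ATTACK_CUES = ("however", "but", "although")

def segment(text):
    """Split text into elementary units (here, simply sentences)."""
    return [s.strip() for s in re.split(r"[.!?]\s*", text) if s.strip()]

def is_argumentative(unit):
    """Placeholder classifier: does this unit take part in argumentation?"""
    lowered = unit.lower()
    return "should" in lowered or any(c in lowered for c in SUPPORT_CUES + ATTACK_CUES)

def mine_arguments(text):
    """Very rough claim / support / attack structure over the segmented units."""
    structure = {"claims": [], "support": [], "attack": []}
    for unit in (u for u in segment(text) if is_argumentative(u)):
        lowered = unit.lower()
        if any(c in lowered for c in ATTACK_CUES):
            structure["attack"].append(unit)
        elif any(c in lowered for c in SUPPORT_CUES):
            structure["support"].append(unit)
        else:
            structure["claims"].append(unit)
    return structure

if __name__ == "__main__":
    essay = ("Schools should ban homework. Students learn more in class, "
             "because focused time beats tired evenings. However, some practice "
             "at home reinforces skills. The weather was sunny yesterday.")
    print(mine_arguments(essay))  # non-argumentative sentence is filtered out
```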
This 1992 collection takes the exciting step of examining natural language phenomena from the perspective of both computational linguistics and formal semantics. Computational linguistics has until now been primarily concerned with the construction of computational models for handling the complexities of linguistic form, but has not tackled the questions of representing or computing meaning. Formal semantics, on the other hand, has attempted to account for the relations between forms and meanings, without necessarily attending to computational concerns. The book introduces the reader to the two disciplines and considers the prospects for the more unified and comprehensive computational theory of language that might emerge from their amalgamation. It will be of great interest to those working in the fields of computation, logic, semantics, artificial intelligence, and linguistics generally.
This book deals with a major problem in the study of language: the problem of reference. The ease with which we refer to things in conversation is deceptive. Upon closer scrutiny, it turns out that we hardly ever tell each other explicitly what object we mean, although we expect our interlocutor to discern it. Amichai Kronfeld provides an answer to two questions associated with this: how do we successfully refer, and how can a computer be programmed to achieve this? Beginning with the major theories of reference, Dr Kronfeld provides a consistent philosophical view which is a synthesis of Frege's and Russell's semantic insights with Grice's and Searle's pragmatic theories. This leads to a set of guiding principles, which are then applied to a computational model of referring. The discussion is made accessible to readers from a number of backgrounds: in particular, students and researchers in the areas of computational linguistics, artificial intelligence and the philosophy of language will want to read this book.
A primary problem in the area of natural language processing has been that of semantic analysis. This book aims to look at the semantics of natural languages in context. It presents an approach to the computational processing of English text that combines current theories of knowledge representation and reasoning in Artificial Intelligence with the latest linguistic views of lexical semantics. This results in distinct advantages for relating the semantic analysis of a sentence to its context. A key feature is the clear separation of the lexical entries that represent the domain-specific linguistic information from the semantic interpreter that performs the analysis. The criteria for defining the lexical entries are firmly grounded in current linguistic theories, facilitating integration with existing parsers. This approach has been implemented and tested in Prolog on a domain for physics word problems and full details of the algorithms and code are presented. Semantic Processing for Finite Domains will appeal to postgraduates and researchers in computational linguistics, and to industrial groups specializing in natural language processing.
The topic of this book is the theoretical foundations of LSLT (Lexical Semantic Language Theory) and its implementation in GETARUN, a system for text analysis and understanding developed at the University of Venice, Laboratory of Computational Linguistics, Department of Language Sciences. LSLT encompasses a psycholinguistic theory of the way the language faculty works; a grammatical theory of the way in which sentences are analysed and generated, for which we will be using Lexical-Functional Grammar; a semantic theory of the way in which meaning is encoded and expressed in utterances, for which we will be using Situation Semantics; and a parsing theory of the way in which the components of the theory interact in a common architecture to produce the language representation that is eventually spoken aloud or interpreted by the phonetic/acoustic language interface. LSLT will then be put to use to show how discourse relations are mapped automatically from text using the tools available in the four sub-theories; in particular, we will focus on Causal Relations, showing how the various sub-theories contribute to addressing different types of causality.
This book provides a computational re-evaluation of the genealogical relations between the early Germanic families and of their diversification from their most recent common ancestor, Proto-Germanic. It also proposes a novel computational approach to the problem of linguistic diversification more broadly, using agent-based simulation of speech communities over time. This new method is presented alongside more traditional phylogenetic inference, and the respective results are compared and evaluated. Frederik Hartmann demonstrates that the traditional and novel methods each capture different aspects of this highly complex real-world process; crucially, the new computational approach proposed here offers a new way of investigating the wave-like properties of language relatedness that were previously less accessible. As well as validating the findings of earlier research, the results of this study also generate new insights and shed light on much-debated issues in the field. The conclusion is that the break-up of Germanic should be understood as a gradual disintegration process in which tree-like branching effects are rare.
In this brief, the authors discuss recently explored spectral features (sub-segmental and pitch-synchronous) and prosodic features (global and local features at word and syllable levels in different parts of the utterance) for discerning emotions in a robust manner. The authors also delve into the complementary evidence obtained from excitation source, vocal tract system, and prosodic features for the purpose of enhancing emotion recognition performance. Features based on speaking-rate characteristics are explored with the help of multi-stage and hybrid models to further improve emotion recognition performance. The proposed spectral and prosodic features are evaluated on a real-life emotional speech corpus.
In everyday communication, Europe's citizens, business partners and politicians are inevitably confronted with language barriers. Language technology has the potential to overcome these barriers and to provide innovative interfaces to technologies and knowledge. This document presents a Strategic Research Agenda for Multilingual Europe 2020. The agenda was prepared by META-NET, a European Network of Excellence. META-NET consists of 60 research centres in 34 countries, which cooperate with stakeholders from the economy, government agencies, research organisations, non-governmental organisations, language communities and European universities. META-NET's vision is high-quality language technology for all European languages. "The research carried out in the area of language technology is of utmost importance for the consolidation of Portuguese as a language of global communication in the information society." - Dr. Pedro Passos Coelho (Prime Minister of Portugal) "It is imperative that language technologies for Slovene are developed systematically if we want Slovene to flourish also in the future digital world." - Dr. Danilo Turk (President of the Republic of Slovenia) "For such small languages like Latvian keeping up with the ever increasing pace of time and technological development is crucial. The only way to ensure future existence of our language is to provide its users with equal opportunities as the users of larger languages enjoy. Therefore being on the forefront of modern technologies is our opportunity." - Valdis Dombrovskis (Prime Minister of Latvia) "Europe's inherent multilingualism and our scientific expertise are the perfect prerequisites for significantly advancing the challenge that language technology poses. META-NET opens up new opportunities for the development of ubiquitous multilingual technologies." - Prof. Dr. Annette Schavan (German Minister of Education and Research)
Metadata such as the hashtag is an important dimension of social media communication. Despite its important role in practices such as curating, tagging, and searching content, there has been little research into how meanings are made with social metadata. This book considers how hashtags have expanded their reach from an information-locating resource to an interpersonal resource for coordinating social relationships and expressing solidarity, affinity, and affiliation. It adopts a social semiotic perspective to investigate the communicative functions of hashtags in relation to both language and images. This book is a follow-up to Zappavigna's 2012 model of ambient affiliation, providing an extended analytical framework for exploring how affiliation occurs, bond by bond, in online discourse. It focuses in particular on the communing function of hashtags in metacommentary and ridicule, using recent Twitter discourse about US President Donald Trump as a case study. It is essential reading for researchers as well as undergraduates studying social media on any academic course.
The use of literature in second language teaching has been advocated for a number of years, yet despite this there have only been a limited number of studies which have sought to investigate its effects. Fewer still have focused on its potential effects as a model of spoken language or as a vehicle to develop speaking skills. Drawing upon multiple research studies, this volume fills that gap to explore how literature is used to develop speaking skills in second language learners. The volume is divided into two sections: literature and spoken language and literature and speaking skills. The first section focuses on studies exploring the use of literature to raise awareness of spoken language features, whilst the second investigates its potential as a vehicle to develop speaking skills. Each section contains studies with different designs and in various contexts including China, Japan and the UK. The research designs used mean that the chapters contain clear implications for classroom pedagogy and research in different contexts.
This book is the first dedicated to linguistic parsing - the processing of natural language according to the rules of a formal grammar - in the Minimalist Program. While Minimalism has been at the forefront of generative grammar for several decades, it often remains inaccessible to computer scientists and others in adjacent fields. This volume makes connections with standard computational architectures, provides efficient implementations of some fundamental minimalist accounts of syntax, explores implementations of recent theoretical proposals, and examines correlations between posited structures and measures of neural activity during human language comprehension. These studies will appeal to graduate students and researchers in formal syntax, computational linguistics, psycholinguistics, and computer science.
When we speak, we configure the vocal tract which shapes the visible motions of the face and the patterning of the audible speech acoustics. Similarly, we use these visible and audible behaviors to perceive speech. This book showcases a broad range of research investigating how these two types of signals are used in spoken communication, how they interact, and how they can be used to enhance the realistic synthesis and recognition of audible and visible speech. The volume begins by addressing two important questions about human audiovisual performance: how auditory and visual signals combine to access the mental lexicon and where in the brain this and related processes take place. It then turns to the production and perception of multimodal speech and how structures are coordinated within and across the two modalities. Finally, the book presents overviews and recent developments in machine-based speech recognition and synthesis of AV speech.
This handbook offers a comprehensive overview of the field of Persian linguistics, discusses its development, and captures critical accounts of cutting edge research within its major subfields, as well as outlining current debates and suggesting productive lines of future research. Leading scholars in the major subfields of Persian linguistics examine a range of topics split into six thematic parts. Following a detailed introduction from the editors, the volume begins by placing Persian in its historical and typological context in Part I. Chapters in Part II examine topics relating to phonetics and phonology, while Part III looks at approaches to and features of Persian syntax. The fourth part of the volume explores morphology and lexicography, as well as the work of the Academy of Persian Language and Literature. Part V, language and people, covers topics such as language contact and teaching Persian as a foreign language, while the final part examines psycho-, neuro-, and computational linguistics. The volume will be an essential resource for all scholars with an interest in Persian language and linguistics.
This handbook compares the main analytic frameworks and methods of contemporary linguistics. It offers a unique overview of linguistic theory, revealing the common concerns of competing approaches. By showing their current and potential applications it provides the means by which linguists and others can judge what are the most useful models for the task in hand. Distinguished scholars from all over the world explain the rationale and aims of over thirty explanatory approaches to the description, analysis, and understanding of language. Each chapter considers the main goals of the model; the relation it proposes between lexicon, syntax, semantics, pragmatics, and phonology; the way it defines the interactions between cognition and grammar; what it counts as evidence; and how it explains linguistic change and structure. The Oxford Handbook of Linguistic Analysis offers an indispensable guide for everyone researching any aspect of language, including those in linguistics, comparative philology, cognitive science, developmental psychology, computational science, and artificial intelligence. This second edition has been updated to include seven new chapters looking at linguistic units in language acquisition, conversation analysis, neurolinguistics, experimental phonetics, phonological analysis, experimental semantics, and distributional typology.
This book is an advanced introduction to semantics that presents this crucial component of human language through the lens of the 'Meaning-Text' theory - an approach that treats linguistic knowledge as a huge inventory of correspondences between thought and speech. Formally, semantics is viewed as an organized set of rules that connect a representation of meaning (Semantic Representation) to a representation of the sentence (Deep-Syntactic Representation). The approach is particularly interesting for computer-assisted language learning, natural language processing and computational lexicography, as our linguistic rules easily lend themselves to formalization and computer applications. The model combines abstract theoretical constructions with numerous linguistic descriptions, as well as multiple practice exercises that provide a solid hands-on approach to learning how to describe natural language semantics.
Language and Computers introduces students to the fundamentals of how computers are used to represent, process, and organize textual and spoken information. Concepts are grounded in real-world examples familiar from students' experiences of using language and computers in everyday life.
* A real-world introduction to the fundamentals of how computers process language, written specifically for the undergraduate audience and introducing key concepts from computational linguistics.
* Offers a comprehensive explanation of the problems computers face in handling natural language.
* Covers a broad spectrum of language-related applications and issues, including major computer applications involving natural language and the social and ethical implications of these new developments.
* Focuses on real-world examples with which students can identify, using these to explore the technology and how it works.
* Features under-the-hood sections that give greater detail on selected advanced topics, rendering the book appropriate for more advanced courses or for independent study by the motivated reader.
Experimental syntax is an area that is rapidly growing as linguistic research becomes increasingly focused on replicable language data, in both fieldwork and laboratory environments. The first of its kind, this handbook provides an in-depth overview of current issues and trends in this field, with contributions from leading international scholars. It pays special attention to sentence acceptability experiments, outlining current best practices in conducting tests, and pointing out promising new avenues for future research. Separate sections review research results from the past 20 years, covering specific syntactic phenomena and language types. The handbook also outlines other common psycholinguistic and neurolinguistic methods for studying syntax, comparing and contrasting them with acceptability experiments, and giving useful perspectives on the interplay between theoretical and experimental linguistics. Providing an up-to-date reference on this exciting field, it is essential reading for students and researchers in linguistics interested in using experimental methods to conduct syntactic research.
This book is about a new approach in the field of computational linguistics: the idea of constructing n-grams in a non-linear manner, whereas the traditional approach uses data from the surface structure of texts, i.e., the linear structure. In this book, we propose and systematize the concept of syntactic n-grams, which allows syntactic information to be used within automatic text processing methods related to classification or clustering. It is a very interesting example of the application of linguistic information in automatic (computational) methods. Roughly speaking, the suggestion is to follow syntactic trees and construct n-grams based on paths in these trees. There are several types of non-linear n-grams; future work should determine which types of n-grams are more useful in which natural language processing (NLP) tasks. This book is intended for specialists in the field of computational linguistics. However, we have made an effort to explain clearly how to use n-grams and provide a large number of examples, and therefore we believe that the book is also useful for graduate students who already have some background in the field.
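To make the contrast concrete, the short Python sketch below shows traditional linear n-grams next to n-grams read off head-to-dependent paths in a dependency tree, in the spirit of the syntactic n-grams described above. It is not taken from the book; the tiny hand-built tree and helper names are illustrative assumptions.

```python
# Linear n-grams vs. a simple path-based (syntactic) n-gram extraction.

def linear_ngrams(tokens, n):
    """Traditional n-grams over the surface (linear) order of the text."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def syntactic_ngrams(tree, head, n, path=()):
    """N-grams collected along head-to-dependent paths in a dependency tree."""
    path = path + (head,)
    grams = [path[-n:]] if len(path) >= n else []
    for dependent in tree.get(head, []):
        grams.extend(syntactic_ngrams(tree, dependent, n, path))
    return grams

if __name__ == "__main__":
    sentence = ["the", "dog", "chased", "a", "cat"]
    # Toy dependency tree as head -> dependents, rooted at the verb "chased".
    tree = {"chased": ["dog", "cat"], "dog": ["the"], "cat": ["a"]}

    print(linear_ngrams(sentence, 2))            # bigrams over surface order
    print(syntactic_ngrams(tree, "chased", 2))   # bigrams along tree paths
```

Note how the path-based bigrams include pairs such as ("chased", "cat") that never occur adjacently in the surface string, which is exactly the kind of information linear n-grams miss.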
This white paper is part of a series that promotes knowledge about language technology and its potential. It addresses educators, journalists, politicians, language communities and others. The availability and use of language technology in Europe varies between languages. Consequently, the actions that are required to further support research and development of language technologies also differ for each language. The required actions depend on many factors, such as the complexity of a given language and the size of its community. META-NET, a Network of Excellence funded by the European Commission, has conducted an analysis of current language resources and technologies. This analysis focused on the 23 official European languages as well as other important national and regional languages in Europe. The results of this analysis suggest that there are many significant research gaps for each language. A more detailed expert analysis and assessment of the current situation will help maximise the impact of additional research and minimise any risks. META-NET consists of 54 research centres from 33 countries that are working with stakeholders from commercial businesses, government agencies, industry, research organisations, software companies, technology providers and European universities. Together, they are creating a common technology vision while developing a strategic research agenda that shows how language technology applications can address these research gaps by 2020.