area and in applications to linguistics, formal epistemology, and the study of norms. The second contains papers on non-classical and many-valued logics, with an eye on applications in computer science and, through it, to engineering. The third concerns the logic of belief management, which is likewise closely connected with recent work in computer science but also links directly with epistemology, the philosophy of science, the study of legal and other normative systems, and cognitive science. The grouping is of course rough, for there are contributions to the volume that lie astride a boundary; at least one of them is relevant, from a very abstract perspective, to all three areas. We say a few words about each of the individual chapters, to relate them to each other and to the general outlook of the volume. Modal Logics: The first bundle of papers in this volume contains contributions to modal logic. Three of them examine general problems that arise for all kinds of modal logics. The first paper is essentially semantical in its approach, the second proof-theoretic, the third semantical again. Commutativity of quantifiers in varying-domain Kripke models, by R. Goldblatt and I. Hodkinson, investigates the possibility of commutation (i.e. reversing the order) for quantifiers in first-order modal logics interpreted over relational models with varying domains. The authors study a possible-worlds style structural model theory that does not validate commutation, but satisfies all the axioms originally presented by Kripke for his familiar semantics for first-order modal logic.
Computational semantics is the art and science of computing meaning in natural language. The meaning of a sentence is derived from the meanings of the individual words in it, and this process can be made so precise that it can be implemented on a computer. Designed for students of linguistics, computer science, logic and philosophy, this comprehensive text shows how to compute meaning using the functional programming language Haskell. It deals with both denotational meaning (where meaning comes from knowing the conditions of truth in situations), and operational meaning (where meaning is an instruction for performing cognitive action). Including a discussion of recent developments in logic, it will be invaluable to linguistics students wanting to apply logic to their studies, logic students wishing to learn how their subject can be applied to linguistics, and functional programmers interested in natural language processing as a new application area.
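To make the blurb's central claim concrete, the snippet below is a minimal, hypothetical sketch of how denotational meaning can be computed in Haskell, the language the book uses: word meanings are entities and functions, and a sentence's truth conditions fall out of applying those meanings to one another. The model, lexicon, and stipulated facts are invented for illustration and are not taken from the book.

```haskell
-- A minimal, hypothetical sketch of compositional, denotational
-- semantics in Haskell (illustrative only; not code from the book).
module MiniSemantics where

-- A toy model: a small domain of entities.
data Entity = Alice | Bob | Cyrus
  deriving (Eq, Show, Enum, Bounded)

domain :: [Entity]
domain = [minBound .. maxBound]

-- Word meanings: names denote entities, intransitive verbs denote
-- predicates, determiners denote relations between predicates.
alice, bob :: Entity
alice = Alice
bob   = Bob

runs, sleeps, person :: Entity -> Bool
runs   e = e `elem` [Alice, Cyrus]  -- stipulated facts of the model
sleeps e = e == Bob
person _ = True

every, some :: (Entity -> Bool) -> (Entity -> Bool) -> Bool
every p q = all (\x -> not (p x) || q x) domain
some  p q = any (\x -> p x && q x) domain

-- Sentence meanings (truth values) arise by function application:
-- the compositional step the blurb describes.
main :: IO ()
main = print ( runs alice           -- "Alice runs"          -> True
             , sleeps bob           -- "Bob sleeps"          -> True
             , every person sleeps  -- "Every person sleeps" -> False
             )
```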
With the first publication of this book in 1988, the centrality of the lexicon in language research was becoming increasingly apparent and the use of relational models of the lexicon had been the particular focus of research in a variety of disciplines since the early 1980s. This convergence of approach made the present collection especially welcome for bringing together reports of theoretical developments and applications in relational semantics in computer science, linguistics, cognitive science, anthropology and industrial research. It explains in detail some important applications of relational models to the construction of natural language interfaces, the building of thesauri for bibliographic information retrieval systems and the compilation of terminology banks for machine translation systems. Relational Models of the Lexicon not only provides an invaluable survey of research in relational semantics, but offers a stimulus for potential research advances in semantics, natural language processing and knowledge representation.
This book collects and introduces some of the best and most useful work in practical lexicography. It has been designed as a resource for students and scholars of lexicography and lexicology and to be an essential reference for professional lexicographers. It focusses on central issues in the field and covers topics hotly debated in lexicography circles. After a full contextual introduction, Thierry Fontenelle divides the book into twelve parts: theoretical perspectives, corpus design, lexicographical evidence, word senses and polysemy, collocations and idioms, definitions, examples, grammar and usage, bilingual lexicography, tools and methods, semantic networks, and how dictionaries are used. The book is fully referenced and indexed.
Memory-based language processing - a machine learning and problem solving method for language technology - is based on the idea that the direct reuse of examples using analogical reasoning is more suited for solving language processing problems than the application of rules extracted from those examples. This book discusses the theory and practice of memory-based language processing, showing its comparative strengths over alternative methods of language modelling. Language is complex, with few generalizations, many sub-regularities and exceptions, and the advantage of memory-based language processing is that it does not abstract away from this valuable low-frequency information. By applying the model to a range of benchmark problems, the authors show that for linguistic areas ranging from phonology to semantics, it produces excellent results. They also describe TiMBL, a software package for memory-based language processing. The first comprehensive overview of the approach, this book will be invaluable for computational linguists, psycholinguists and language engineers.
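The following is a hedged sketch of the core memory-based idea in a few lines of Haskell, rather than via the TiMBL package the book describes: all training examples are stored, no rules are abstracted from them, and a new case receives the label of its most similar stored neighbour. The overlap metric is a standard choice in this tradition, but the feature encoding and the toy plural-suffix task are invented for the example.

```haskell
-- A hedged sketch of the memory-based idea (not TiMBL itself):
-- store every training example and classify a new case by the
-- label of its nearest stored neighbour.
module MemoryBased where

import Data.List (minimumBy)
import Data.Ord  (comparing)

type Features = [String]  -- symbolic feature values
type Label    = String

-- Overlap distance: how many feature positions disagree.
overlap :: Features -> Features -> Int
overlap xs ys = length (filter id (zipWith (/=) xs ys))

-- 1-nearest-neighbour classification over the stored examples:
-- no rules are extracted; the examples themselves are the model.
classify :: [(Features, Label)] -> Features -> Label
classify memory x = snd (minimumBy (comparing (overlap x . fst)) memory)

-- Toy task: predict a German plural suffix from a noun's last
-- three letters (invented data).
memoryBank :: [(Features, Label)]
memoryBank =
  [ (["u","n","d"], "-e")   -- Hund -> Hunde
  , (["i","n","d"], "-er")  -- Kind -> Kinder
  , (["a","u","s"], "-er")  -- Haus -> Haeuser
  ]

main :: IO ()
main = putStrLn (classify memoryBank ["u","n","t"])  -- nearest: Hund, so "-e"
```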
This groundbreaking book offers a new and compelling perspective on the structure of human language. The fundamental issue it addresses is the proper balance between syntax and semantics, between structure and derivation, and between rule systems and lexicon. It argues that the balance struck by mainstream generative grammar is wrong. It puts forward a new basis for syntactic theory, drawing on a wide range of frameworks, and charts new directions for research. In the past four decades, theories of syntactic structure have become more abstract, and syntactic derivations have become ever more complex. Peter Culicover and Ray Jackendoff trace this development through the history of contemporary syntactic theory, showing how much it has been driven by theory-internal rather than empirical considerations. They develop an alternative that is responsive to linguistic, cognitive, computational, and biological concerns. At the core of this alternative is the Simpler Syntax Hypothesis: the most explanatory syntactic theory is one that imputes the minimum structure necessary to mediate between phonology and meaning. A consequence of this hypothesis is a far richer mapping between syntax and semantics than is generally assumed. Through concrete analyses of numerous grammatical phenomena, some well studied and some new, the authors demonstrate the empirical and conceptual superiority of the Simpler Syntax approach. Simpler Syntax is addressed to linguists of all persuasions. It will also be of central interest to those concerned with language in psychology, human biology, evolution, computational science, and artificial intelligence.
This white paper is part of a series that promotes knowledge about language technology and its potential. It addresses educators, journalists, politicians, language communities and others. The availability and use of language technology in Europe varies between languages. Consequently, the actions that are required to further support research and development of language technologies also differ for each language. The required actions depend on many factors, such as the complexity of a given language and the size of its community. META-NET, a Network of Excellence funded by the European Commission, has conducted an analysis of current language resources and technologies. This analysis focused on the 23 official European languages as well as other important national and regional languages in Europe. The results of this analysis suggest that there are many significant research gaps for each language. A more detailed expert analysis and assessment of the current situation will help maximise the impact of additional research and minimise any risks. META-NET consists of 54 research centres from 33 countries that are working with stakeholders from commercial businesses, government agencies, industry, research organisations, software companies, technology providers and European universities. Together, they are creating a common technology vision while developing a strategic research agenda that shows how language technology applications can address any research gaps by 2020.
A landmark in linguistics and cognitive science. Ray Jackendoff proposes a new holistic theory of the relation between the sounds, structure, and meaning of language and their relation to mind and brain. Foundations of Language exhibits the most fundamental new thinking in linguistics since Noam Chomsky's Aspects of the Theory of Syntax in 1965 -- yet is readable, stylish, and accessible to a wide readership. Along the way it provides new insights on the evolution of language, thought, and communication.
This volume of newly commissioned essays examines current theoretical and computational work on polysemy, the term used in semantic analysis to describe words with more than one meaning. Such words present few difficulties in everyday language, but pose central problems for linguists and lexicographers, especially for those involved in lexical semantics and in computational modelling. The contributors to this book - leading researchers in theoretical and computational linguistics - consider the implications of these problems for linguistic theory and how they may be addressed by computational means. The theoretical essays in the book examine polysemy as an aspect of a broader theory of word meaning. Three theoretical approaches are presented: the Classical (or Aristotelian), the Prototypical, and the Relational. Their authors describe the nature of polysemy, the criteria for detecting it, and its manifestations across languages. They examine the issues arising from the regularity of polysemy and the theoretical principles proposed to account for the interaction of lexical meaning with the semantics and syntax of the context in which it occurs. Finally they consider the formal representations of meaning in the lexicon, and their implications for dictionary construction. The computational essays are concerned with the challenge of polysemy to automatic sense disambiguation - how the intended meaning for a word occurrence can be identified. The approaches presented include the exploitation of lexical information in machine-readable dictionaries, machine learning based on patterns of word co-occurrence, and hybrid approaches that combine the two. As a whole the volume shows how on the one hand theoretical work provides the motivation and may suggest the basis for computational algorithms, while on the other computational results may validate, or reveal problems in, the principles set forth by theories.
This book is about investigating the way people use language in speech and writing. It introduces the corpus-based approach to the study of language, based on the analysis of large databases of real language examples, and illustrates exciting new findings about language and the different ways that people speak and write. The book is important both for its step-by-step descriptions of research methods and for its findings about grammar and vocabulary, language use, language learning, and differences in language use across texts and user groups.
Provides a valuable overview of the problems of syntax analysis, semantic analysis, text analysis and natural language generation. Although the text is written for readers with a background in computer science and finite mathematics, advanced knowledge of programming languages or linguistics is unnecessary.
This book analyses diverse public discourses to investigate how wealth inequality has been portrayed in the British media from the time of the Second World War to the present day. Using a variety of corpus-assisted methods of discourse analysis, chapters present an historicized perspective on how the mass media have helped to make sharply increased wealth inequality seem perfectly normal. Print, radio and online media sources are interrogated using methodologies grounded in critical discourse analysis, critical stylistics and corpus linguistics in order to examine the influence of the media on the British electorate, who have passively consented to the emergence of an even less egalitarian Britain. Covering topics such as Second World War propaganda, the 'Change4Life' anti-obesity campaign and newspaper, parliamentary and TV news programme attitudes to poverty and austerity, this book will be of value to all those interested in the mass media's contribution to the entrenched inequality in modern Britain.
Recent decades have seen a fundamental change and transformation in the commercialisation and popularisation of sports and sporting events. Corpus Approaches to the Language of Sports uses corpus resources to offer new perspectives on the language and discourse of this increasingly popular and culturally significant area of research. Bringing together a range of empirical studies from leading scholars, this book bridges the gap between quantitative corpus approaches and more qualitative, multimodal discourse methods. Covering a wide range of sports, including football, cycling and basketball, the linguistic aspects of sports language are analysed across different genres and contexts. Highlighting the importance of studying the language of sports alongside its accompanying audio-visual modes of communication, chapters draw on new digitised collections of language to fully describe and understand the complexities of communication through various channels. In doing so, Corpus Approaches to the Language of Sports not only offers exciting new insights into the language of sports but also extends the scope of corpus linguistics beyond traditional monomodal approaches to put multimodality firmly on the agenda.
**Shortlisted for the 2021 BAAL Book Prize for an outstanding book in the field of Applied Linguistics** Situated at the interface of corpus linguistics and health communication, Corpus, Discourse and Mental Health provides insights into the linguistic practices of members of three online support communities as they describe their experiences of living with and managing different mental health problems, including anorexia nervosa, depression and diabulimia. In examining contemporary health communication data, the book combines quantitative corpus linguistic methods with qualitative discourse analysis that draws upon recent theoretical insights from critical health sociology. Using this mixed-methods approach, the analysis identifies patterns and consistencies in the language used by people experiencing psychological distress and their role in realising varying representations of mental illness, diagnosis and treatment. Far from being neutral accounts of suffering and treating illness, corpus analysis illustrates that these interactions are suffused with moral and ideological tensions as sufferers seek to collectively negotiate responsibility for the onset and treatment of recalcitrant mental health problems. Integrating corpus linguistics, critical discourse analysis and health sociology, this book showcases the capacity of linguistic analysis for understanding mental health discourse as well as critically exploring the potential of corpus linguistics to offer an evidence-based approach to health communication research.
The idea that the expression of radical beliefs is a predictor of future acts of political violence has been a central tenet of counter-extremism over the last two decades. Not only has this imposed a duty upon doctors, lecturers and teachers to inform on the radical beliefs of their patients and students but, as this book argues, it is also a fundamentally flawed concept. Informed by his own experience with the UK's Prevent programme while teaching in a Muslim community, Rob Faure Walker explores the linguistic emergence of 'extremism' in political discourse and the potentially damaging generative effect of this language. Taking a new approach which combines critical discourse analysis with critical realism, this book shows how the fear of being labelled as an 'extremist' has resulted in counter-terrorism strategies which actually undermine moderating mechanisms in a democracy. Analysing the generative mechanisms by which the language of counter-extremism might actually promote violence, Faure Walker explains how understanding the potentially oppressive properties of language can help us transcend them. The result is an immanent critique of the most pernicious aspects of the global War on Terror, those that are embedded in our everyday language and political discourse. Drawing on the author's own successful lobbying activities against counter-extremism, this book presents a model for how discourse analysis and critical realism can and should engage with the political and how this will effect meaningful change.
Multi-Dimensional Analysis: Research Methods and Current Issues provides a comprehensive guide both to the statistical methods in Multi-Dimensional Analysis (MDA) and to its key elements, such as corpus building, tagging, and tools. The major goal is to explain the steps involved in the method so that readers may better understand this complex research framework and conduct MD research on their own. Multi-Dimensional Analysis is a method that allows the researcher to describe different registers (textual varieties defined by their social use) such as academic settings, regional discourse, social media, movies, and pop songs. Through multivariate statistical techniques, MDA identifies complementary correlation groupings of dozens of variables, including variables which belong both to the grammatical and semantic domains. Such groupings are then associated with situational variables of texts like information density, orality, and narrativity to determine linguistic constructs known as dimensions of variation, which provide a scale for the comparison of a large number of texts and registers. This book is a comprehensive research guide to MDA.
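To give a flavour of the statistics involved, the sketch below computes a Pearson correlation between two hypothetical feature-rate vectors; MDA proper applies factor analysis to a full correlation matrix of dozens of such features, but the pairwise case shows how "complementary correlation groupings" are detected. The feature names and counts are assumptions for the illustration, not data from the book.

```haskell
-- A hedged sketch of the statistical core of MDA's first step:
-- Pearson correlation between two feature-rate vectors.  Full MDA
-- factor-analyses a whole matrix of such correlations.
module Correlate where

pearson :: [Double] -> [Double] -> Double
pearson xs ys = cov / (sd xs * sd ys)
  where
    n      = fromIntegral (length xs)
    mean v = sum v / n
    cov    = sum (zipWith (\x y -> (x - mean xs) * (y - mean ys)) xs ys) / n
    sd v   = sqrt (sum [ (x - mean v) ^ (2 :: Int) | x <- v ] / n)

-- Hypothetical per-1,000-word rates across five texts for two
-- features that MDA studies typically find grouping together on
-- an "involved" dimension.
firstPersonPronouns, privateVerbs :: [Double]
firstPersonPronouns = [42, 8, 35, 5, 50]
privateVerbs        = [18, 3, 15, 2, 22]

main :: IO ()
main = print (pearson firstPersonPronouns privateVerbs)  -- close to 1.0
```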
Linguistic Issues in Language Technology focuses on the relationships between linguistic insights and language technology. In conjunction with machine learning and statistical techniques, more sophisticated models of language and speech are needed to make significant progress in both existing and newly emerging areas of computational language analysis. The vast quantity of electronically accessible natural language data provides unprecedented opportunities for data-intensive analysis of linguistic phenomena, which can in turn enrich computational methods. Linguistic Issues in Language Technology provides a forum for this work. In this volume, contributors offer new perspectives on semantic representations for textual inference.
This handbook presents an overview of the phenomenon of reference - the ability to refer to and pick out entities - which is an essential part of human language and cognition. In the volume's 21 chapters, international experts in the field offer a critical account of all aspects of reference from a range of theoretical perspectives. Chapters in the first part of the book are concerned with basic questions related to different types of referring expression and their interpretation. They address questions about the role of the speaker - including speaker intentions - and of the addressee, as well as the role played by the semantics of the linguistic forms themselves in establishing reference. This part also explores the nature of such concepts as definite and indefinite reference and specificity, and the conditions under which reference may fail. The second part of the volume looks at implications and applications, with chapters covering such topics as the acquisition of reference by children, the processing of reference both in the human brain and by machines. The volume will be of interest to linguists in a wide range of subfields, including semantics, pragmatics, computational linguistics, and psycho- and neurolinguistics, as well as scholars in related fields such as philosophy and computer science.
This book is open access and available on www.bloomsburycollections.com. It is funded by Knowledge Unlatched. Corpus linguistics has much to offer history, since both disciplines engage so heavily in the analysis of large amounts of textual material. This book demonstrates the opportunities for exploring corpus linguistics as a method in historiography and the humanities and social sciences more generally. Focussing on the topic of prostitution in 17th-century England, it shows how corpus methods can assist in social research and can be used to deepen our understanding. McEnery and Baker draw principally on two sources - the newsbook Mercurius Fumigosis and the Early English Books Online Corpus. This scholarship on prostitution and the sex trade offers insight into the social position of women in history.
A comprehensive corpus analysis of adolescent health communication is long overdue - and this book provides it. We know comparatively little about the language adolescents use to articulate their health concerns, and discourse analysis of their choices can shed light on their attitudes towards and beliefs about health and illness. This book interrogates a two million word corpus of messages posted by adolescents to an online health forum. It adopts a mixed-methods corpus approach to health communication, combining quantitative and qualitative techniques. Analysis in this way gives voice to an age group whose subjective experiences of illness have often been marginalized or simply overlooked in favour of the concerns of older populations.
Linguistically annotated corpora are becoming a central part of the corpus linguistics field. One of their main strengths is the level of searchability they offer, but with the annotation come problems of the initial complexity of queries and query tools. This book gives a full, pedagogic account of this burgeoning field. Beginning with an overview of corpus linguistics, its prerequisites and goals, the book then introduces linguistically annotated corpora. It explores the different levels of linguistic annotation, including morphological, part-of-speech, syntactic, semantic and discourse-level annotation, as well as the advantages and challenges of such annotations. It covers the main annotated corpora for English, the Penn Treebank, the International Corpus of English, and OntoNotes, as well as a wide range of corpora for other languages. In its third part, search strategies required for different types of data are explored. All chapters are accompanied by exercises and by sections on further reading, together with an integral companion website that contains lists and guidance on contemporary annotated corpora and query tools.