Computational linguistics
This 1992 collection takes the exciting step of examining natural language phenomena from the perspective of both computational linguistics and formal semantics. Computational linguistics has until now been primarily concerned with the construction of computational models for handling the complexities of linguistic form, but has not tackled the questions of representing or computing meaning. Formal semantics, on the other hand, has attempted to account for the relations between forms and meanings, without necessarily attending to computational concerns. The book introduces the reader to the two disciplines and considers the prospects for a more unified and comprehensive computational theory of language that might emerge from their amalgamation. It will be of great interest to those working in the fields of computation, logic, semantics, artificial intelligence and linguistics generally.
Semantic Interpretation and the Resolution of Ambiguity presents an important advance in computer understanding of natural language. While parsing techniques have been greatly improved in recent years, the approach to semantics has generally been ad hoc and has had little theoretical basis. Graeme Hirst offers a new, theoretically motivated foundation for conceptual analysis by computer, and shows how this framework facilitates the resolution of lexical and syntactic ambiguities. His approach is interdisciplinary, drawing on research in computational linguistics, artificial intelligence, Montague semantics, and cognitive psychology.
A primary problem in the area of natural language processing has been that of semantic analysis. This book looks at the semantics of natural languages in context. It presents an approach to the computational processing of English text that combines current theories of knowledge representation and reasoning in Artificial Intelligence with the latest linguistic views of lexical semantics. This results in distinct advantages for relating the semantic analysis of a sentence to its context. A key feature is the clear separation of the lexical entries that represent the domain-specific linguistic information from the semantic interpreter that performs the analysis. The criteria for defining the lexical entries are firmly grounded in current linguistic theories, facilitating integration with existing parsers. This approach has been implemented and tested in Prolog on a domain of physics word problems, and full details of the algorithms and code are presented. Semantic Processing for Finite Domains will appeal to postgraduates and researchers in computational linguistics, and to industrial groups specializing in natural language processing.
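To make that architecture concrete, here is a minimal Python sketch of the separation the blurb describes, with domain-specific lexical entries kept apart from a generic interpreter. The book's own implementation is in Prolog; the lexicon, predicates, and toy physics-style domain below are invented for illustration.

```python
# Domain-specific lexicon: word -> semantic predicate and argument roles.
# These entries are hypothetical, not the book's actual lexical format.
LEXICON = {
    "ball":  {"pred": "object", "args": []},
    "falls": {"pred": "motion", "args": ["theme"]},
}

def interpret(tokens):
    """Domain-independent interpreter: looks words up in LEXICON and
    fills a verb's argument roles from the surrounding noun predicates."""
    entries = [LEXICON[t] for t in tokens if t in LEXICON]
    nouns = [e["pred"] for e in entries if not e["args"]]
    for e in entries:
        if e["args"]:                       # a predicate needing arguments
            roles = dict(zip(e["args"], nouns))
            return {"pred": e["pred"], **roles}
    return None

print(interpret("the ball falls".split()))
# {'pred': 'motion', 'theme': 'object'}
```

Swapping in a lexicon for a different domain changes the analyses without touching the interpreter, which is the point of the separation.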
This book deals with a major problem in the study of language: the problem of reference. The ease with which we refer to things in conversation is deceptive. Upon closer scrutiny, it turns out that we hardly ever tell each other explicitly what object we mean, although we expect our interlocutor to discern it. Amichai Kronfeld provides an answer to two questions associated with this: how do we successfully refer, and how can a computer be programmed to achieve this? Beginning with the major theories of reference, Dr Kronfeld provides a consistent philosophical view which is a synthesis of Frege's and Russell's semantic insights with Grice's and Searle's pragmatic theories. This leads to a set of guiding principles, which are then applied to a computational model of referring. The discussion is made accessible to readers from a number of backgrounds: in particular, students and researchers in the areas of computational linguistics, artificial intelligence and the philosophy of language will want to read this book.
The development and spread of machine translation systems is driving massive transformation processes in the language services industry. The 'mechanisation' of translation is not only upending the translation market, but also confronts us with a fundamental question: what is 'translation' when a machine translates human language? This work addresses the problem from the perspectives of translation studies and the sociology of technology. It focuses on the concepts of translation held in computational linguistics, which result from an interplay between social construction and technical constraints. Computational linguists' notion of translation is oriented towards the mechanics of the machine, creating a tension with the paradigms of human translation.
In everyday communication, Europe's citizens, business partners and politicians are inevitably confronted with language barriers. Language technology has the potential to overcome these barriers and to provide innovative interfaces to technologies and knowledge. This document presents a Strategic Research Agenda for Multilingual Europe 2020. The agenda was prepared by META-NET, a European Network of Excellence. META-NET consists of 60 research centres in 34 countries, which cooperate with stakeholders from business, government agencies, research organisations, non-governmental organisations, language communities and European universities. META-NET's vision is high-quality language technology for all European languages.
"The research carried out in the area of language technology is of utmost importance for the consolidation of Portuguese as a language of global communication in the information society." - Dr. Pedro Passos Coelho (Prime Minister of Portugal)
"It is imperative that language technologies for Slovene are developed systematically if we want Slovene to flourish also in the future digital world." - Dr. Danilo Türk (President of the Republic of Slovenia)
"For small languages like Latvian, keeping up with the ever-increasing pace of technological development is crucial. The only way to ensure the future existence of our language is to provide its users with the same opportunities that users of larger languages enjoy. Being at the forefront of modern technologies is therefore our opportunity." - Valdis Dombrovskis (Prime Minister of Latvia)
"Europe's inherent multilingualism and our scientific expertise are the perfect prerequisites for meeting the challenge that language technology poses. META-NET opens up new opportunities for the development of ubiquitous multilingual technologies." - Prof. Dr. Annette Schavan (German Minister of Education and Research)
In this brief, the authors discuss recently explored spectral (sub-segmental and pitch-synchronous) and prosodic (global and local features at word and syllable levels in different parts of the utterance) features for discerning emotions in a robust manner. The authors also delve into the complementary evidence obtained from excitation source, vocal tract system and prosodic features for the purpose of enhancing emotion recognition performance. Features based on speaking-rate characteristics are explored with the help of multi-stage and hybrid models for further improving emotion recognition performance. The proposed spectral and prosodic features are evaluated on a real-life emotional speech corpus.
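As a rough illustration of what frame-level prosodic features look like, here is a minimal NumPy sketch that computes short-time energy and a crude autocorrelation pitch estimate. The frame sizes, pitch search range, and sampling rate are illustrative assumptions, not the feature set used in the book.

```python
import numpy as np

def prosodic_features(signal, sr=16000, frame=400, hop=160):
    """Return (energy, pitch) per 25 ms frame of a mono signal."""
    feats = []
    for start in range(0, len(signal) - frame, hop):
        x = signal[start:start + frame]
        energy = float(np.mean(x ** 2))              # short-time energy
        ac = np.correlate(x, x, mode="full")[frame - 1:]  # lags 0..frame-1
        lo, hi = sr // 400, sr // 80                 # search 80-400 Hz
        lag = lo + int(np.argmax(ac[lo:hi]))
        pitch = sr / lag if ac[lag] > 0 else 0.0     # crude F0 estimate
        feats.append((energy, pitch))
    return np.array(feats)

# Example: a 200 Hz tone should yield pitch estimates near 200 Hz.
t = np.arange(16000) / 16000
print(prosodic_features(np.sin(2 * np.pi * 200 * t))[:3])
```

Real systems would add the excitation-source and vocal-tract features the blurb mentions; this sketch only shows the shape of the frame-level pipeline.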
Language, Cognition, and Human Nature collects together for the first time much of Steven Pinker's most influential scholarly work on language and cognition. Pinker's seminal research explores the workings of language and its connections to cognition, perception, social relationships, child development, human evolution, and theories of human nature. This eclectic collection spans Pinker's thirty-year career, exploring his favorite themes in greater depth and scientific detail. It includes thirteen of Pinker's classic articles, ranging over topics such as language development in children, mental imagery, the recognition of shapes, the computational architecture of the mind, the meaning and uses of verbs, the evolution of language and cognition, the nature-nurture debate, and the logic of innuendo and euphemism. Each outlines a major theory or takes up an argument with another prominent scholar, such as Stephen Jay Gould, Noam Chomsky, or Richard Dawkins. Featuring a new introduction by Pinker that discusses his books and scholarly work, this collection reflects essential contributions to cognitive science by one of our leading thinkers and public intellectuals.
This open-access book uses a large-scale online experiment to investigate how displaying citation or download counts affects relevance judgements in academic search systems. When searching for information, people use a variety of criteria to assess the relevance of search results. The book offers the first systematic overview of the influences at work in the process of judging the relevance of search results in academic search systems. It also presents a sophisticated and complex methodological framework for the experimental study of relevance criteria, suitable for further research on relevance criteria in information science.
Recent developments in artificial intelligence, especially neural network and deep learning technology, have led to rapidly improving performance in voice assistants such as Siri and Alexa. Over the next few years, capability will continue to improve and become increasingly personalised. Today's voice assistants will evolve into virtual personal assistants firmly embedded within our everyday lives. Told from the perspective of a fictitious personal assistant called Cyba, this book provides an accessible but detailed overview of how a conversational voice assistant works, especially how it understands spoken language, manages conversations, answers questions and generates responses. Cyba explains through examples and diagrams the neural network technology underlying speech recognition and synthesis, natural language understanding, knowledge representation, conversation management, language translation and chatbot technology. Cyba also explores the implications of this rapidly evolving technology for security, privacy and bias, and gives a glimpse of future developments. Cyba's website can be found at HeyCyba.com.
Experimental syntax is an area that is rapidly growing as linguistic research becomes increasingly focused on replicable language data, in both fieldwork and laboratory environments. The first of its kind, this handbook provides an in-depth overview of current issues and trends in this field, with contributions from leading international scholars. It pays special attention to sentence acceptability experiments, outlining current best practices in conducting tests, and pointing out promising new avenues for future research. Separate sections review research results from the past 20 years, covering specific syntactic phenomena and language types. The handbook also outlines other common psycholinguistic and neurolinguistic methods for studying syntax, comparing and contrasting them with acceptability experiments, and giving useful perspectives on the interplay between theoretical and experimental linguistics. Providing an up-to-date reference on this exciting field, it is essential reading for students and researchers in linguistics interested in using experimental methods to conduct syntactic research.
This book is the first dedicated to linguistic parsing - the processing of natural language according to the rules of a formal grammar - in the Minimalist Program. While Minimalism has been at the forefront of generative grammar for several decades, it often remains inaccessible to computer scientists and others in adjacent fields. This volume makes connections with standard computational architectures, provides efficient implementations of some fundamental minimalist accounts of syntax, explores implementations of recent theoretical proposals, and investigates correlations between posited structures and measures of neural activity during human language comprehension. These studies will appeal to graduate students and researchers in formal syntax, computational linguistics, psycholinguistics, and computer science.
This handbook offers a comprehensive overview of the field of Persian linguistics, discusses its development, and captures critical accounts of cutting-edge research within its major subfields, as well as outlining current debates and suggesting productive lines of future research. Leading scholars in the major subfields of Persian linguistics examine a range of topics split into six thematic parts. Following a detailed introduction from the editors, the volume begins by placing Persian in its historical and typological context in Part I. Chapters in Part II examine topics relating to phonetics and phonology, while Part III looks at approaches to and features of Persian syntax. The fourth part of the volume explores morphology and lexicography, as well as the work of the Academy of Persian Language and Literature. Part V, language and people, covers topics such as language contact and teaching Persian as a foreign language, while the final part examines psycho-, neuro-, and computational linguistics. The volume will be an essential resource for all scholars with an interest in Persian language and linguistics.
When we speak, we configure the vocal tract, which shapes the visible motions of the face and the patterning of the audible speech acoustics. Similarly, we use these visible and audible behaviors to perceive speech. This book showcases a broad range of research investigating how these two types of signals are used in spoken communication, how they interact, and how they can be used to enhance the realistic synthesis and recognition of audible and visible speech. The volume begins by addressing two important questions about human audiovisual performance: how auditory and visual signals combine to access the mental lexicon and where in the brain this and related processes take place. It then turns to the production and perception of multimodal speech and how structures are coordinated within and across the two modalities. Finally, the book presents overviews and recent developments in machine-based speech recognition and synthesis of AV speech.
This book provides a comprehensive account of the role of recursion in language in two distinct but interconnected ways. First, David J. Lobina examines how recursion applies at different levels within a full description of natural language. Specifically, he identifies and evaluates recursion as: a) a central property of the computational system underlying the faculty of language; b) a possible feature of the derivations yielded by this computational system; c) a global characteristic of the structures generated by the language faculty; and d) a probable factor in the parsing operations employed during the processing of recursive structures. Second, the volume orders these different levels into a tripartite explanatory framework. According to this framework, the investigation of any particular cognitive domain must begin by first outlining what sort of mechanical procedure underlies the relevant capacity (including what sort of structures it generates). Only then, the author argues, can we properly investigate its implementation, both at the level of abstract computations typical of competence-level analyses, and at the level of the real-time processing of behaviour.
This book explores the interaction between corpus stylistics and translation studies. It shows how corpus methods can be used to compare literary texts to their translations, through the analysis of Joseph Conrad's Heart of Darkness and four of its Italian translations. The comparison focuses on stylistic features related to the major themes of Heart of Darkness. By combining quantitative and qualitative techniques, Mastropierro discusses how alterations to the original's stylistic features can affect the interpretation of the themes in translation. The discussion illuminates the manipulative effects that translating can have on the reception of a text, showing how textual alterations can trigger different readings. This book advances the multidisciplinary dialogue between corpus linguistics and translation studies and is a valuable resource for students and researchers interested in the application of corpus approaches to stylistics and translation.
This book is open access and available on www.bloomsburycollections.com. It is funded by Knowledge Unlatched. Corpus linguistics has much to offer history, as both disciplines engage heavily in the analysis of large amounts of textual material. This book demonstrates the opportunities for exploring corpus linguistics as a method in historiography and the humanities and social sciences more generally. Focussing on the topic of prostitution in 17th-century England, it shows how corpus methods can assist in social research and deepen our understanding. McEnery and Baker draw principally on two sources - the newsbook Mercurius Fumigosus and the Early English Books Online Corpus. This scholarship on prostitution and the sex trade offers insight into the social position of women in history.
This handbook compares the main analytic frameworks and methods of contemporary linguistics. It offers a unique overview of linguistic theory, revealing the common concerns of competing approaches. By showing their current and potential applications it provides the means by which linguists and others can judge what are the most useful models for the task in hand. Distinguished scholars from all over the world explain the rationale and aims of over thirty explanatory approaches to the description, analysis, and understanding of language. Each chapter considers the main goals of the model; the relation it proposes between lexicon, syntax, semantics, pragmatics, and phonology; the way it defines the interactions between cognition and grammar; what it counts as evidence; and how it explains linguistic change and structure. The Oxford Handbook of Linguistic Analysis offers an indispensable guide for everyone researching any aspect of language, including those in linguistics, comparative philology, cognitive science, developmental psychology, computational science, and artificial intelligence. This second edition has been updated to include seven new chapters looking at linguistic units in language acquisition, conversation analysis, neurolinguistics, experimental phonetics, phonological analysis, experimental semantics, and distributional typology.
This book is about a new approach in the field of computational linguistics related to the idea of constructing n-grams in a non-linear manner, whereas the traditional approach uses the data from the surface structure of texts, i.e., the linear structure. In this book, we propose and systematize the concept of syntactic n-grams, which allows syntactic information to be used within automatic text processing methods related to classification or clustering. It is an interesting example of the application of linguistic information in automatic (computational) methods. Roughly speaking, the suggestion is to follow syntactic trees and construct n-grams based on paths in these trees. There are several types of non-linear n-grams; future work should determine which types of n-grams are more useful in which natural language processing (NLP) tasks. This book is intended for specialists in the field of computational linguistics. However, we have made an effort to explain clearly how to use n-grams; we provide a large number of examples, and therefore we believe that the book is also useful for graduate students who already have some background in the field.
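As a rough sketch of the idea, the following Python function walks a toy dependency tree and collects n-grams along head-to-dependent paths rather than along the sentence's linear order. The tree encoding and the example sentence are invented for illustration; the book's own n-gram types and notation may differ.

```python
def path_ngrams(tree, n):
    """Collect n-grams along head-to-dependent paths in a dependency tree.

    `tree` maps each word to its list of dependents; n-grams follow
    paths down the tree rather than the linear order of the sentence.
    """
    ngrams = []

    def walk(node, path):
        path = path + [node]
        if len(path) >= n:
            ngrams.append(tuple(path[-n:]))   # last n nodes on this path
        for child in tree.get(node, []):
            walk(child, path)

    # Roots are heads that never appear as someone's dependent.
    roots = set(tree) - {c for kids in tree.values() for c in kids}
    for root in roots:
        walk(root, [])
    return ngrams

# "John saw a dog" with 'saw' as root: saw -> John, saw -> dog -> a
toy_tree = {"saw": ["John", "dog"], "dog": ["a"]}
print(path_ngrams(toy_tree, 2))
# [('saw', 'John'), ('saw', 'dog'), ('dog', 'a')]
```

Note how ('dog', 'a') is produced even though "a" precedes "dog" in the surface string: the n-grams track syntactic structure, not word order.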
This white paper is part of a series that promotes knowledge about language technology and its potential. It addresses educators, journalists, politicians, language communities and others. The availability and use of language technology in Europe varies between languages. Consequently, the actions that are required to further support research and development of language technologies also differ for each language. The required actions depend on many factors, such as the complexity of a given language and the size of its community. META-NET, a Network of Excellence funded by the European Commission, has conducted an analysis of current language resources and technologies. This analysis focused on the 23 official European languages as well as other important national and regional languages in Europe. The results of this analysis suggest that there are many significant research gaps for each language. A more detailed expert analysis and assessment of the current situation will help maximise the impact of additional research and minimise any risks. META-NET consists of 54 research centres from 33 countries that are working with stakeholders from commercial businesses, government agencies, industry, research organisations, software companies, technology providers and European universities. Together, they are creating a common technology vision while developing a strategic research agenda that shows how language technology applications can address any research gaps by 2020.
This book is an advanced introduction to semantics that presents this crucial component of human language through the lens of the 'Meaning-Text' theory - an approach that treats linguistic knowledge as a huge inventory of correspondences between thought and speech. Formally, semantics is viewed as an organized set of rules that connect a representation of meaning (Semantic Representation) to a representation of the sentence (Deep-Syntactic Representation). The approach is particularly interesting for computer-assisted language learning, natural language processing and computational lexicography, as the linguistic rules easily lend themselves to formalization and computer applications. The model combines abstract theoretical constructions with numerous linguistic descriptions, as well as multiple practice exercises that provide a solid hands-on approach to learning how to describe natural language semantics.
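As a toy illustration of such rules, the sketch below rewrites a simple Semantic Representation into a Deep-Syntactic Representation by binding semantic arguments to numbered actant slots. The rule format and the example are invented and greatly simplified, not taken from the book.

```python
# Each rule maps a semantic predicate to a deep-syntactic lexeme with
# numbered actants (I = subject-like, II = object-like). Hypothetical format.
RULES = {
    "like(X, Y)": ("LIKE", {"I": "X", "II": "Y"}),
}

def sem_to_dsynt(pred, args):
    """Apply the rule for `pred`, binding semantic arguments to
    deep-syntactic actant slots."""
    key = f"{pred}({', '.join(sorted(args))})"
    lexeme, slots = RULES[key]
    return {"lexeme": lexeme,
            "actants": {role: args[var] for role, var in slots.items()}}

print(sem_to_dsynt("like", {"X": "Mary", "Y": "music"}))
# {'lexeme': 'LIKE', 'actants': {'I': 'Mary', 'II': 'music'}}
```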
This is a down-to-earth, 'how to do it' textbook on the making of dictionaries. Written by professional lexicographers with over seventy years' experience between them, the book presents a step-by-step course for the training of lexicographers in all settings, including publishing houses, colleges, and universities world-wide, and for the teaching of lexicography as an academic discipline. It takes readers through the processes of designing, collecting, and annotating a corpus of texts; shows how to analyse the data in order to extract the relevant information; and demonstrates how these findings are drawn together in the semantic, grammatical, and pedagogic components that make up an entry. The authors explain the relevance and application of recent linguistic theories, such as prototype theory and frame semantics, and describe the role of software in the manipulation of data and the compilation of entries. They provide practical exercises at every stage.
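As a small taste of the corpus analysis stage described above, here is a minimal Python sketch that counts the collocates of a headword in a tokenised corpus, the kind of raw evidence a lexicographer might inspect when drafting an entry. The corpus line and window size are invented for illustration.

```python
from collections import Counter

def collocates(tokens, headword, window=2):
    """Count words occurring within `window` tokens of `headword`."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == headword:
            lo, hi = max(0, i - window), i + window + 1
            counts.update(t for t in tokens[lo:hi] if t != headword)
    return counts

corpus = "the dictionary entry lists each dictionary sense".split()
print(collocates(corpus, "dictionary").most_common(3))
# [('lists', 2), ('the', 1), ('entry', 1)]
```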
This important contribution to the Minimalist Program offers a comprehensive theory of locality and new insights into phrase structure and syntactic cartography. It unifies central components of the grammar and increases the symmetry in syntax. Its central hypothesis has broad empirical application and at the same time reinforces the central premise of minimalism that language is an optimal system.
You may like...
Foundation Models for Natural Language… by Gerhard Paaß, Sven Giesselbach (Hardcover)
Recent Developments in Fuzzy Logic and… by Shahnaz N. Shahbazova, Michio Sugeno, … (Hardcover), R6,468 (Discovery Miles 64 680)
The Oxford Handbook of Information… by Caroline Fery, Shinichiro Ishihara (Hardcover), R4,740 (Discovery Miles 47 400)
Linguistic Inquiries into Donald… by Ulrike Schneider, Matthias Eitelmann (Hardcover), R4,138 (Discovery Miles 41 380)