What is the lexicon, what does it contain, and how is it structured? What principles determine the functioning of the lexicon as a component of natural language grammar? What role does lexical information play in linguistic theory? This accessible introduction aims to answer these questions, and explores the relation of the lexicon to grammar as a whole. It includes a critical overview of major theoretical frameworks, and puts forward a unified treatment of lexical structure and design. The text can be used for introductory and advanced courses, and for courses that touch upon different aspects of the lexicon, such as lexical semantics, lexicography, syntax, general linguistics, computational lexicology and ontology design. The book provides students with a set of tools for working with lexical data for all kinds of purposes, and includes an abundance of exercises and in-class activities designed to ensure that students engage actively with the content and acquire the knowledge and skills they need.
Natural language is easy for people and hard for machines. For two generations, the tantalizing goal has been to get computers to handle human languages in ways that will be compelling and useful to people. Obstacles are many and legendary. Natural Language Processing: The PLNLP Approach describes one group's decade of research in pursuit of that goal. A very broad coverage NLP system, including a programming language (PLNLP), development tools, and analysis and synthesis components, was developed and incorporated into a variety of well-known practical applications, ranging from text critiquing (CRITIQUE) to machine translation (e.g. SHALT). This book represents the first published collection of papers describing the system and how it has been used. Twenty-six authors from nine countries contributed to this volume. Natural language analysis, in the PLNLP approach, is done in six stages that move smoothly from syntax through semantics into discourse. The initial syntactic sketch is provided by an Augmented Phrase Structure Grammar (APSG) that uses exclusively binary rules and aims to produce some reasonable analysis for any input string. Its `approximate' analysis passes to the reassignment component, which takes the default syntactic attachments and adjusts them, using semantic information obtained by parsing definitions and example sentences from machine-readable dictionaries. This technique is an example of one facet of the PLNLP approach: the use of natural language itself as a knowledge representation language -- an innovation that permits a wide variety of online text materials to be exploited as sources of semantic information. The next stage computes the intrasentential argument structure and resolves all references, both NP- and VP-anaphora, that can be treated at this point in the processing.
Subsequently, additional components, currently not so well developed as the earlier ones, handle the further disambiguation of word senses, the normalization of paraphrases, and the construction of a paragraph (discourse) model by joining sentential semantic graphs. Natural Language Processing: The PLNLP Approach acquaints the reader with the theory and application of a working, real-world, domain-free NLP system, and attempts to bridge the gap between computational and theoretical models of linguistic structure. It provides a valuable resource for students, teachers, and researchers in the areas of computational linguistics, natural language processing, artificial intelligence, and information science.
This handbook presents an overview of the phenomenon of reference - the ability to refer to and pick out entities - which is an essential part of human language and cognition. In the volume's 21 chapters, international experts in the field offer a critical account of all aspects of reference from a range of theoretical perspectives. Chapters in the first part of the book are concerned with basic questions related to different types of referring expression and their interpretation. They address questions about the role of the speaker - including speaker intentions - and of the addressee, as well as the role played by the semantics of the linguistic forms themselves in establishing reference. This part also explores the nature of such concepts as definite and indefinite reference and specificity, and the conditions under which reference may fail. The second part of the volume looks at implications and applications, with chapters covering such topics as the acquisition of reference by children, the processing of reference both in the human brain and by machines. The volume will be of interest to linguists in a wide range of subfields, including semantics, pragmatics, computational linguistics, and psycho- and neurolinguistics, as well as scholars in related fields such as philosophy and computer science.
This white paper is part of a series that promotes knowledge about language technology and its potential. It addresses educators, journalists, politicians, language communities and others. The availability and use of language technology in Europe varies between languages. Consequently, the actions that are required to further support research and development of language technologies also differ for each language. The required actions depend on many factors, such as the complexity of a given language and the size of its community. META-NET, a Network of Excellence funded by the European Commission, has conducted an analysis of current language resources and technologies. This analysis focused on the 23 official European languages as well as other important national and regional languages in Europe. The results of this analysis suggest that there are many significant research gaps for each language. A more detailed expert analysis and assessment of the current situation will help maximise the impact of additional research and minimise any risks. META-NET consists of 54 research centres from 33 countries that are working with stakeholders from commercial businesses, government agencies, industry, research organisations, software companies, technology providers and European universities. Together, they are creating a common technology vision while developing a strategic research agenda that shows how language technology applications can address any research gaps by 2020.
This book provides a state-of-the-art introduction to categorial grammar, a type of formal grammar which analyzes expressions as functions or according to a function-argument relationship. The book's focus is on linguistic, computational, and psycholinguistic aspects of logical categorial grammar, i.e. enriched Lambek Calculus. Glyn Morrill opens with the history and notation of Lambek Calculus and its application to syntax, semantics, and processing. Successive chapters extend the grammar to a number of significant syntactic and semantic properties of natural language. The final part applies Morrill's account to several current issues in processing and parsing, considered from both a psychological and a computational perspective. The book offers a rigorous and thoughtful study of one of the main lines of research in the formal and mathematical theory of grammar, and will be suitable for students of linguistics and cognitive science from advanced undergraduate level upwards.
The use of literature in second language teaching has been advocated for a number of years, yet despite this there have only been a limited number of studies which have sought to investigate its effects. Fewer still have focused on its potential effects as a model of spoken language or as a vehicle to develop speaking skills. Drawing upon multiple research studies, this volume fills that gap to explore how literature is used to develop speaking skills in second language learners. The volume is divided into two sections: literature and spoken language and literature and speaking skills. The first section focuses on studies exploring the use of literature to raise awareness of spoken language features, whilst the second investigates its potential as a vehicle to develop speaking skills. Each section contains studies with different designs and in various contexts including China, Japan and the UK. The research designs used mean that the chapters contain clear implications for classroom pedagogy and research in different contexts.
This book collects and introduces some of the best and most useful
work in practical lexicography. It has been designed as a resource
for students and scholars of lexicography and lexicology and to be
an essential reference for professional lexicographers. It focusses
on central issues in the field and covers topics hotly debated in
lexicography circles. After a full contextual introduction Thierry
Fontenelle divides the book into twelve parts - theoretical
perspectives, corpus design, lexicographical evidence, word senses
and polysemy, collocations and idioms, definitions, examples,
grammar and usage, bilingual lexicography, tools and methods,
semantic networks, and how dictionaries are used. The book is fully
referenced and indexed.
This is a down-to-earth, 'how to do it' textbook on the making of
dictionaries. Written by professional lexicographers with over
seventy years' experience between them, the book presents a
step-by-step course for the training of lexicographers in all
settings, including publishing houses, colleges, and universities
world-wide, and for the teaching of lexicography as an academic
discipline. It takes readers through the processes of designing,
collecting, and annotating a corpus of texts; shows how to analyse
the data in order to extract the relevant information; and
demonstrates how these findings are drawn together in the semantic,
grammatical, and pedagogic components that make up an entry. The
authors explain the relevance and application of recent linguistic
theories, such as prototype theory and frame semantics, and
describe the role of software in the manipulation of data and the
compilation of entries. They provide practical exercises at every
stage.
This important contribution to the Minimalist Program offers a
comprehensive theory of locality and new insights into phrase
structure and syntactic cartography. It unifies central components
of the grammar and increases the symmetry in syntax. Its central
hypothesis has broad empirical application and at the same time
reinforces the central premise of minimalism that language is an
optimal system.
This book explores the relationship between online second language (L2) communicative activities and formal language learning. It provides empirical evidence of the scale of L2 English use online, investigating the forms most commonly used, the activities likely to cause discomfort and the challenges experienced by users, and takes a critical approach to the nature of language online beyond the paradigms of 'written' versus 'spoken'. The author explores the possibilities for language teaching practices that engage with and integrate learners' L2 English online use, not only to support it but to use it as input for classroom learning and to enhance and exploit its incidental learning outcomes. This book will be of interest to postgraduate students and researchers interested in computer-mediated communication, online discourse and Activity Theory, while language teachers will find the practical ideas for lesson content invaluable as they strive to create a successful language learning community.
One of the challenges brought on by the digital revolution of recent decades is extracting the information carried by texts in order to access their content. The processing of named entities remains a very active area of research, which plays a central role in natural language processing technologies and their applications. Named entity recognition, a tool used in information extraction tasks, focuses on recognizing small pieces of information in order to extract information on a larger scale. The authors use written text and examples in French and English to present the elements readers need to familiarize themselves with the main concepts related to named entities, to discover the problems associated with them, and to learn the methods available in practice for solving these issues.
When we speak, we configure the vocal tract which shapes the visible motions of the face and the patterning of the audible speech acoustics. Similarly, we use these visible and audible behaviors to perceive speech. This book showcases a broad range of research investigating how these two types of signals are used in spoken communication, how they interact, and how they can be used to enhance the realistic synthesis and recognition of audible and visible speech. The volume begins by addressing two important questions about human audiovisual performance: how auditory and visual signals combine to access the mental lexicon and where in the brain this and related processes take place. It then turns to the production and perception of multimodal speech and how structures are coordinated within and across the two modalities. Finally, the book presents overviews and recent developments in machine-based speech recognition and synthesis of AV speech.
This open access book provides a comprehensive overview of the state of the art in research and applications of Foundation Models and is intended for readers familiar with basic Natural Language Processing (NLP) concepts. Over recent years, a revolutionary new paradigm has been developed for training models for NLP. These models are first pre-trained on large collections of text documents to acquire general syntactic knowledge and semantic information. Then, they are fine-tuned for specific tasks, which they can often solve with superhuman accuracy. When the models are large enough, they can be instructed by prompts to solve new tasks without any fine-tuning. Moreover, they can be applied to a wide range of different media and problem domains, ranging from image and video processing to robot control learning. Because they provide a blueprint for solving many tasks in artificial intelligence, they have been called Foundation Models. After a brief introduction to basic NLP models, the main pre-trained language models (BERT, GPT, and the sequence-to-sequence transformer) are described, as well as the concepts of self-attention and context-sensitive embedding. Then, different approaches to improving these models are discussed, such as expanding the pre-training criteria, increasing the length of input texts, or including extra knowledge. An overview of the best-performing models for about twenty application areas is then presented, e.g., question answering, translation, story generation, dialog systems, generating images from text, etc. For each application area, the strengths and weaknesses of current models are discussed, and an outlook on further developments is given. In addition, links are provided to freely available program code. A concluding chapter summarizes the economic opportunities, mitigation of risks, and potential developments of AI.
Dynamical Grammar explores the consequences for language acquisition, language evolution, and linguistic theory of taking the underlying architecture of the language faculty to be that of a complex adaptive dynamical system. It contains the first results of a new and complex model of language acquisition which the authors have developed to measure how far language input is reflected in language output and thereby get a better idea of just how far the human language faculty is hard-wired.
This book adopts a corpus-based critical discourse analysis approach and examines a corpus of newspaper articles from Pakistani and Indian publications to gain comparative insights into the ideological construction of China's Belt and Road Initiative (BRI) and the China-Pakistan Economic Corridor (CPEC) within news discourses. It contributes to the body of work on perceptions of the BRI in the English newspapers of India and Pakistan. A multi-billion-dollar project under the BRI, also known as "One Belt One Road" (OBOR), the CPEC symbolizes a vision for regional revival under China's economic leadership and clout. Propelled by the Chinese Premier's dream to revive the Chinese economy as well as to restructure and catalyze infrastructural development in Asia, the BRI aims to connect Asia with Europe, Africa, and the Middle Eastern states via land and sea routes.
The two-volume set LNCS 13396 and 13397 constitutes revised selected papers from the CICLing 2018 conference, which took place in Hanoi, Vietnam, in March 2018. The total of 67 papers presented in the two volumes was carefully reviewed and selected from 181 submissions. The conference focused on topics such as computational linguistics and intelligent text and speech processing. The papers are organized in the following topical sections: general; author profiling and authorship attribution; social network analysis; information retrieval; information extraction; lexical resources; machine translation; morphology and syntax; semantics and text similarity; sentiment analysis; syntax and parsing; text categorization and clustering; text generation; and text mining.
A landmark in linguistics and cognitive science. Ray Jackendoff proposes a new holistic theory of the relation between the sounds, structure, and meaning of language and their relation to mind and brain. Foundations of Language exhibits the most fundamental new thinking in linguistics since Noam Chomsky's Aspects of the Theory of Syntax in 1965 -- yet is readable, stylish, and accessible to a wide readership. Along the way it provides new insights on the evolution of language, thought, and communication.
This case study-based textbook in multivariate analysis for advanced students in the humanities emphasizes descriptive, exploratory analyses of various types of datasets from a wide range of sub-disciplines, promoting the use of multivariate analysis and illustrating its wide applicability. Fields featured include, but are not limited to, historical agriculture, arts (music and painting), theology, and stylometrics (authorship issues). Most analyses are based on existing data, earlier analysed in published peer-reviewed papers. Four preliminary methodological and statistical chapters provide general technical background to the case studies. The multivariate statistical methods presented and illustrated include data inspection, several varieties of principal component analysis, correspondence analysis, multidimensional scaling, cluster analysis, regression analysis, discriminant analysis, and three-mode analysis. The bulk of the text is taken up by 14 case studies that lean heavily on graphical representations of statistical information such as biplots, using descriptive statistical techniques to support substantive conclusions. Each study features a description of the substantive background to the data, followed by discussion of appropriate multivariate techniques, and detailed results interpreted through graphical illustrations. Each study is concluded with a conceptual summary. Datasets in SPSS are included online.
In the not so distant future, we can expect a world where humans and robots coexist and interact with each other. For this to occur, we need to understand human traits, such as seeing, hearing, thinking, and speaking, and institute these traits in robots. The most essential capability for robots to achieve is integrative multimedia understanding (IMU), which occurs naturally in humans. It allows us to assimilate pieces of information expressed through different modes such as speech, pictures, and gestures. The book describes how robots acquire traits like natural language understanding (NLU) as the central part of IMU. Mental image directed semantic theory (MIDST) is its core, and is based on the hypothesis that NLU is essentially the processing of the mental images associated with natural language expressions, namely, mental-image based understanding (MBU). MIDST is intended to model omnisensory mental images in humans and to provide a knowledge representation system for the integrative management of knowledge grounded in the cognitive mechanisms of intelligent entities such as humans and robots. It is built on a mental image model visualized as 'Loci in Attribute Spaces' and its description language Lmd (mental image description language), which extends predicate logic with a systematic scheme for symbol grounding. This language works as an interlingua among various kinds of information media, and has been applied to several versions of the intelligent system IMAGES (interlingual understanding model aiming at general system). Its latest version, the conversation management system (CMS), simulates MBU and comprehends the user's intention through dialogue to find and solve problems, finally providing a response in text or animation. The book is aimed at researchers and students interested in artificial intelligence, robotics, and cognitive science.
Based on philosophical considerations, the methodology will also appeal to readers in linguistics, psychology, ontology, geography, and cartography.
Key features:
- Describes the methodology for providing robots with a human-like capability for natural language understanding (NLU) as the central part of IMU
- Relates the methodology to linguistics, psychology, ontology, geography, and cartography
- Examines current trends in machine translation
The two-volume set LNCS 13451 and 13452 constitutes revised selected papers from the CICLing 2019 conference, which took place in La Rochelle, France, in April 2019. The total of 95 papers presented in the two volumes was carefully reviewed and selected from 335 submissions. The book also contains 3 invited papers. The papers are organized in the following topical sections: general; information extraction; information retrieval; language modeling; lexical resources; machine translation; morphology, syntax, and parsing; named entity recognition; semantics and text similarity; sentiment analysis; speech processing; text categorization; text generation; and text mining.
This open access book introduces Vector semantics, which links the formal theory of word vectors to the cognitive theory of linguistics. The computational linguists and deep learning researchers who developed word vectors have relied primarily on the ever-increasing availability of large corpora and of computers with highly parallel GPU and TPU compute engines, and their focus is on endowing computers with natural language capabilities for practical applications such as machine translation or question answering. Cognitive linguists investigate natural language from the perspective of human cognition, the relation between language and thought, and questions about conceptual universals, relying primarily on in-depth investigation of language in use. Although these two schools both have 'linguistics' in their name, so far there has been very limited communication between them, as their historical origins, data collection methods, and conceptual apparatuses are quite different. Vector semantics bridges the gap by presenting a formal theory, cast in terms of linear polytopes, that generalizes both word vectors and conceptual structures, by treating each dictionary definition as an equation, and the entire lexicon as a set of equations mutually constraining all meanings.
The book covers theoretical work, approaches, applications, and techniques for computational models of information, language, and reasoning. Computational and technological developments that incorporate natural language are proliferating. Adequate coverage of natural language processing in artificial intelligence poses problems that call for specialized computational approaches and algorithms. Many difficulties are due to ambiguities in natural language and the dependency of interpretations on contexts and agents. Classical approaches proceed with relevant updates, and new developments emerge in theories of formal and natural languages, computational models of information and reasoning, and related computerized applications. The focus is on the computational processing of human language and relevant medium languages, which can be theoretically formal, or designed for the programming and specification of computational systems. The goal is to promote intelligent natural language processing, along with models of computation, language, reasoning, and other cognitive processes.
This open access book provides an in-depth description of the EU project European Language Grid (ELG). Its motivation lies in the fact that Europe is a multilingual society with 24 official European Union Member State languages and dozens of additional languages including regional and minority languages. The only meaningful way to enable multilingualism and to benefit from this rich linguistic heritage is through Language Technologies (LT) including Natural Language Processing (NLP), Natural Language Understanding (NLU), Speech Technologies and language-centric Artificial Intelligence (AI) applications. The European Language Grid provides a single umbrella platform for the European LT community, including research and industry, effectively functioning as a virtual home, marketplace, showroom, and deployment centre for all services, tools, resources, products and organisations active in the field. Today the ELG cloud platform already offers access to more than 13,000 language processing tools and language resources. It enables all stakeholders to deposit, upload and deploy their technologies and datasets. The platform also supports the long-term objective of establishing digital language equality in Europe by 2030 - to create a situation in which all European languages enjoy equal technological support. This is the very first book dedicated to Language Technology and NLP platforms. Cloud technology has only recently matured enough to make the development of a platform like ELG feasible on a larger scale. The book comprehensively describes the results of the ELG project. Following an introduction, the content is divided into four main parts: (I) ELG Cloud Platform; (II) ELG Inventory of Technologies and Resources; (III) ELG Community and Initiative; and (IV) ELG Open Calls and Pilot Projects.
Computational semantics is the art and science of computing meaning in natural language. The meaning of a sentence is derived from the meanings of the individual words in it, and this process can be made so precise that it can be implemented on a computer. Designed for students of linguistics, computer science, logic and philosophy, this comprehensive text shows how to compute meaning using the functional programming language Haskell. It deals with both denotational meaning (where meaning comes from knowing the conditions of truth in situations), and operational meaning (where meaning is an instruction for performing cognitive action). Including a discussion of recent developments in logic, it will be invaluable to linguistics students wanting to apply logic to their studies, logic students wishing to learn how their subject can be applied to linguistics, and functional programmers interested in natural language processing as a new application area.