Welcome to Loot.co.za!
This collection of papers takes linguists to the leading edge of techniques in generative lexicon (GL) theory, the linguistic composition methodology that arose from the imperative to provide a compositional semantics for the contextual modifications in meaning that emerge in real linguistic usage. Today's growing shift towards distributed compositional analyses evinces the applicability of GL theory, and the contributions to this volume, presented at three international workshops (GL-2003, GL-2005 and GL-2007), address the relationship between compositionality in language and the mechanisms of selection in grammar that are necessary to maintain this property. The core unresolved issues in compositionality, relating to the interpretation of context and the mechanisms of selection, are treated from varying perspectives within GL theory, including its basic theoretical mechanisms and its analytical viewpoint on linguistic phenomena.
This book is a collection of papers by leading researchers in computational semantics. It presents a state-of-the-art overview of recent and current research in computational semantics, including descriptions of new methods for constructing and improving resources for semantic computation, such as WordNet, VerbNet, and semantically annotated corpora. It also presents new statistical methods in semantic computation, such as the application of distributional semantics in the compositional calculation of sentence meanings. Computing the meaning of sentences, texts, and spoken or texted dialogue is the ultimate challenge in natural language processing, and the key to a wide range of exciting applications. The breadth and depth of coverage of this book make it suitable as a reference and overview of the state of the field for researchers in Computational Linguistics, Semantics, Computer Science, Cognitive Science, and Artificial Intelligence.
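As a toy illustration of distributional semantics in the compositional calculation of sentence meanings, the sketch below composes sentence vectors by simple addition over invented word vectors and compares them with cosine similarity; additive composition is a common baseline in this literature, not necessarily the specific methods presented in this collection, and the vectors here are made up for illustration.

```python
import numpy as np

# Toy distributional vectors (illustrative values, not from a real corpus).
vectors = {
    "dog":   np.array([0.9, 0.1, 0.3]),
    "barks": np.array([0.8, 0.2, 0.1]),
    "stock": np.array([0.1, 0.9, 0.2]),
    "rises": np.array([0.2, 0.8, 0.4]),
}

def compose(words):
    """Additive composition: the sentence vector is the sum of its word vectors."""
    return np.sum([vectors[w] for w in words], axis=0)

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

s1 = compose(["dog", "barks"])
s2 = compose(["stock", "rises"])
# Sentences built from related words end up closer than unrelated ones.
print(cosine(s1, s1))  # identical sentences: 1.0
print(cosine(s1, s2))  # different topics: noticeably lower
```

Richer compositional models replace the sum with learned operations (tensor products, neural encoders), but the distance-based comparison of sentence vectors stays the same.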
This book is the first of its kind in creating a snapshot of the state of the art in the use of technology in translating creative texts. The book gives an overview of a wide range of subjects that are developing rapidly and are likely to become substantially more important to both researchers and practitioners over the next few years. It includes work by researchers at all career stages, with strong representation by early and mid-career researchers who are likely to go on to shape the field in the coming years. It addresses active debates in the field (i.e. whether technology can/should be used in the translation of creative texts) from the perspective of data, rather than conjecture. Events and publications in the same field are increasing in number rapidly, along with the number of researchers expressing an interest in the topic. Therefore, this book is well placed to become influential in this development.
This book describes effective methods for automatically analyzing a sentence, based on the syntactic and semantic characteristics of the elements that form it. To tackle ambiguities, the authors use selectional preferences (SP), which measure how well two words fit together semantically in a sentence. Today, many disciplines require automatic text analysis based on the syntactic and semantic characteristics of language and as such several techniques for parsing sentences have been proposed. Which is better? In this book the authors begin with simple heuristics before moving on to more complex methods that identify nouns and verbs and then aggregate modifiers, and lastly discuss methods that can handle complex subordinate and relative clauses. During this process, several ambiguities arise. SP are commonly determined on the basis of the association between a pair of words. However, in many cases, SP depend on more words. For example, something (such as grass) may be edible, depending on who is eating it (a cow?). Moreover, things such as popcorn are usually eaten at the movies, and not in a restaurant. The authors deal with these phenomena from different points of view.
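As a rough illustration of scoring a selectional preference from the association between a pair of words, the sketch below computes pointwise mutual information over invented verb-object counts; both the data and the PMI choice are assumptions for illustration, not the authors' actual method, which as the blurb notes also handles preferences that depend on more than two words.

```python
import math
from collections import Counter

# Hypothetical verb-object co-occurrence counts (toy data, not from the book).
pairs = [("eat", "grass"), ("eat", "popcorn"), ("eat", "popcorn"),
         ("mow", "grass"), ("mow", "grass"), ("watch", "movie"),
         ("watch", "movie"), ("eat", "movie")]

pair_counts = Counter(pairs)
verb_counts = Counter(v for v, _ in pairs)
noun_counts = Counter(n for _, n in pairs)
total = len(pairs)

def pmi(verb, noun):
    """Pointwise mutual information: how much more often the pair occurs
    together than expected if verb and noun were independent."""
    p_pair = pair_counts[(verb, noun)] / total
    if p_pair == 0:
        return float("-inf")
    p_v = verb_counts[verb] / total
    p_n = noun_counts[noun] / total
    return math.log2(p_pair / (p_v * p_n))

print(pmi("mow", "grass"))  # strong fit: positive score
print(pmi("eat", "movie"))  # weak fit: negative score
```

A parser can use such scores to rank attachment alternatives, preferring the reading in which the words fit together best semantically.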
A comprehensive reference book with detailed explanations of every algorithm and technique related to transformers, covering 60+ transformer architectures. The book shows how to apply transformer techniques to speech, text, time series, and computer vision, with practical tips and tricks for each architecture and how to use it in the real world. It includes hands-on case studies and code snippets for theory and practical real-world analysis using the relevant tools and libraries, all ready to run in Google Colab.
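As a flavour of the operation shared by the many architectures such a book covers, here is a minimal NumPy sketch of scaled dot-product attention; it is illustrative only and not taken from the book's own code snippets.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V -- the core
    operation all transformer variants build on."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # token-to-token affinities
    weights = softmax(scores, axis=-1)   # each row is a distribution
    return weights @ V, weights

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                      # 4 tokens, model dim 8
out, w = scaled_dot_product_attention(x, x, x)   # self-attention: Q = K = V
print(out.shape)        # (4, 8): one contextualised vector per token
print(w.sum(axis=-1))   # each row of attention weights sums to 1
```

Real transformer layers add learned projections for Q, K, and V, multiple heads, and feed-forward sublayers around this core.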
This book is about the role of knowledge in information systems. Knowledge is usually articulated and exchanged through human language(s). In this sense, language can be seen as the most natural vehicle to convey our concepts, whose meanings are usually intermingled, grouped and organized according to shared criteria, from simple perceptions ("every tree has a stem") and common sense ("unsupported objects fall") to complex social conventions ("a tax is a fee charged by a government on a product, income, or activity"). But what is natural for a human being turns out to be extremely difficult for machines: machines need to be instilled with knowledge and suitably equipped with logical and statistical algorithms to reason over it. Computers can't represent the external world and communicate their representations as effectively as humans do. Ontologies and NLP have been invented to face this problem: in particular, integrating ontologies with (possibly multi-lingual) computational lexical resources is an essential requirement to make human meanings understandable by machines. This book explores the advancements in this integration, from the most recent steps in building the necessary infrastructure, i.e. the Semantic Web, to the different knowledge contents that can be analyzed, encoded and transferred (multimedia, emotions, events, etc.) through it. The work aims at presenting the progress in the field of integrating ontologies and lexicons: together, they constitute the essential technology for adequately representing, eliciting and exchanging knowledge content in information systems, web services, text processing and several other domains of application.
The computational approach of this book is aimed at simulating the human ability to understand various kinds of phrases with a novel metaphoric component. That is, interpretations of metaphor as literal paraphrases are based on literal meanings of the metaphorically used words. This method distinguishes itself from statistical approaches, which in general do not account for novel usages, and from efforts directed at metaphor constrained to one type of phrase or to a single topic domain. The more interesting and novel metaphors appear to be based on concepts generally represented as nouns, since such concepts can be understood from a variety of perspectives. The core of the process of interpreting nominal concepts is to represent them in such a way that readers or hearers can infer which aspect(s) of the nominal concept is likely to be intended to be applied to its interpretation. These aspects are defined in terms of verbal and adjectival predicates. A section on the representation and processing of part-sentence verbal metaphor will therefore also serve as preparation for the representation of salient aspects of metaphorically used nouns. As the ability to process metaphorically used verbs and nouns facilitates the interpretation of more complex tropes, computational analysis of two other kinds of metaphorically based expressions are outlined: metaphoric compound nouns, such as "idea factory" and, together with the representation of inferences, modified metaphoric idioms, such as "Put the cat back into the bag".
Language ability is a unique human trait, and it is indispensable throughout the human life cycle. Blockchain, on the other hand, is an innovation that will transform production relationships, change collaboration models and alter the distribution of benefits between people. Language and blockchain seem to have no intersection, yet they are bewilderingly similar in certain ways. When Language Meets Blockchain leads us on an exploratory journey to discover the possibilities of integrating blockchain technology with the language services industry. The author discusses how blockchain technology enables translators to realise their full potential and describes how the role of language can be elevated from a general tool to a driving force through a new concept called Cross-Linguistic Capability. This is a concept that will have very intriguing and beneficial implications for global economic activities. It is demonstrated that language is more than just a tool; it is also a resource and a form of capability. This presents opportunities for cross-linguistic and cross-cultural communications in the era of blockchain, enabling the convergence of linguistic capability with blockchain technology and artificial intelligence. The book's perspective on how the language services industry could adapt to the times and embrace blockchain technology for industrial transformation is both forward-looking and value-enhancing.
The last decades have witnessed a renewed interest in near-synonymy. In particular, recent distributional corpus-based approaches used for semantic analysis have successfully uncovered subtle distinctions in meaning between near-synonyms. However, most studies have dealt with the semantic structure of sets of near-synonyms from a synchronic perspective, while their diachronic evolution generally has been neglected. Against this backdrop, the aim of this book is to examine five adjectival near-synonyms in the history of American English from the understudied semantic domain of SMELL: fragrant, perfumed, scented, sweet-scented, and sweet-smelling. Their distribution is analyzed across a wide range of contexts, including semantic, morphosyntactic, and stylistic ones, since distributional patterns of this type serve as a proxy for semantic (dis)similarity. The data is submitted to various univariate and multivariate statistical techniques, making it possible to uncover fine-grained (dis)similarities among the near-synonyms, as well as possible changes in their prototypical structures. The book sheds valuable light on the diachronic development of lexical near-synonyms, a dimension that has up to now been relatively disregarded.
The Routledge Handbook of Corpus Linguistics 2e provides an updated overview of a dynamic and rapidly growing area with a widely applied methodology. Over a decade on from the first edition of the Handbook, this collection of 47 chapters from experts in key areas offers a comprehensive introduction to both the development and use of corpora as well as their ever-evolving applications to other areas, such as digital humanities, sociolinguistics, stylistics, translation studies, materials design, language teaching and teacher development, media discourse, discourse analysis, forensic linguistics, second language acquisition and testing. The new edition updates all core chapters and includes new chapters on corpus linguistics and statistics, digital humanities, translation, phonetics and phonology, second language acquisition, social media and theoretical perspectives. Chapters provide annotated further reading lists and step-by-step guides as well as detailed overviews across a wide range of themes. The Handbook also includes a wealth of case studies that draw on some of the many new corpora and corpus tools that have emerged in the last decade. Organised across four themes, moving from the basic start-up topics such as corpus building and design to analysis, application and reflection, this second edition remains a crucial point of reference for advanced undergraduates, postgraduates and scholars in applied linguistics.
This book focuses on next-generation optical networks as well as mobile communication technologies. The reader will find chapters on Cognitive Optical Networks, 5G Cognitive Wireless, LTE, Data Analysis and Natural Language Processing. It also presents a comprehensive view of the enhancements and requirements foreseen for Machine Type Communication. Moreover, some data analysis techniques and Brazilian Portuguese natural language processing technologies are also described here.
This innovative book develops a formal computational theory of writing systems and relates it to psycholinguistic results. Drawing on case studies of writing systems around the world, it offers specific proposals about the linguistic objects that are represented by orthographic elements and the formal constraints that hold of the mapping relation between them. Based on the insights gained, it posits a new taxonomy of writing systems. The book will be of interest to students and researchers in theoretical and computational linguistics, the psycholinguistics of reading and writing, and speech technology.
In today's unsafe and increasingly wired world cryptology plays a vital role in protecting communication channels, databases, and software from unwanted intruders. This revised and extended third edition of the classic reference work on cryptology now contains many new technical and biographical details. The first part treats secret codes and their uses - cryptography. The second part deals with the process of covertly decrypting a secret code - cryptanalysis, where particular advice on assessing methods is given. The book presupposes only elementary mathematical knowledge. Spiced with a wealth of exciting, amusing, and sometimes personal stories from the history of cryptology, it will also interest general readers.
Posthumanism and Deconstructing Arguments: Corpora and Digitally-driven Critical Analysis presents a new and practical approach in Critical Discourse Studies. Providing a data-driven and ethically-based method for the examination of arguments in the public sphere, this ground-breaking book: Highlights how the reader can evaluate arguments from points of view other than their own; Demonstrates how digital tools can be used to generate 'ethical subjectivities' from large numbers of dissenting voices on the World Wide Web; Draws on ideas from posthumanist philosophy as well as from Jacques Derrida, Gilles Deleuze and Felix Guattari for theorising these subjectivities; Showcases a critical deconstructive approach, using different corpus linguistic programs such as AntConc, WMatrix and Sketch Engine. Posthumanism and Deconstructing Arguments is essential reading for lecturers and researchers with an interest in critical discourse studies, critical thinking, corpus linguistics and digital humanities.
Universal codes efficiently compress sequences generated by stationary and ergodic sources with unknown statistics, and they were originally designed for lossless data compression. In the meantime, it was realized that they can be used for solving important problems of prediction and statistical analysis of time series, and this book describes recent results in this area. The first chapter introduces and describes the application of universal codes to prediction and the statistical analysis of time series; the second chapter describes applications of selected statistical methods to cryptography, including attacks on block ciphers; and the third chapter describes a homogeneity test used to determine authorship of literary texts. The book will be useful for researchers and advanced students in information theory, mathematical statistics, time-series analysis, and cryptography. It is assumed that the reader has some grounding in statistics and in information theory.
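As an illustration of the idea that a universal compressor can double as a statistical tool, e.g. for homogeneity testing or authorship attribution, the sketch below uses the normalized compression distance with zlib; the texts are invented and this particular distance is a well-known compression-based technique, not necessarily the specific tests described in the book.

```python
import zlib

def clen(s: str) -> int:
    """Compressed length in bytes at zlib's highest level."""
    return len(zlib.compress(s.encode("utf-8"), 9))

def ncd(x: str, y: str) -> float:
    """Normalized compression distance: small when the two texts are
    statistically similar, because each helps the compressor model
    the other when they are concatenated."""
    cx, cy, cxy = clen(x), clen(y), clen(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Two stylistically similar texts and one with very different statistics.
a = "the cat sat on the mat and the cat sat again " * 20
b = "the cat sat on the mat and then slept on the mat " * 20
z = "q9 x7 zz kj qp wv mn bb vc xz lk jh gf ds ap " * 20

print(ncd(a, b))  # similar statistics: smaller distance
print(ncd(a, z))  # dissimilar statistics: larger distance
```

The same comparison, applied to texts of known and disputed authorship, underlies compression-based homogeneity tests of the kind the third chapter discusses.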
Mathematical Linguistics introduces the mathematical foundations of linguistics to computer scientists, engineers, and mathematicians interested in natural language processing. The book presents linguistics as a cumulative body of knowledge from the ground up: no prior knowledge of linguistics is assumed. Previous textbooks in this area concentrate on syntax and semantics - this comprehensive volume covers an extremely rich array of topics also including phonology and morphology, probabilistic approaches, complexity, learnability, and the analysis of speech and handwriting. As the first textbook of its kind, this book is useful for those in information science (information retrieval and extraction, search engines) and in natural language technologies (speech recognition, optical character recognition, HCI). Exercises suitable for the advanced reader are included, as well as suggestions for further reading and an extensive bibliography.
Handbook of Artificial Intelligence in Biomedical Engineering focuses on recent AI technologies and applications that provide some very promising solutions and enhanced technology in the biomedical field. Recent advancements in computational techniques, such as machine learning, the Internet of Things (IoT), and big data, accelerate the deployment of biomedical devices in various healthcare applications. This volume explores how artificial intelligence (AI) can be applied to these expert systems by mimicking the human expert's knowledge in order to predict and monitor health status in real time. The accuracy of AI systems is increasing drastically through machine learning, digitized medical data acquisition, wireless medical data communication, and computing-infrastructure AI approaches, helping to solve complex issues in the biomedical industry and playing a vital role in future healthcare applications. The volume takes a multidisciplinary perspective on employing these new applications in biomedical engineering, exploring the combination of engineering principles with biological knowledge that contributes to the development of revolutionary and life-saving concepts.
This is the first volume that brings together research and practice from academic and industry settings and a combination of human and machine translation evaluation. Its comprehensive collection of papers by leading experts in human and machine translation quality and evaluation who situate current developments and chart future trends fills a clear gap in the literature. This is critical to the successful integration of translation technologies in the industry today, where the lines between human and machine are becoming increasingly blurred by technology: this affects the whole translation landscape, from students and trainers to project managers and professionals, including in-house and freelance translators, as well as, of course, translation scholars and researchers. The editors have broad experience in translation quality evaluation research, including investigations into professional practice with qualitative and quantitative studies, and the contributors are leading experts in their respective fields, providing a unique set of complementary perspectives on human and machine translation quality and evaluation, combining theoretical and applied approaches.
This book provides a wide variety of algorithms and models to integrate linguistic knowledge into Statistical Machine Translation (SMT). It helps advance conventional SMT to linguistically motivated SMT by enhancing the following three essential components: translation, reordering and bracketing models. It also serves the purpose of promoting the in-depth study of the impacts of linguistic knowledge on machine translation. Finally it provides a systematic introduction of Bracketing Transduction Grammar (BTG) based SMT, one of the state-of-the-art SMT formalisms, as well as a case study of linguistically motivated SMT on a BTG-based platform.
The volume discusses a multitude of ways in which CALL can serve to develop and support broadly conceived issues in language education. The individual chapters explore a number of areas in which CALL techniques and tools enhance language instruction. The issues reported on comprise working with mature language learners, developing civic education, ICT affordances for ESP, professional training for translators and interpreters, and crowdsourcing opportunities. Other contributions center on CALL-related resources, CAPT metacompetence and blended-learning paradigms, as well as exploring cultural and linguistic issues in online exchanges.
The application of deep learning methods to problems in natural language processing has generated significant progress across a wide range of natural language processing tasks. For some of these applications, deep learning models now approach or surpass human performance. While the success of this approach has transformed the engineering methods of machine learning in artificial intelligence, the significance of these achievements for the modelling of human learning and representation remains unclear. Deep Learning and Linguistic Representation looks at the application of a variety of deep learning systems to several cognitively interesting NLP tasks. It also considers the extent to which this work illuminates our understanding of the way in which humans acquire and represent linguistic knowledge. Key features: it combines an introduction to deep learning in AI and NLP with current research on deep neural networks in computational linguistics; it is self-contained and suitable for teaching in computer science, AI, and cognitive science courses, assuming no extensive technical training in these areas; and it provides a compact guide to work on state-of-the-art systems that are producing a revolution across a range of difficult natural language tasks.
This book applies linguistic analysis to the poetry of Emeritus Professor Edwin Thumboo, a Singaporean poet and leading figure in Commonwealth literature. The work explores how the poet combines grammar and metaphor to create meaning, making the reader aware of the linguistic resources developed by Thumboo as the basis for his unique technique. The author approaches the poems from a functional linguistic perspective, investigating the multiple layers of meaning and metaphor that go into producing these highly textured, grammatically intricate verbal works of art. The approach is based on the Systemic Functional Theory, which aids the study of how the poet uses language (grammar) to craft his text in a playful way that reflects a love of the language. The multilingual and multicultural experiences of the poet are considered to have contributed to his uniquely creative use of language. This work demonstrates how the Systemic Functional Theory, with its emphasis on exploring the semogenic (meaning-making) power of language, provides the perspective we need to better understand poets' works as intentional acts of meaning. Readers will discover how the works of Edwin Thumboo illustrate well a point made by Barthes, who noted that "Bits of code, formulae, rhythmic models, fragments of social languages, etc. pass into the text and are redistributed within it, for there is always language before and around the text." With a focus on meaning, this functional analysis of poetry offers an insightful look at the linguistic basis of Edwin Thumboo's poetic technique. The work will appeal to scholars with an interest in linguistic analysis and poetry from the Commonwealth and new literature, and it can also be used to support courses on literary stylistics or text linguistics.
This book presents a comprehensive overview of semi-supervised approaches to dependency parsing. Such approaches have become increasingly popular in recent years; one of the main reasons for their success is that they can combine large unlabeled data with relatively small labeled data, and they have shown their advantages in the context of dependency parsing for many languages. Various semi-supervised dependency parsing approaches have been proposed in recent works, utilizing different types of information gleaned from unlabeled data. The book offers readers a comprehensive introduction to these approaches, making it ideally suited as a textbook for advanced undergraduate and graduate students and researchers in the fields of syntactic parsing and natural language processing.
The lexicon is now a major focus of research in computational linguistics and natural language processing (NLP), as more linguistic theories concentrate on the lexicon and as the acquisition of an adequate vocabulary has become the chief bottleneck in developing practical NLP systems. This collection describes techniques of lexical representation within a unification-based framework and their linguistic application, concentrating on the issue of structuring the lexicon using inheritance and defaults. Topics covered include typed feature structures, default unification, lexical rules, multiple inheritance and non-monotonic reasoning. The contributions describe both theoretical results and implemented languages and systems, including DATR, the Stuttgart TFS and ISSCO's ELU. This book arose out of a workshop on default inheritance in the lexicon organized as a part of the Esprit ACQUILEX project on computational lexicography. Besides the contributed papers mentioned above, it contains a detailed description of the ACQUILEX lexical knowledge base (LKB) system and its use in the representation of lexicons extracted semi-automatically from machine-readable dictionaries.
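As a rough illustration of structuring a lexicon with inheritance and defaults, the sketch below implements a toy default-inheritance lookup in plain Python: each entry names a parent type and local features, and lookup walks the hierarchy so that more specific values override inherited defaults. This is a simplification for illustration, not the DATR, TFS, ELU or LKB formalisms described in the book, and the entries are invented.

```python
# Toy lexicon: each entry has a parent type and local feature overrides.
LEXICON = {
    "verb":       {"parent": None,        "features": {"cat": "V", "aux": False}},
    "trans-verb": {"parent": "verb",      "features": {"subcat": ["NP"]}},
    "love":       {"parent": "trans-verb", "features": {"orth": "love"}},
    "be":         {"parent": "verb",      "features": {"orth": "be", "aux": True}},
}

def features(entry: str) -> dict:
    """Resolve an entry's features by default inheritance: collect the
    chain of ancestors, then apply features from the root down so more
    specific types override inherited defaults."""
    chain = []
    node = entry
    while node is not None:
        chain.append(node)
        node = LEXICON[node]["parent"]
    result = {}
    for node in reversed(chain):   # root first, entry last
        result.update(LEXICON[node]["features"])
    return result

print(features("love"))  # inherits cat and aux, adds subcat and orth
print(features("be"))    # overrides the default aux=False
```

The payoff is the same as in the systems the collection describes: shared information is stated once on a general type, and individual entries record only what is exceptional about them.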