This book presents studies involving algorithms in the machine learning paradigms. It discusses a variety of learning problems with diverse applications, including prediction, concept learning, explanation-based learning, case-based (exemplar-based) learning, statistical rule-based learning, feature extraction-based learning, optimization-based learning, quantum-inspired learning, multi-criteria-based learning and hybrid intelligence-based learning.
The computational approach of this book is aimed at simulating the human ability to understand various kinds of phrases with a novel metaphoric component. That is, interpretations of metaphor as literal paraphrases are based on literal meanings of the metaphorically used words. This method distinguishes itself from statistical approaches, which in general do not account for novel usages, and from efforts directed at metaphor constrained to one type of phrase or to a single topic domain. The more interesting and novel metaphors appear to be based on concepts generally represented as nouns, since such concepts can be understood from a variety of perspectives. The core of the process of interpreting nominal concepts is to represent them in such a way that readers or hearers can infer which aspect(s) of the nominal concept is likely to be intended to be applied to its interpretation. These aspects are defined in terms of verbal and adjectival predicates. A section on the representation and processing of part-sentence verbal metaphor will therefore also serve as preparation for the representation of salient aspects of metaphorically used nouns. As the ability to process metaphorically used verbs and nouns facilitates the interpretation of more complex tropes, the computational analysis of two other kinds of metaphorically based expressions is outlined: metaphoric compound nouns, such as "idea factory", and, together with the representation of inferences, modified metaphoric idioms, such as "Put the cat back into the bag".
This book is for developers who are looking for an overview of basic concepts in Natural Language Processing. It casts a wide net of techniques to help developers who have a range of technical backgrounds. Numerous code samples and listings are included to support myriad topics. The first chapter shows you various details of managing data that are relevant for NLP. The next pair of chapters contain NLP concepts, followed by another pair of chapters with Python code samples to illustrate those NLP concepts. Chapter 6 explores applications, e.g., sentiment analysis, recommender systems, COVID-19 analysis, spam detection, and a short discussion regarding chatbots. The final chapter presents the Transformer architecture, BERT-based models, and the GPT family of models, all of which were developed during the past three years and are considered SOTA ("state of the art"). The appendices contain introductory material (including Python code samples) on regular expressions and probability/statistical concepts. Companion files with source code and figures are included. FEATURES:
* Covers extensive topics related to natural language processing
* Includes separate appendices on regular expressions and probability/statistics
* Features companion files with source code and figures from the book
This book focuses on next-generation optical networks as well as mobile communication technologies. The reader will find chapters on Cognitive Optical Network, 5G Cognitive Wireless, LTE, Data Analysis and Natural Language Processing. It also presents a comprehensive view of the enhancements and requirements foreseen for Machine Type Communication. Moreover, some data analysis techniques and Brazilian Portuguese natural language processing technologies are also described here.
This book draws on the recent remarkable advances in speech and language processing: advances that have moved speech technology beyond basic applications such as medical dictation and telephone self-service to increasingly sophisticated and clinically significant applications aimed at complex speech and language disorders. The book provides an introduction to the basic elements of speech and natural language processing technology, and illustrates their clinical potential by reviewing speech technology software currently in use for disorders such as autism and aphasia. The discussion is informed by the authors' own experiences in developing and investigating speech technology applications for these populations. Topics include detailed examples of speech and language technologies in both remediative and assistive applications, overviews of a number of current applications, and a checklist of criteria for selecting the most appropriate applications for particular user needs. This book will be of benefit to four audiences: application developers who are looking to apply these technologies; clinicians who are looking for software that may be of value to their clients; students of speech-language pathology and application development; and finally, people with speech and language disorders and their friends and family members.
This original volume describes the Spoken Language Translator (SLT), one of the first major automatic speech translation projects. The SLT system can translate between English, French, and Swedish in the domain of air travel planning, using a vocabulary of about 1500 words, and with an accuracy of about 75%. The authors detail the language processing components, largely built on top of the SRI Core Language Engine, using a combination of general grammars and techniques that allow them to be rapidly customized to specific domains. They base speech recognition on Hidden Markov Model technology, and use versions of the SRI DECIPHER system. This account of SLT is an essential resource for researchers interested in knowing what is achievable in spoken-language translation today.
Universal codes efficiently compress sequences generated by stationary and ergodic sources with unknown statistics, and they were originally designed for lossless data compression. In the meantime, it was realized that they can be used for solving important problems of prediction and statistical analysis of time series, and this book describes recent results in this area. The first chapter introduces and describes the application of universal codes to prediction and the statistical analysis of time series; the second chapter describes applications of selected statistical methods to cryptography, including attacks on block ciphers; and the third chapter describes a homogeneity test used to determine authorship of literary texts. The book will be useful for researchers and advanced students in information theory, mathematical statistics, time-series analysis, and cryptography. It is assumed that the reader has some grounding in statistics and in information theory.
A practical introduction to essential topics at the core of computer science. Automata, formal language, and complexity theory are central to the understanding of computer science. This book provides, in an accessible, practically oriented style, a thorough grounding in these topics for practitioners and students at all levels. Based on the authors’ belief that the problem-solving approach is the most effective, Problem Solving in Automata, Languages, and Complexity collects a rich variety of worked examples, questions, and exercises designed to ensure understanding and mastery of the subject matter. Building from the fundamentals for beginning engineers to more advanced concepts, the book examines the most common topics in the field.
Focused, practical, and versatile, Problem Solving in Automata, Languages, and Complexity gives students and engineers a solid grounding in essential areas in computer science.
This is the first volume that brings together research and practice from academic and industry settings and a combination of human and machine translation evaluation. Its comprehensive collection of papers by leading experts in human and machine translation quality and evaluation who situate current developments and chart future trends fills a clear gap in the literature. This is critical to the successful integration of translation technologies in the industry today, where the lines between human and machine are becoming increasingly blurred by technology: this affects the whole translation landscape, from students and trainers to project managers and professionals, including in-house and freelance translators, as well as, of course, translation scholars and researchers. The editors have broad experience in translation quality evaluation research, including investigations into professional practice with qualitative and quantitative studies, and the contributors are leading experts in their respective fields, providing a unique set of complementary perspectives on human and machine translation quality and evaluation, combining theoretical and applied approaches.
This book provides a wide variety of algorithms and models to integrate linguistic knowledge into Statistical Machine Translation (SMT). It helps advance conventional SMT to linguistically motivated SMT by enhancing the following three essential components: translation, reordering and bracketing models. It also serves the purpose of promoting the in-depth study of the impacts of linguistic knowledge on machine translation. Finally it provides a systematic introduction of Bracketing Transduction Grammar (BTG) based SMT, one of the state-of-the-art SMT formalisms, as well as a case study of linguistically motivated SMT on a BTG-based platform.
This book provides readers with a practical guide to the principles of hybrid approaches to natural language processing (NLP) involving a combination of neural methods and knowledge graphs. To this end, it first introduces the main building blocks and then describes how they can be integrated to support the effective implementation of real-world NLP applications. To illustrate the ideas described, the book also includes a comprehensive set of experiments and exercises involving different algorithms over a selection of domains and corpora in various NLP tasks. Throughout, the authors show how to leverage complementary representations stemming from the analysis of unstructured text corpora as well as the entities and relations described explicitly in a knowledge graph, how to integrate such representations, and how to use the resulting features to effectively solve NLP tasks in a range of domains. In addition, the book offers access to executable code with examples, exercises and real-world applications in key domains, like disinformation analysis and machine reading comprehension of scientific literature. All the examples and exercises proposed in the book are available as executable Jupyter notebooks in a GitHub repository. They are all ready to be run on Google Colaboratory or, if preferred, in a local environment. A valuable resource for anyone interested in the interplay between neural and knowledge-based approaches to NLP, this book is a useful guide for readers with a background in structured knowledge representations as well as those whose main approach to AI is fundamentally based on logic. Further, it will appeal to those whose main background is in the areas of machine and deep learning who are looking for ways to leverage structured knowledge bases to optimize results on downstream NLP tasks.
Questions related to language acquisition have been of interest for many centuries, as children seem to acquire a sophisticated capacity for processing language with apparent ease, in the face of ambiguity, noise and uncertainty. However, with recent advances in technology and cognitive-related research it is now possible to conduct large-scale computational investigations of these issues. The book discusses some of the latest theoretical and practical developments in the areas involved, including computational models for language tasks, tools and resources that help to approximate the linguistic environment available to children during acquisition, and discussions of challenging aspects of language that children have to master. This is a much-needed collection that provides a cross-section of recent multidisciplinary research on the computational modeling of language acquisition. It is targeted at anyone interested in the relevance of computational techniques for understanding language acquisition. Readers of this book will be introduced to some of the latest approaches to these tasks including: * Models of acquisition of various types of linguistic information (from words to syntax and semantics) and their relevance to research on human language acquisition * Analysis of linguistic and contextual factors that influence acquisition * Resources and tools for investigating these tasks Each chapter is presented in a self-contained manner, providing a detailed description of the relevant aspects related to research on language acquisition, and includes illustrations and tables to complement these in-depth discussions. Though there are no formal prerequisites, some familiarity with the basic concepts of human and computational language acquisition is beneficial.
Parsing with Principles and Classes of Information presents a parser based on current principle-based linguistic theories for English. It argues that differences in the kind of information being computed, whether lexical, structural or syntactic, play a crucial role in the mapping from grammatical theory to parsing algorithms. The direct encoding of homogeneous classes of information has computational and cognitive advantages, which are discussed in detail. Phrase structure is built by using a fast algorithm and compact reference tables. A quantified comparison of different compilation methods shows that lexical and structural information are most compactly represented by separate tables. This finding is reconciled to evidence on the resolution of lexical ambiguity, as an approach to the modularization of information. The same design is applied to the efficient computation of long-distance dependencies. Incremental parsing using bottom-up tabular algorithms is discussed in detail. Finally, locality restrictions are calculated by a parametric algorithm. Students of linguistics, parsing and psycholinguistics will find this book a useful resource on issues related to the implementation of current linguistic theories, using computationally and cognitively plausible algorithms.
Although natural language processing has come far, the technology has not achieved a major impact on society. Is this because of some fundamental limitation that cannot be overcome? Or because there has not been enough time to refine and apply theoretical work already done? Editors Madeleine Bates and Ralph Weischedel believe it is neither; they feel that several critical issues have never been adequately addressed in either theoretical or applied work, and they have invited capable researchers in the field to do that in Challenges in Natural Language Processing. This volume will be of interest to researchers of computational linguistics in academic and non-academic settings and to graduate students in computational linguistics, artificial intelligence and linguistics.
Written by leading international experts, this volume presents contributions establishing the feasibility of human language-like communication with robots. The book explores the use of language games for structuring situated dialogues in which contextualized language communication and language acquisition can take place. Within the text are integrated experiments demonstrating the extensive research which targets artificial language evolution. Language Grounding in Robots uses the design layers necessary to create a fully operational communicating robot as a framework for the text, focusing on the following areas: Embodiment; Behavior; Perception and Action; Conceptualization; Language Processing; Whole Systems Experiments. This book serves as an excellent reference for researchers interested in further study of artificial language evolution.
This book takes concepts developed by researchers in theoretical computer science and adapts and applies them to the study of natural language meaning. Summarizing more than a decade of research, Chris Barker and Chung-chieh Shan put forward the Continuation Hypothesis: that the meaning of a natural language expression can depend on its own continuation. In Part I, the authors develop a continuation-based theory of scope and quantificational binding and provide an explanation for order sensitivity in scope-related phenomena such as scope ambiguity, crossover, superiority, reconstruction, negative polarity licensing, dynamic anaphora, and donkey anaphora. Part II outlines an innovative substructural logic for reasoning about continuations and proposes an analysis of the compositional semantics of adjectives such as 'same' in terms of parasitic and recursive scope. It also shows that certain cases of ellipsis should be treated as anaphora to a continuation, leading to a new explanation for a subtype of sluicing known as sprouting. The book makes a significant contribution to work on scope, reference, quantification, and other central aspects of semantics and will appeal to semanticists in linguistics and philosophy at graduate level and above.
Computational Psycholinguistics: An Interdisciplinary Approach to the Study of Language investigates the architecture and mechanisms which underlie the human capacity to process language. It is the first such study to integrate modern syntactic theory, cross-linguistic psychological evidence, and modern computational techniques in constructing a model of the human sentence processing mechanism. The monograph follows the rationalist tradition, arguing the central role of modularity and universal grammar in a theory of human linguistic performance. It refines the notion of `modularity of mind', and presents a distributed model of syntactic processing which consists of modules aligned with the various informational `types' associated with modern linguistic theories. By considering psycholinguistic evidence from a range of languages, a small number of processing principles are motivated and are demonstrated to hold universally. It is also argued that the behavior of modules, and the strategies operative within them, can be derived from an overarching `Principle of Incremental Comprehension'. Audience: The book is recommended to all linguists, psycholinguists, computational linguists, and others interested in a unified and interdisciplinary study of the human language faculty.
This volume collects peer-reviewed articles from the Natural Language Processing and Cognitive Science (NLPCS) workshop, held in October 2014. The meeting fosters interactions among researchers and practitioners in NLP by taking a Cognitive Science perspective. Articles cover topics such as artificial intelligence, computational linguistics, psycholinguistics, cognitive psychology and language learning.
This is the first monograph on the emerging area of linguistic linked data. Presenting a combination of background information on linguistic linked data and concrete implementation advice, it introduces and discusses the main benefits of applying linked data (LD) principles to the representation and publication of linguistic resources, arguing that LD does not look at a single resource in isolation but seeks to create a large network of resources that can be used together and uniformly, and so making more of the single resource. The book describes how the LD principles can be applied to modelling language resources. The first part provides the foundation for understanding the remainder of the book, introducing the data models, ontology and query languages used as the basis of the Semantic Web and LD and offering a more detailed overview of the Linguistic Linked Data Cloud. The second part of the book focuses on modelling language resources using LD principles, describing how to model lexical resources using Ontolex-lemon, the lexicon model for ontologies, and how to annotate and address elements of text represented in RDF. It also demonstrates how to model annotations, and how to capture the metadata of language resources. Further, it includes a chapter on representing linguistic categories. In the third part of the book, the authors describe how language resources can be transformed into LD and how links can be inferred and added to the data to increase connectivity and linking between different datasets. They also discuss using LD resources for natural language processing. The last part describes concrete applications of the technologies: representing and linking multilingual wordnets, applications in digital humanities and the discovery of language resources. 
Given its scope, the book is relevant for researchers and graduate students interested in topics at the crossroads of natural language processing / computational linguistics and the Semantic Web / linked data. It appeals to Semantic Web experts who are not proficient in applying the Semantic Web and LD principles to linguistic data, as well as to computational linguists who are used to working with lexical and linguistic resources wanting to learn about a new paradigm for modelling, publishing and exploiting linguistic resources.
In the past few decades the use of increasingly large text corpora has grown rapidly in language and linguistics research. This was enabled by remarkable strides in natural language processing (NLP) technology, which enables computers to automatically and efficiently process, annotate and analyze large amounts of spoken and written text in linguistically and/or pragmatically meaningful ways. It has become more desirable than ever before for language and linguistics researchers who use corpora in their research to gain an adequate understanding of the relevant NLP technology in order to take full advantage of its capabilities.
NVIDIA's Full-Color Guide to Deep Learning: All You Need to Get Started and Get Results "To enable everyone to be part of this historic revolution requires the democratization of AI knowledge and resources. This book is timely and relevant towards accomplishing these lofty goals." -- From the foreword by Dr. Anima Anandkumar, Bren Professor, Caltech, and Director of ML Research, NVIDIA "Ekman uses a learning technique that in our experience has proven pivotal to success-asking the reader to think about using DL techniques in practice. His straightforward approach is refreshing, and he permits the reader to dream, just a bit, about where DL may yet take us." -- From the foreword by Dr. Craig Clawson, Director, NVIDIA Deep Learning Institute Deep learning (DL) is a key component of today's exciting advances in machine learning and artificial intelligence. Learning Deep Learning is a complete guide to DL. Illuminating both the core concepts and the hands-on programming techniques needed to succeed, this book is ideal for developers, data scientists, analysts, and others--including those with no prior machine learning or statistics experience. After introducing the essential building blocks of deep neural networks, such as artificial neurons and fully connected, convolutional, and recurrent layers, Magnus Ekman shows how to use them to build advanced architectures, including the Transformer. He describes how these concepts are used to build modern networks for computer vision and natural language processing (NLP), including Mask R-CNN, GPT, and BERT. And he explains how to build a natural language translator and a system that generates natural language descriptions of images. Throughout, Ekman provides concise, well-annotated code examples using TensorFlow with Keras. Corresponding PyTorch examples are provided online, and the book thereby covers the two dominant Python libraries for DL used in industry and academia.
He concludes with an introduction to neural architecture search (NAS), exploring important ethical issues and providing resources for further learning.
* Explore and master core concepts: perceptrons, gradient-based learning, sigmoid neurons, and backpropagation
* See how DL frameworks make it easier to develop more complicated and useful neural networks
* Discover how convolutional neural networks (CNNs) revolutionize image classification and analysis
* Apply recurrent neural networks (RNNs) and long short-term memory (LSTM) to text and other variable-length sequences
* Master NLP with sequence-to-sequence networks and the Transformer architecture
* Build applications for natural language translation and image captioning
NVIDIA's invention of the GPU sparked the PC gaming market. The company's pioneering work in accelerated computing--a supercharged form of computing at the intersection of computer graphics, high-performance computing, and AI--is reshaping trillion-dollar industries, such as transportation, healthcare, and manufacturing, and fueling the growth of many others. Register your book for convenient access to downloads, updates, and/or corrections as they become available. See inside book for details.
Recent Advances in Example-Based Machine Translation is of relevance to researchers and program developers in the field of Machine Translation and especially Example-Based Machine Translation, bilingual text processing and cross-linguistic information retrieval. It is also of interest to translation technologists and localisation professionals. Recent Advances in Example-Based Machine Translation fills a void, because it is the first book to tackle the issue of EBMT in depth. It gives a state-of-the-art overview of EBMT techniques and provides a coherent structure in which all aspects of EBMT are embedded. Its contributions are written by long-standing researchers in the field of MT in general, and EBMT in particular. This book can be used in graduate-level courses in machine translation and statistical NLP.
The series serves to propagate investigations into language usage, especially with respect to computational support. This includes all forms of text handling activity, not only interlingual translations, but also conversions carried out in response to different communicative tasks. Among the major topics are problems of text transfer and the interplay between human and machine activities.
Two Top Industry Leaders Speak Out Judith Markowitz When Amy asked me to co-author the foreword to her new book on advances in speech recognition, I was honored. Amy's work has always been infused with creative intensity, so I knew the book would be as interesting for established speech professionals as for readers new to the speech-processing industry. The fact that I would be writing the foreword with Bill Scholz made the job even more enjoyable. Bill and I have known each other since he was at UNISYS directing projects that had a profound impact on speech-recognition tools and applications. Bill Scholz The opportunity to prepare this foreword with Judith provides me with a rare opportunity to collaborate with a seasoned speech professional to identify numerous significant contributions to the field offered by the contributors whom Amy has recruited. Judith and I have had our eyes opened by the ideas and analyses offered by this collection of authors. Speech recognition no longer needs to be relegated to the category of an experimental future technology; it is here today with sufficient capability to address the most challenging of tasks. And the point-click-type approach to GUI control is no longer sufficient, especially in the context of the limitations of modern-day handheld devices. Instead, VUI and GUI are being integrated into unified multimodal solutions that are maturing into the fundamental paradigm for computer-human interaction in the future.
This book is about machine translation (MT) and the classic problems associated with this language technology. It examines the causes of these problems and, for linguistic, rule-based systems, attributes the cause to language's ambiguity and complexity and their interplay in logic-driven processes. For non-linguistic, data-driven systems, the book attributes translation shortcomings to the very lack of linguistics. It then proposes a demonstrable way to relieve these drawbacks in the shape of a working translation model (Logos Model) that has taken its inspiration from key assumptions about psycholinguistic and neurolinguistic function. The book suggests that this brain-based mechanism is effective precisely because it bridges both linguistically driven and data-driven methodologies. It shows how simulation of this cerebral mechanism has freed this one MT model from the all-important, classic problem of complexity when coping with the ambiguities of language. Logos Model accomplishes this by a data-driven process that does not sacrifice linguistic knowledge, but that, like the brain, integrates linguistics within a data-driven process. As a consequence, the book suggests that the brain-like mechanism embedded in this model has the potential to contribute to further advances in machine translation in all its technological instantiations.