One of the liveliest forums for sharing psychological, linguistic, philosophical, and computer science perspectives on psycholinguistics has been the annual meeting of the CUNY Sentence Processing Conference. Documenting the state of the art in several important approaches to sentence processing, this volume consists of selected papers presented at the Sixth CUNY Conference. The editors not only present the main themes that ran through the conference but also honor the breadth of the presentations, which came from disciplines including linguistics, experimental psychology, and computer science, and examine a wide variety of sentence processing topics.
"Computers in Translation" is a comprehensive guide to the practical issues surrounding machine translation and computer-based translation tools. Translators, system designers, system operators and researchers present the facts about machine translation: its history, its successes, its limitations and its potential. Three chapters deal with actual machine translation applications, discussing installations including the METEO system, used in Canada to translate weather forecasts and weather reports, and the system used in the Foreign Technology Division of the US Air Force. This book should be of interest to academics and postgraduates studying translation studies, language and linguistics, and to technical publications managers, translators and technical authors.
The symposium on which this volume was based brought together approximately fifty scientists from a variety of backgrounds to discuss the rapidly emerging set of competing technologies for exploiting a massive quantity of textual information. This group was challenged to explore new ways to take advantage of the power of on-line text. A billion words of text can be more generally useful than a few hundred logical rules, if advanced computation can extract useful information from streams of text and help find what is needed in the sea of available material. While the extraction task is a hot topic for the field of natural language processing and the retrieval task is a well-established concern of the field of information retrieval, these two disciplines came together at the symposium and have been cross-fertilizing more than ever. The book is organized in three parts. The first group of papers describes the current set of natural language processing techniques used for interpreting and extracting information from quantities of text. The second group gives some of the historical perspective, methodology, and current practice of information retrieval work; the third covers both current and emerging applications of these techniques. This collection of readings should give students and scientists alike a good idea of the current techniques as well as a general concept of how to go about developing and testing systems to handle volumes of text.
Originally published in 1992, when connectionist natural language processing (CNLP) was a new and burgeoning research area, this book represented a timely assessment of the state of the art in the field. It includes contributions from some of the best known researchers in CNLP and covers a wide range of topics. The book comprises four main sections dealing with connectionist approaches to semantics, syntax, the debate on representational adequacy, and connectionist models of psycholinguistic processes. The semantics and syntax sections deal with a variety of approaches to issues in these traditional linguistic domains, covering the spectrum from pure connectionist approaches to hybrid models employing a mixture of connectionist and classical AI techniques. The debate on the fundamental suitability of connectionist architectures for dealing with natural language processing is the focus of the section on representational adequacy. The chapters in this section represent a range of positions on the issue, from the view that connectionist models are intrinsically unsuitable for all but the associationistic aspects of natural language, to the other extreme which holds that the classical conception of representation can be dispensed with altogether. The final section of the book focuses on the application of connectionist models to the study of psycholinguistic processes. This section is perhaps the most varied, covering topics from speech perception and speech production, to attentional deficits in reading. An introduction is provided at the beginning of each section which highlights the main issues relating to the section topic and puts the constituent chapters into a wider context.
Recognizing that the generation of natural language is a goal-driven process, where many of the goals are pragmatic (i.e., interpersonal and situational) in nature, this book provides an overview of the role of pragmatics in language generation.
This book provides an overview of how comparable corpora can be used to overcome the lack of parallel resources when building machine translation systems for under-resourced languages and domains. It presents a wealth of methods and open tools for building comparable corpora from the Web, evaluating comparability and extracting parallel data that can be used for the machine translation task. It is divided into several sections, each covering a specific task such as building, processing, and using comparable corpora, focusing particularly on under-resourced language pairs and domains. The book is intended for anyone interested in data-driven machine translation for under-resourced languages and domains, especially for developers of machine translation systems, computational linguists and language workers. It offers a valuable resource for specialists and students in natural language processing, machine translation, corpus linguistics and computer-assisted translation, and promotes the broader use of comparable corpora in natural language processing and computational linguistics.
This comprehensive reference work provides an overview of the concepts, methodologies, and applications in computational linguistics and natural language processing (NLP).
* Features contributions by the top researchers in the field, reflecting the work that is driving the discipline forward
* Includes an introduction to the major theoretical issues in these fields, as well as the central engineering applications that the work has produced
* Presents the major developments in an accessible way, explaining the close connection between scientific understanding of the computational properties of natural language and the creation of effective language technologies
* Serves as an invaluable state-of-the-art reference source for computational linguists and software engineers developing NLP applications in industrial research and development labs of software companies
This book provides readers with a practical guide to the principles of hybrid approaches to natural language processing (NLP) involving a combination of neural methods and knowledge graphs. To this end, it first introduces the main building blocks and then describes how they can be integrated to support the effective implementation of real-world NLP applications. To illustrate the ideas described, the book also includes a comprehensive set of experiments and exercises involving different algorithms over a selection of domains and corpora in various NLP tasks. Throughout, the authors show how to leverage complementary representations stemming from the analysis of unstructured text corpora as well as the entities and relations described explicitly in a knowledge graph, how to integrate such representations, and how to use the resulting features to effectively solve NLP tasks in a range of domains. In addition, the book offers access to executable code with examples, exercises and real-world applications in key domains, like disinformation analysis and machine reading comprehension of scientific literature. All the examples and exercises proposed in the book are available as executable Jupyter notebooks in a GitHub repository. They are all ready to be run on Google Colaboratory or, if preferred, in a local environment. A valuable resource for anyone interested in the interplay between neural and knowledge-based approaches to NLP, this book is a useful guide for readers with a background in structured knowledge representations as well as those whose main approach to AI is fundamentally based on logic. Further, it will appeal to those whose main background is in the areas of machine and deep learning who are looking for ways to leverage structured knowledge bases to optimize results along the NLP downstream.
This book covers theoretical work, applications, approaches, and techniques for computational models of information and its presentation by language (artificial, human, or natural in other ways). Computational and technological developments that incorporate natural language are proliferating. Adequate coverage encounters difficult problems related to ambiguities and dependency on context and agents (humans or computational systems). The goal is to promote computational systems of intelligent natural language processing and related models of computation, language, thought, mental states, reasoning, and other cognitive processes.
This book focuses on dialog from a varied combination of fields: Linguistics, Philosophy of Language and Computation. It builds on the hypothesis that meaning in human communication arises at the discourse level rather than at the word level. The book offers a complex analytical framework and integration of the central areas of research around human communication. The content revolves around meaning but it also gives evidence of the connection among different points of view. Besides discussing issues of general interest to the field, the book triggers theoretical argumentation that is currently under scientific discussion. It examines such topics as immanent reasoning joined with Recanati's lekta and free enrichment, challenges of internet conversation, inner dialogs, cognition and language, and the relation between assertion and denial. It proposes a dialogical framework for intra-negotiation and gives a geolinguistic perspective on spoken discourse. Finally, it examines dialog and abduction and sheds light on a generation of dialog contexts by means of multimodal logic applied to speech acts.
This book contains a comprehensive treatment of advanced LaTeX features. The focus is on the development of high quality documents and presentations, by revealing powerful insights into the LaTeX language. The well-established advantages of the typesetting system LaTeX are the preparation and publication of platform-independent high-quality documents and automatic numbering and cross-referencing of illustrations or references. These can be extended beyond the typical applications, by creating highly dynamic electronic documents. This is commonly performed in connection with the portable document format (PDF), as well as other programming tools which allow the development of extremely flexible electronic documents.
In the global research community, English has become the main language of scholarly publishing in many disciplines. At the same time, online machine translation systems have become increasingly easy to access and use. Is this a researcher's match made in heaven, or the road to publication perdition? Here Lynne Bowker and Jairo Buitrago Ciro introduce the concept of machine translation literacy, a new kind of literacy for scholars and librarians in the digital age. For scholars, they explain how machine translation works, how it is (or could be) used for scholarly communication, and how both native and non-native English-speakers can write in a translation-friendly way in order to harness its potential. Native English speakers can continue to write in English, but expand the global reach of their research by making it easier for their peers around the world to access and understand their works, while non-native English speakers can write in their mother tongues, but leverage machine translation technology to help them produce draft publications in English. For academic librarians, the authors provide a framework for supporting researchers in all disciplines as they grapple with producing translation-friendly texts and using machine translation for scholarly communication-a form of support that will only become more important as campuses become increasingly international and as universities continue to strive to excel on the global stage. Machine Translation and Global Research is a must-read for scientists, researchers, students, and librarians eager to maximize the global reach and impact of any form of scholarly work.
A practical guide to the construction of thesauri for use in information retrieval. In recent years, new applications for thesauri have been emerging, for example, in front-end systems, cross-database searching, hypertext systems, expert systems and in natural-language processing. In-house thesauri are still needed for internal special collections. The fourth edition of this work has been fully revised and the bibliography much extended, in particular, to include web addresses.
This is the first monograph on the emerging area of linguistic linked data. Presenting a combination of background information on linguistic linked data and concrete implementation advice, it introduces and discusses the main benefits of applying linked data (LD) principles to the representation and publication of linguistic resources, arguing that LD does not look at a single resource in isolation but seeks to create a large network of resources that can be used together and uniformly, and so making more of the single resource. The book describes how the LD principles can be applied to modelling language resources. The first part provides the foundation for understanding the remainder of the book, introducing the data models, ontology and query languages used as the basis of the Semantic Web and LD and offering a more detailed overview of the Linguistic Linked Data Cloud. The second part of the book focuses on modelling language resources using LD principles, describing how to model lexical resources using Ontolex-lemon, the lexicon model for ontologies, and how to annotate and address elements of text represented in RDF. It also demonstrates how to model annotations, and how to capture the metadata of language resources. Further, it includes a chapter on representing linguistic categories. In the third part of the book, the authors describe how language resources can be transformed into LD and how links can be inferred and added to the data to increase connectivity and linking between different datasets. They also discuss using LD resources for natural language processing. The last part describes concrete applications of the technologies: representing and linking multilingual wordnets, applications in digital humanities and the discovery of language resources. 
Given its scope, the book is relevant for researchers and graduate students interested in topics at the crossroads of natural language processing / computational linguistics and the Semantic Web / linked data. It appeals to Semantic Web experts who are not proficient in applying the Semantic Web and LD principles to linguistic data, as well as to computational linguists who are used to working with lexical and linguistic resources wanting to learn about a new paradigm for modelling, publishing and exploiting linguistic resources.
When viewed through a political lens, the act of defining terms in natural language arguably transforms knowledge into values. This unique volume explores how corporate, military, academic, and professional values shaped efforts to define computer terminology and establish an information engineering profession as a precursor to what would become computer science. As the Cold War heated up, U.S. federal agencies increasingly funded university researchers and labs to develop technologies, like the computer, that would ensure that the U.S. maintained economic prosperity and military dominance over the Soviet Union. At the same time, private corporations saw opportunities for partnering with university labs and military agencies to generate profits as they strengthened their business positions in civilian sectors. They needed a common vocabulary and principles of streamlined communication to underpin the technology development that would ensure national prosperity and military dominance. The book:
* investigates how language standardization contributed to the professionalization of computer science as separate from mathematics, electrical engineering, and physics
* examines traditions of language standardization in earlier eras of rapid technology development around electricity and radio
* highlights the importance of the analogy of "the computer is like a human" to early explanations of computer design and logic
* traces the design and development of electronic computers within political and economic contexts
* foregrounds the importance of human relationships in decisions about computer design
This in-depth humanistic study argues for the importance of natural language in shaping what people come to think of as possible and impossible relationships between computers and humans. The work is a key reference in the history of technology and serves as a source textbook on the human-level history of computing.
In addition, it addresses those with interests in sociolinguistic questions around technology studies, as well as technology development at the nexus of politics, business, and human relations.
This book provides a new multi-method, process-oriented approach towards speech quality assessment, which allows readers to examine the influence of speech transmission quality on a variety of perceptual and cognitive processes in human listeners. Fundamental concepts and methodologies surrounding the topic of process-oriented quality assessment are introduced and discussed. The book further describes a functional process model of human quality perception, which theoretically integrates results obtained in three experimental studies. This book's conceptual ideas, empirical findings, and theoretical interpretations should be of particular interest to researchers working in the fields of Quality and Usability Engineering, Audio Engineering, Psychoacoustics, Audiology, and Psychophysiology.
This book investigates two major systems: firstly, co-operating distributed grammar systems, where the grammars work on one common sequential form and the co-operation is realized by the control of the sequence of active grammars; secondly, parallel communicating grammar systems, where each grammar works on its own sequential form and co-operation is done by means of communicating between grammars. The investigation concerns hierarchies with respect to different variants of co-operation, relations with classical formal language theory, syntactic parameters such as the number of components and their size, power of synchronization, and general notions generated from artificial intelligence.
This book constitutes the proceedings of the 19th International Symposium on Intelligent Data Analysis, IDA 2021, which was planned to take place in Porto, Portugal. Due to the COVID-19 pandemic, the conference was held online during April 26-28, 2021. The 35 papers included in this book were carefully reviewed and selected from 113 submissions. The papers were organized in topical sections named: modeling with neural networks; modeling with statistical learning; modeling language and graphs; and modeling special data formats.
Tackle a variety of tasks in natural language processing by learning how to use the R language and tidy data principles. This practical guide provides examples and resources to help you get up to speed with dplyr, broom, ggplot2, and other tidy tools from the R ecosystem. You'll discover how tidy data principles can make text mining easier, more effective, and consistent by employing tools already in wide use. Text Mining with R shows you how to manipulate, summarize, and visualize the characteristics of text, and how to carry out sentiment analysis, tf-idf weighting, and topic modeling. Along with tidy data methods, you'll also examine several beginning-to-end tidy text analyses on data sources ranging from Twitter to NASA datasets. These analyses bring together multiple text mining approaches covered in the book.
* Get real-world examples for implementing text mining using tidy R packages
* Understand natural language processing concepts like sentiment analysis, tf-idf, and topic modeling
* Learn how to analyze unstructured, text-heavy data using the R language and ecosystem
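The tf-idf weighting that this description mentions can be illustrated in a few lines. The following is a minimal, generic sketch (in Python rather than the book's R, and using toy documents that are not drawn from the book): a term is weighted up when it is frequent in one document but rare across the collection.

```python
import math
from collections import Counter

# Toy corpus for illustration only; not taken from the book.
docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs",
]
tokenized = [d.split() for d in docs]
n_docs = len(tokenized)

# Document frequency: in how many documents does each term appear?
df = Counter()
for tokens in tokenized:
    df.update(set(tokens))

def tf_idf(term, tokens):
    """Term frequency in one document times inverse document frequency."""
    tf = tokens.count(term) / len(tokens)
    idf = math.log(n_docs / df[term])
    return tf * idf

# "cat" occurs in only one of the three documents, so it receives a
# positive weight there; "the" occurs in two documents and is down-weighted.
print(round(tf_idf("cat", tokenized[0]), 4))
print(round(tf_idf("the", tokenized[0]), 4))
```

Libraries such as tidytext in R (or scikit-learn in Python) implement smoothed variants of this same idea; the raw formula above is the simplest version.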