It is now well established that phonological -- and orthographic -- codes play a crucial role in the recognition of isolated words and in understanding the sequences of words that comprise a sentence. However, words and sentences are organized with respect to morphological as well as phonological components. It is thus unfortunate that the morpheme has received relatively little attention in the experimental literature, either from psychologists or linguists. Due to recent methodological developments, however, now is an opportune time to address morphological issues.
This book focuses mainly on logical approaches to computational linguistics, but also discusses integrations with other approaches, presenting both classic and newly emerging theories and applications. Decades of research on theoretical work and practical applications have demonstrated that computational linguistics is a distinctively interdisciplinary area. There is convincing evidence that computational approaches to linguistics can benefit from research on the nature of human language, including from the perspective of its evolution. This book addresses various topics in computational theories of human language, covering grammar, syntax, and semantics. The common thread running through the research presented is the role of computer science, mathematical logic and other subjects of mathematics in computational linguistics and natural language processing (NLP). Promoting intelligent approaches to artificial intelligence (AI) and NLP, the book is intended for researchers and graduate students in the field.
One of the liveliest forums for sharing psychological, linguistic, philosophical, and computer science perspectives on psycholinguistics has been the annual meeting of the CUNY Sentence Processing Conference. Documenting the state of the art in several important approaches to sentence processing, this volume consists of selected papers presented at the Sixth CUNY Conference. The editors not only present the main themes that ran through the conference but also honor the breadth of the presentations from disciplines including linguistics, experimental psychology, and computer science. A wide variety of sentence processing topics is examined.
This comprehensive reference work provides an overview of the concepts, methodologies, and applications in computational linguistics and natural language processing (NLP).
* Features contributions by the top researchers in the field, reflecting the work that is driving the discipline forward
* Includes an introduction to the major theoretical issues in these fields, as well as the central engineering applications that the work has produced
* Presents the major developments in an accessible way, explaining the close connection between scientific understanding of the computational properties of natural language and the creation of effective language technologies
* Serves as an invaluable state-of-the-art reference source for computational linguists and software engineers developing NLP applications in industrial research and development labs of software companies
"Computers in Translation" is a comprehensive guide to the practical issues surrounding machine translation and computer-based translation tools. Translators, system designers, system operators and researchers present the facts about machine translation: its history, its successes, its limitations and its potential. Three chapters deal with actual machine translation applications, discussing installations including the METEO system, used in Canada to translate weather forecasts and weather reports, and the system used in the Foreign Technology Division of the US Air Force. This book should be of interest to academics and postgraduates studying translation studies, language and linguistics, and to technical publications managers, translators and technical authors.
The symposium on which this volume was based brought together approximately fifty scientists from a variety of backgrounds to discuss the rapidly emerging set of competing technologies for exploiting a massive quantity of textual information. This group was challenged to explore new ways to take advantage of the power of on-line text. A billion words of text can be more generally useful than a few hundred logical rules, if advanced computation can extract useful information from streams of text and help find what is needed in the sea of available material. While the extraction task is a hot topic for the field of natural language processing and the retrieval task is a well-established concern of the field of information retrieval, these two disciplines came together at the symposium and have been cross-breeding more than ever. The book is organized in three parts. The first group of papers describes the current set of natural language processing techniques used for interpreting and extracting information from quantities of text. The second group gives some of the historical perspective, methodology, and current practice of information retrieval work; the third covers both current and emerging applications of these techniques. This collection of readings should give students and scientists alike a good idea of the current techniques as well as a general concept of how to go about developing and testing systems to handle volumes of text.
Researchers in many disciplines have been concerned with modeling textual data in order to account for texts as the primary information unit of written communication. The book "Modelling, Learning and Processing of Text-Technological Data Structures" deals with this challenging information unit. It focuses on theoretical foundations of representing natural language texts as well as on concrete operations of automatic text processing. Following this integrated approach, the present volume includes contributions to a wide range of topics in the context of processing of textual data. This relates to the learning of ontologies from natural language texts, the annotation and automatic parsing of texts as well as the detection and tracking of topics in texts and hypertexts. In this way, the book brings together a wide range of approaches to procedural aspects of text technology as an emerging scientific discipline.
* Examines various speech technologies deployed in healthcare service robots to maximize the robot's ability to interpret user input
* Demonstrates how robot anthropomorphic features and etiquette in behavior promote user-positive emotions, acceptance of robots, and compliance with robot requests
* Analyzes how multimodal medical-service robots and other cyber-physical systems can reduce mistakes and mishaps in the operating room
* Evaluates various input methods for improving acceptance of robots in the older adult population
* Presents case studies of cognitively and socially engaging robots in the long-term care setting for helping older adults with activities of daily living, and in the pediatric setting for helping children with autism spectrum conditions and metabolic disorders

Speech and Automata in Health Care forges new ground by closely analyzing how three separate disciplines - speech technology, robotics, and medical/surgical/assistive care - intersect with one another, resulting in an innovative way of diagnosing and treating both juvenile and adult illnesses and conditions. This includes the use of speech-enabled robotics to help the elderly population cope with common problems of aging caused by the diminution of their sensory, auditory and motor capabilities. By examining the emerging nexus of speech, automata, and health care, the authors demonstrate the exciting potential of automata, both speech-driven and multimodal, to affect the healthcare delivery system so that it better meets the needs of the populations it serves. This book provides both empirical research findings and incisive literature reviews that demonstrate some of the more novel uses of speech-enabled and multimodal automata in the operating room, hospital ward, long-term care facility, and the home. Studies backed by major universities, research institutes, and EU-funded collaborative projects are debuted in this volume.
This volume provides a wealth of timely material for industrial engineers, speech scientists, computational linguists, and for signal processing and intelligent systems design experts. Topics include:
* Spoken Interaction with Healthcare Robots
* Service Robot Feature Effects on Patient Acceptance/Emotional Response
* Designing Embodied and Virtual Agents for the Operating Room
* The Emerging Role of Robotics for Personal Health Management in the Older-Adult Population
* Why Input Methods for Robots that Serve the Older Adult Are Critical for Usability
* Socially and Cognitively Engaging Robots in the Long-Term Care Setting
* Voice-Enabled Assistive Robots for Managing Autism Spectrum Conditions
* ASR and TTS for Voice-Controlled Robot Interactions in Treating Children with Metabolic Disorders
Accompanying continued industrial production and sales of artificial intelligence and expert systems is the risk that difficult and resistant theoretical problems and issues will be ignored. The participants at the Third Tinlap Workshop, whose contributions are contained in Theoretical Issues in Natural Language Processing, remove that risk. They discuss and promote theoretical research on natural language processing, examinations of solutions to current problems, development of new theories, and representations of published literature on the subject. Discussions among these theoreticians in artificial intelligence, logic, psychology, philosophy, and linguistics draw a comprehensive, up-to-date picture of the natural language processing field.
Research into Natural Language Processing - the use of computers to process language - has developed over the last couple of decades into one of the most vigorous and interesting areas of current work on language and communication. This book introduces the subject through the discussion and development of various computer programs which illustrate some of the basic concepts and techniques in the field. The programming language used is Prolog, which is especially well suited both to Natural Language Processing and to readers with little or no background in computing. Following the general introduction, the first section of the book presents Prolog, and the following chapters illustrate how various Natural Language Processing programs may be written using this programming language. Since it is assumed that the reader has no previous experience in programming, great care is taken to provide a simple yet comprehensive introduction to Prolog. Due to the 'user friendly' nature of Prolog, simple yet effective programs may be written from an early stage. The reader is gradually introduced to various techniques for syntactic processing, ranging from Finite State Network recognisers to Chart parsers. An integral element of the book is the comprehensive set of exercises included in each chapter as a means of cementing the reader's understanding of each topic. Suggested answers are also provided. An Introduction to Natural Language Processing Through Prolog is an excellent introduction to the subject for students of linguistics and computer science, and will be especially useful for those with no background in the subject.
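The blurb above mentions Finite State Network recognisers as a first technique for syntactic processing. Although the book's own programs are written in Prolog, the core idea can be sketched in a few lines of Python; the state names, transition network, and toy lexicon below are invented for illustration and are not taken from the book.

```python
# Minimal finite-state recogniser sketch: accepts simple
# determiner-noun-verb sentences such as "the dog runs".
# States and vocabulary here are illustrative only.

TRANSITIONS = {
    ("s0", "det"): "s1",   # e.g. "the"
    ("s1", "noun"): "s2",  # e.g. "dog"
    ("s2", "verb"): "s3",  # e.g. "runs"
}
FINAL_STATES = {"s3"}

LEXICON = {"the": "det", "a": "det", "dog": "noun", "cat": "noun",
           "runs": "verb", "sleeps": "verb"}

def recognise(sentence):
    """Return True if the word sequence is accepted by the network."""
    state = "s0"
    for word in sentence.split():
        category = LEXICON.get(word)          # look up word class
        state = TRANSITIONS.get((state, category))  # follow the arc
        if state is None:                     # no arc: reject
            return False
    return state in FINAL_STATES              # accept only in a final state

# recognise("the dog runs") -> True; recognise("dog the runs") -> False
```

A chart parser, which the book reaches later, generalises this idea by recording partial analyses so that ambiguous structures are not re-parsed repeatedly.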
This book explains how to build Natural Language Generation (NLG) systems--computer software systems that automatically generate understandable texts in English or other human languages. NLG systems use knowledge about language and the application domain to automatically produce documents, reports, explanations, help messages, and other kinds of texts. The book covers the algorithms and representations needed to perform the core tasks of document planning, microplanning, and surface realization, using a case study to show how these components fit together. It is essential reading for researchers interested in NLP, AI, and HCI; and for developers interested in advanced document-creation technology.
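The three-stage pipeline named above (document planning, then microplanning, then surface realisation) can be illustrated with a deliberately tiny sketch; the weather-report domain, function names, and wording rules below are hypothetical and are not the book's case study.

```python
# Toy sketch of a three-stage NLG pipeline:
# document planning -> microplanning -> surface realisation.
# All data and rules are invented for illustration.

def document_plan(data):
    # Document planning: decide what to say and in what order.
    return [("temperature", data["temp"]), ("condition", data["sky"])]

def microplan(messages):
    # Microplanning: choose words for each message, producing phrase specs.
    phrases = []
    for kind, value in messages:
        if kind == "temperature":
            phrases.append(f"the temperature is {value} degrees")
        elif kind == "condition":
            phrases.append(f"skies are {value}")
    return phrases

def realise(phrases):
    # Surface realisation: aggregate phrases into one grammatical sentence.
    body = " and ".join(phrases)
    return body[0].upper() + body[1:] + "."

report = realise(microplan(document_plan({"temp": 18, "sky": "cloudy"})))
# -> "The temperature is 18 degrees and skies are cloudy."
```

Real NLG systems replace each of these hand-written steps with knowledge-driven components, but the division of labour between the stages is the same.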
Recognizing that the generation of natural language is a goal-driven process, where many of the goals are pragmatic (i.e., interpersonal and situational) in nature, this book provides an overview of the role of pragmatics in language generation.
Originally published in 1992, when connectionist natural language processing (CNLP) was a new and burgeoning research area, this book represented a timely assessment of the state of the art in the field. It includes contributions from some of the best known researchers in CNLP and covers a wide range of topics. The book comprises four main sections dealing with connectionist approaches to semantics, syntax, the debate on representational adequacy, and connectionist models of psycholinguistic processes. The semantics and syntax sections deal with a variety of approaches to issues in these traditional linguistic domains, covering the spectrum from pure connectionist approaches to hybrid models employing a mixture of connectionist and classical AI techniques. The debate on the fundamental suitability of connectionist architectures for dealing with natural language processing is the focus of the section on representational adequacy. The chapters in this section represent a range of positions on the issue, from the view that connectionist models are intrinsically unsuitable for all but the associationistic aspects of natural language, to the other extreme which holds that the classical conception of representation can be dispensed with altogether. The final section of the book focuses on the application of connectionist models to the study of psycholinguistic processes. This section is perhaps the most varied, covering topics from speech perception and speech production, to attentional deficits in reading. An introduction is provided at the beginning of each section which highlights the main issues relating to the section topic and puts the constituent chapters into a wider context.
This book is a description of some of the most recent advances in text classification as part of a concerted effort to achieve computer understanding of human language. In particular, it addresses state-of-the-art developments in the computation of higher-level linguistic features, ranging from etymology to grammar and syntax for the practical task of text classification according to genres, registers and subject domains. Serving as a bridge between computational methods and sophisticated linguistic analysis, this book will be of particular interest to academics and students of computational linguistics as well as professionals in natural language engineering.
This volume examines the concept of falsification as a central notion of semantic theories and its effects on logical laws. The point of departure is the general constructivist line of argument that Michael Dummett has offered over the last decades. From there, the author examines the ways in which falsifications can enter into a constructivist semantics, displays the full spectrum of options, and discusses the logical systems most suitable to each one of them. While the idea of introducing falsifications into the semantic account is Dummett's own, the many ways in which falsificationism departs quite radically from verificationism are here spelled out in detail for the first time. The volume is divided into three large parts. The first part provides important background information about Dummett's program, intuitionism and logics with gaps and gluts. The second part is devoted to the introduction of falsifications into the constructive account and shows that there is more than one way in which one can do this. The third part details the logical effects of these various moves. In the end, the book shows that the constructive path may branch in different directions: towards intuitionistic logic, dual intuitionistic logic and several variations of Nelson logics. The author argues that, on balance, the latter are the more promising routes to take. "Kapsner's book is the first detailed investigation of how to incorporate the notion of falsification into formal logic. This is a fascinating logico-philosophical investigation, which will interest non-classical logicians of all stripes." - Graham Priest, Graduate Center, City University of New York, and University of Melbourne
This open access book describes the results of natural language processing and machine learning methods applied to clinical text from electronic patient records. It is divided into twelve chapters. Chapters 1-4 discuss the history and background of the original paper-based patient records, their purpose, and how they are written and structured. These initial chapters do not require any technical or medical background knowledge. The remaining eight chapters are more technical in nature and describe various medical classifications and terminologies such as ICD diagnosis codes, SNOMED CT, MeSH, UMLS, and ATC. Chapters 5-10 cover basic tools for natural language processing and information retrieval, and how to apply them to clinical text. The differences between rule-based and machine-learning-based methods, and between supervised and unsupervised machine learning methods, are also explained. Next, ethical concerns regarding the use of sensitive patient records for research purposes are discussed, including methods for de-identifying electronic patient records and safely storing patient records. The book's closing chapters present a number of applications in clinical text mining and summarise the lessons learned from the previous chapters. The book provides a comprehensive overview of technical issues arising in clinical text mining, and offers a valuable guide for advanced students in health informatics, computational linguistics, and information retrieval, and for researchers entering these fields.
This book covers deep-learning-based approaches for sentiment analysis, a relatively new, but fast-growing research area, which has significantly changed in the past few years. The book presents a collection of state-of-the-art approaches, focusing on the best-performing, cutting-edge solutions for the most common and difficult challenges faced in sentiment analysis research. Providing detailed explanations of the methodologies, the book is a valuable resource for researchers as well as newcomers to the field.
This book proposes a new model for the translation-oriented analysis of multimodal source texts. The author guides the reader through semiotics, multimodality, pragmatics and translation studies on a quest for the meaning-making mechanics of texts that combine images and words. She openly challenges the traditional view that sees translators focusing their attention mostly on the linguistic aspect of source material in their work. The central theoretical pivot around which the analytical model revolves is that multimodal texts communicate through individual images and linguistic units, as well as through the interaction among textual resources and the text's interaction with its context of reference. This three-dimensional view offers a holistic understanding of multimodal texts and their potential translation issues to help translators improve the way they communicate multimodally across languages and cultures. This book will appeal to researchers in the fields of translation studies, multimodality and pragmatics.
A practical guide to the construction of thesauri for use in information retrieval. In recent years, new applications for thesauri have been emerging, for example, in front-end systems, cross-database searching, hypertext systems, expert systems and in natural-language processing. In-house thesauri are still needed for internal special collections. The fourth edition of this work has been fully revised and the bibliography much extended, in particular, to include web addresses.
This book discusses some of the basic issues relating to corpus generation and the methods normally used to generate a corpus. Since corpus-related research goes beyond corpus generation, the book also addresses other major topics connected with the use and application of language corpora, namely, corpus readiness in the context of corpus sanitation and pre-editing of corpus texts; the application of statistical methods; and various text processing techniques. Importantly, it explores how corpora can be used as a primary or secondary resource in English language teaching, in creating dictionaries, in word sense disambiguation, in various language technologies, and in other branches of linguistics. Lastly, the book sheds light on the status quo of corpus generation in Indian languages and identifies current and future needs. Discussing various technical issues in the field in a lucid manner, providing extensive new diagrams and charts for easy comprehension, and using simplified English, the book is an ideal resource for non-native English readers. Written by academics with many years of experience teaching and researching corpus linguistics, its focus on Indian languages and on English corpora makes it applicable to graduate and postgraduate students of applied linguistics, computational linguistics and language processing in South Asia and across countries where English is spoken as a first or second language.
Graph theory and the fields of natural language processing and information retrieval are well-studied disciplines. Traditionally, these areas have been perceived as distinct, with different algorithms, different applications, and different potential end-users. However, recent research has shown that these disciplines are intimately connected, with a large variety of natural language processing and information retrieval applications finding efficient solutions within graph-theoretical frameworks. This book extensively covers the use of graph-based algorithms for natural language processing and information retrieval. It brings together topics as diverse as lexical semantics, text summarization, text mining, ontology construction, text classification, and information retrieval, which are connected by the common underlying theme of the use of graph-theoretical methods for text and information processing tasks. Readers will come away with a firm understanding of the major methods and applications in natural language processing and information retrieval that rely on graph-based representations and algorithms.
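As a concrete illustration of the graph-theoretical framing described above, here is a minimal Python sketch that builds a word co-occurrence graph and scores nodes with a simplified PageRank-style iteration, in the spirit of algorithms such as TextRank; the window size, damping factor, and iteration count are illustrative defaults, not values from the book.

```python
# Sketch of graph-based ranking for text: words become nodes,
# co-occurrence within a small window becomes edges, and an
# iterative ranking scores the nodes. Parameters are illustrative.
from collections import defaultdict

def cooccurrence_graph(words, window=2):
    """Build an undirected co-occurrence graph over a token list."""
    graph = defaultdict(set)
    for i, w in enumerate(words):
        for j in range(i + 1, min(i + window + 1, len(words))):
            if words[j] != w:            # skip self-loops
                graph[w].add(words[j])
                graph[words[j]].add(w)
    return graph

def rank(graph, damping=0.85, iterations=30):
    """Simplified PageRank over an undirected graph of sets."""
    scores = {node: 1.0 for node in graph}
    for _ in range(iterations):
        new = {}
        for node in graph:
            # Each neighbour distributes its score across its own edges.
            incoming = sum(scores[nb] / len(graph[nb]) for nb in graph[node])
            new[node] = (1 - damping) + damping * incoming
        scores = new
    return scores
```

For real applications one would add part-of-speech filtering of candidate nodes and a convergence check; libraries such as NetworkX provide tested PageRank implementations.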
Explores the direct relation of modern CALL (Computer-Assisted Language Learning) to aspects of natural language processing for theoretical and practical applications, and worldwide demand for formal language education and training that focuses on restricted or specialized professional domains. Unique in its broad-based, state-of-the-art coverage of current knowledge and research in the interrelated fields of computer-based learning and teaching and processing of specialized linguistic domains. The articles in this book offer insights on or analyses of the current state and future directions of many recent key concepts regarding the application of computers to natural languages, such as: authenticity, personalization, normalization, evaluation. Other articles present fundamental research on major techniques, strategies and methodologies that are currently the focus of international language research projects, both of a theoretical and an applied nature.