Books > Computing & IT > Applications of computing > Artificial intelligence > Natural language & machine translation

Linguistic Linked Data - Representation, Generation and Applications (Hardcover, 1st ed. 2020)
Philipp Cimiano, Christian Chiarcos, John P. McCrae, Jorge Gracia
R4,329 Discovery Miles 43 290 Ships in 12 - 17 working days

This is the first monograph on the emerging area of linguistic linked data. Presenting a combination of background information on linguistic linked data and concrete implementation advice, it introduces and discusses the main benefits of applying linked data (LD) principles to the representation and publication of linguistic resources, arguing that LD does not look at a single resource in isolation but seeks to create a large network of resources that can be used together and uniformly, thus making more of each individual resource. The book describes how LD principles can be applied to modelling language resources. The first part provides the foundation for understanding the remainder of the book, introducing the data models, ontology and query languages used as the basis of the Semantic Web and LD and offering a more detailed overview of the Linguistic Linked Data Cloud. The second part of the book focuses on modelling language resources using LD principles, describing how to model lexical resources using Ontolex-lemon, the lexicon model for ontologies, and how to annotate and address elements of text represented in RDF. It also demonstrates how to model annotations and how to capture the metadata of language resources. Further, it includes a chapter on representing linguistic categories. In the third part of the book, the authors describe how language resources can be transformed into LD and how links can be inferred and added to the data to increase connectivity and linking between different datasets. They also discuss using LD resources for natural language processing. The last part describes concrete applications of the technologies: representing and linking multilingual wordnets, applications in digital humanities, and the discovery of language resources. Given its scope, the book is relevant for researchers and graduate students interested in topics at the crossroads of natural language processing / computational linguistics and the Semantic Web / linked data. It appeals to Semantic Web experts who are not yet proficient in applying Semantic Web and LD principles to linguistic data, as well as to computational linguists who are used to working with lexical and linguistic resources and want to learn about a new paradigm for modelling, publishing and exploiting linguistic resources.
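For readers unfamiliar with the Ontolex-lemon model mentioned in the blurb, the minimal sketch below (not taken from the book) shows how a single lexical entry might be expressed as RDF using the rdflib Python library; the example lexicon URIs and the DBpedia reference are illustrative assumptions, not resources defined by the book.

```python
# Minimal sketch: one Ontolex-lemon lexical entry as RDF, built with rdflib.
# The ex: URIs and the DBpedia reference are invented for illustration.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

ONTOLEX = Namespace("http://www.w3.org/ns/lemon/ontolex#")
EX = Namespace("http://example.org/lexicon/")  # hypothetical lexicon namespace

g = Graph()
g.bind("ontolex", ONTOLEX)
g.bind("ex", EX)

entry, form, sense = EX["cat-n"], EX["cat-n-form"], EX["cat-n-sense"]

g.add((entry, RDF.type, ONTOLEX.LexicalEntry))        # the lexical entry itself
g.add((entry, ONTOLEX.canonicalForm, form))           # link to its canonical form
g.add((form, RDF.type, ONTOLEX.Form))
g.add((form, ONTOLEX.writtenRep, Literal("cat", lang="en")))
g.add((entry, ONTOLEX.sense, sense))                  # link to a lexical sense
g.add((sense, ONTOLEX.reference, URIRef("http://dbpedia.org/resource/Cat")))

print(g.serialize(format="turtle"))
```

Serialized as Turtle, the entry can be published and interlinked with other LD resources in the way the book describes.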

Practical Spoken Dialog Systems (Hardcover, 2004 ed.)
Deborah Dahl
R3,166 Discovery Miles 31 660 Ships in 10 - 15 working days

Spoken dialog systems allow people to get information, conduct business, and be entertained, simply by speaking to a computer. There are hundreds of these systems currently in use, handling millions of interactions every day. How do they work? What problems do they solve? The goal of this book is to answer these questions and others like them, including:

- How can I decide if a spoken dialog system is a good fit for the needs of my organization?
- What's the difference between a voice user interface and a conventional graphical interface? What are the psychological principles underlying voice user interfaces? What do I need to know about error handling in voice applications and accommodating both novice and experienced users?
- How can I make use of newer technologies like speaker authentication?
- What about development tools? How can I evaluate and select the right tools?
- What can I expect when deploying a spoken dialog system? How is deploying a spoken dialog system different from deploying a web application? What details do I have to be aware of for the deployment to succeed?
- What can we expect these systems to do in the future? What kinds of new capabilities are about to emerge from research laboratories?

For professional speech researchers, there is a rich technical literature covering many years of primary research in speech. However, this literature is not necessarily applicable to the needs of business people, application developers, and students who are interested in learning about the practical uses of speech technology. On the other hand, while existing introductory resources cover the basic mechanics of application development as well as aspects of the voice user interface, they don't go far enough in dealing with the details that have to be taken into account to make spoken dialog systems successful in practice. What's missing is information in between the in-depth technical literature and the more introductory development resources. The goal of this book is to provide information for anyone who wants to take the next step beyond the basics of current speech applications but isn't yet ready to dive into the technical literature. It is hoped that this book will help project managers, application developers, and students gain a fuller understanding of spoken dialog technology and the practical aspects of developing and deploying spoken dialog applications.

Integration of World Knowledge for Natural Language Understanding (Hardcover, 2012)
Ekaterina Ovchinnikova
R3,042 Discovery Miles 30 420 Ships in 10 - 15 working days

This book concerns the non-linguistic knowledge required to perform computational natural language understanding (NLU). The main objective of the book is to show that inference-based NLU has the potential for practical large-scale applications. First, an introduction to research areas relevant for NLU is given. We review approaches to linguistic meaning, explore knowledge resources, describe semantic parsers, and compare two main forms of inference: deduction and abduction. In the main part of the book, we propose an integrative knowledge base combining lexical-semantic, ontological, and distributional knowledge. Particular attention is paid to ensuring its consistency. We then design a reasoning procedure able to make use of the large-scale knowledge base. We experiment both with a deduction-based NLU system and with an abductive reasoner. For evaluation, we use three different NLU tasks: recognizing textual entailment, semantic role labeling, and interpretation of noun dependencies.

Computational Methods for Corpus Annotation and Analysis (Hardcover, 2014)
Xiaofei Lu
R3,917 Discovery Miles 39 170 Ships in 12 - 17 working days

In the past few decades the use of increasingly large text corpora has grown rapidly in language and linguistics research. This was enabled by remarkable strides in natural language processing (NLP) technology, which enables computers to automatically and efficiently process, annotate and analyze large amounts of spoken and written text in linguistically and/or pragmatically meaningful ways. It has become more desirable than ever for language and linguistics researchers who use corpora in their research to gain an adequate understanding of the relevant NLP technology in order to take full advantage of its capabilities.
This volume provides language and linguistics researchers with an accessible introduction to the state-of-the-art NLP technology that facilitates automatic annotation and analysis of large text corpora at both shallow and deep linguistic levels. The book covers a wide range of computational tools for lexical, syntactic, semantic, pragmatic and discourse analysis, together with detailed instructions on how to obtain, install and use each tool in different operating systems and platforms. The book illustrates how NLP technology has been applied in recent corpus-based language studies and suggests effective ways to better integrate such technology in future corpus linguistics research.
This book provides language and linguistics researchers with a valuable reference for corpus annotation and analysis.
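To give a concrete flavour of the shallow corpus analysis such tools automate, here is a minimal, self-contained sketch (not taken from the book) of naive tokenization, word-frequency counting and a keyword-in-context view, using only the Python standard library; the two-sentence corpus is an invented example.

```python
# Minimal sketch of shallow corpus analysis: tokenization, frequency counts,
# and a simple keyword-in-context (KWIC) listing. Standard library only.
import re
from collections import Counter

corpus = (
    "Corpora let researchers study language use at scale. "
    "Annotation adds linguistic information to each corpus token."
)

tokens = re.findall(r"[A-Za-z]+", corpus.lower())  # naive tokenizer
freq = Counter(tokens)                             # lexical frequency profile
print(freq.most_common(3))

# Keyword-in-context lines for one target word
target = "corpus"
for i, tok in enumerate(tokens):
    if tok == target:
        left = " ".join(tokens[max(0, i - 3):i])
        right = " ".join(tokens[i + 1:i + 4])
        print(f"{left:>30} [{tok}] {right}")
```

Real corpus tools add deeper layers (part-of-speech tags, parses, discourse annotation) on top of this kind of pipeline, which is what the volume surveys.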

Verbmobil: Foundations of Speech-to-Speech Translation (Hardcover, 2000 ed.)
Wolfgang Wahlster
R5,919 R4,830 Discovery Miles 48 300 Save R1,089 (18%) Ships in 12 - 17 working days

In 1992 it seemed very difficult to answer the question whether it would be possible to develop a portable system for the automatic recognition and translation of spontaneous speech. Previous research work on speech processing had focused on read speech only, and international projects aimed at automated text translation had just been terminated without achieving their objectives. Within this context, the German Federal Ministry of Education and Research (BMBF) made a careful analysis of all national and international research projects conducted in the field of speech and language technology before deciding to launch an eight-year basic-research lead project in which research groups were to cooperate in an interdisciplinary and international effort covering the disciplines of computer science, computational linguistics, translation science, signal processing, communication science and artificial intelligence. At some point, the project comprised up to 135 work packages with up to 33 research groups working on these packages. The project was controlled by means of a network plan. Every two years the project situation was assessed and the project goals were updated. An international scientific advisory board provided advice for BMBF. A new scientific approach was chosen for this project: coping with the complexity of spontaneous speech with all its pertinent phenomena such as ambiguities, self-corrections, hesitations and disfluencies took precedence over the intended lexicon size. Another important aspect was that prosodic information was exploited at all processing stages.

Translation, Brains and the Computer - A Neurolinguistic Solution to Ambiguity and Complexity in Machine Translation (Hardcover, 1st ed. 2018)
Bernard Scott
R4,324 Discovery Miles 43 240 Ships in 12 - 17 working days

This book is about machine translation (MT) and the classic problems associated with this language technology. It examines the causes of these problems and, for linguistic, rule-based systems, attributes the cause to language's ambiguity and complexity and their interplay in logic-driven processes. For non-linguistic, data-driven systems, the book attributes translation shortcomings to the very lack of linguistics. It then proposes a demonstrable way to relieve these drawbacks in the shape of a working translation model (Logos Model) that has taken its inspiration from key assumptions about psycholinguistic and neurolinguistic function. The book suggests that this brain-based mechanism is effective precisely because it bridges both linguistically driven and data-driven methodologies. It shows how simulation of this cerebral mechanism has freed this one MT model from the all-important, classic problem of complexity when coping with the ambiguities of language. Logos Model accomplishes this by a data-driven process that does not sacrifice linguistic knowledge, but that, like the brain, integrates linguistics within a data-driven process. As a consequence, the book suggests that the brain-like mechanism embedded in this model has the potential to contribute to further advances in machine translation in all its technological instantiations.

Dynamic Taxonomies and Faceted Search - Theory, Practice, and Experience (Hardcover, 2009 ed.)
Giovanni Maria Sacco, Yannis Tzitzikas
R3,086 Discovery Miles 30 860 Ships in 10 - 15 working days

Current access paradigms for the Web, i.e., direct access via search engines or database queries and navigational access via static taxonomies, have recently been criticized because they are too rigid or simplistic to effectively cope with a large number of practical search applications. A third paradigm, dynamic taxonomies and faceted search, focuses on user-centered conceptual exploration, which is far more frequent in search tasks than retrieval using exact specification, and has rapidly become pervasive in modern Web data retrieval, especially in critical applications such as product selection for e-commerce. It is a heavily interdisciplinary area, where data modeling, human factors, logic, inference, and efficient implementations must be dealt with holistically.

Sacco, Tzitzikas, and their contributors provide a coherent roadmap to dynamic taxonomies and faceted search. The individual chapters, written by experts in each relevant field and carefully integrated by the editors, detail aspects like modeling, schema design, system implementation, search performance, and user interaction. The basic concepts of each area are introduced, and advanced topics and recent research are highlighted. An additional chapter is completely devoted to current and emerging application areas, including e-commerce, multimedia, multidimensional file systems, and geographical information systems.

The presentation targets advanced undergraduates, graduate students and researchers from different areas - from computer science to library and information science - as well as advanced practitioners. Given that research results are currently scattered among very different publications, this volume will allow researchers to get a coherent and comprehensive picture of the state of the art.

Innovative Methods and Technologies for Electronic Discourse Analysis (Hardcover, New)
Hwee Ling Lim, Fay Sudweeks
R5,103 Discovery Miles 51 030 Ships in 12 - 17 working days

With the advent of new media and Web 2.0 technologies, language and discourse have taken on new meaning, and the implications of this evolution on the nature of interpersonal communication must be addressed. Innovative Methods and Technologies for Electronic Discourse Analysis highlights research, applications, frameworks, and theories of online communication to explore recent advances in the manipulation and shaping of meaning in electronic discourse. This essential research collection will appeal to academic, research, and professional audiences engaged in the design, development, and distribution of effective communications technologies in educational, social, and linguistic contexts.

Current Issues in Computational Linguistics: In Honour of Don Walker (Hardcover, 1994 ed.)
Antonio Zampolli, Nicoletta Calzolari, Martha Palmer
R3,407 Discovery Miles 34 070 Ships in 10 - 15 working days

With this volume in honour of Don Walker, Linguistica Computazionale continues the series of special issues dedicated to outstanding personalities who have made a significant contribution to the progress of our discipline and maintained a special collaborative relationship with our Institute in Pisa. I take the liberty of quoting in this preface some of the initiatives Pisa and Don Walker have jointly promoted and developed during our collaboration, because I think that they might serve to illustrate some outstanding features of Don's personality, in particular his capacity for identifying areas of potential convergence among the different scientific communities within our field and establishing concrete forms of cooperation. These initiatives also testify to his continuous and untiring work, dedicated to putting people into contact and opening up communication between them, collecting and disseminating information, knowledge and resources, and creating shareable basic infrastructures needed for progress in our field. Our collaboration began within the Linguistics in Documentation group of the FID and continued in the framework of the CCL (International Committee for Computational Linguistics). In 1982 this collaboration was strengthened when, at COLING in Prague, I was invited by Don to join him in the organization of a series of workshops with participants of the various communities interested in the study, development, and use of computational lexica.

Natural Language Information Retrieval (Hardcover, 1999 ed.)
T. Strzalkowski
R3,270 Discovery Miles 32 700 Ships in 10 - 15 working days

The last decade has been one of dramatic progress in the field of Natural Language Processing (NLP). This hitherto largely academic discipline has found itself at the center of an information revolution ushered in by the Internet age, as demand for human-computer communication and information access has exploded. Emerging applications in computer-assisted information production and dissemination, automated understanding of news, understanding of spoken language, and processing of foreign languages have given impetus to research that resulted in a new generation of robust tools, systems, and commercial products. Well-positioned government research funding, particularly in the U.S., has helped to advance the state of the art at an unprecedented pace, in no small measure thanks to rigorous evaluations. This volume focuses on the use of Natural Language Processing in Information Retrieval (IR), an area of science and technology that deals with cataloging, categorization, classification, and search of large amounts of information, particularly in textual form. An outcome of an information retrieval process is usually a set of documents containing information on a given topic, and may consist of newspaper-like articles, memos, reports of any kind, entire books, as well as annotated image and sound files. Since we assume that the information is primarily encoded as text, IR is also a natural language processing problem: in order to decide if a document is relevant to a given information need, one needs to be able to understand its content.

Machine Translation and Translation Theory (Hardcover, Reprint 2011)
Christa Hauenschild, Susanne Heizmann
R4,575 Discovery Miles 45 750 Ships in 12 - 17 working days

The series serves to propagate investigations into language usage, especially with respect to computational support. This includes all forms of text handling activity, not only interlingual translations, but also conversions carried out in response to different communicative tasks. Among the major topics are problems of text transfer and the interplay between human and machine activities.

Towards a Theoretical Framework for Analyzing Complex Linguistic Networks (Hardcover, 1st ed. 2016)
Alexander Mehler, Andy Lucking, Sven Banisch, Philippe Blanchard, Barbara Job
R4,933 R3,764 Discovery Miles 37 640 Save R1,169 (24%) Ships in 12 - 17 working days

The aim of this book is to advocate and promote network models of linguistic systems that are both based on thorough mathematical models and substantiated in terms of linguistics. In this way, the book contributes first steps towards establishing a statistical network theory as a theoretical basis of linguistic network analysis at the border of the natural sciences and the humanities. This book addresses researchers who want to get familiar with theoretical developments, computational models and their empirical evaluation in the field of complex linguistic networks. It is intended for all those who are interested in statistical models of linguistic systems from the point of view of network research. This includes all relevant areas of linguistics, ranging from phonological, morphological and lexical networks on the one hand to syntactic, semantic and pragmatic networks on the other. In this sense, the volume concerns readers from many disciplines, such as physics, linguistics, computer science and information science. It may also be of interest for the upcoming area of systems biology, with which the chapters collected here share a view of systems from the perspective of network analysis.

Linked Data in Linguistics - Representing and Connecting Language Data and Language Metadata (Hardcover, 2012 ed.)
Christian Chiarcos, Sebastian Nordhoff, Sebastian Hellmann
R1,610 Discovery Miles 16 100 Ships in 10 - 15 working days

The explosion of information technology has led to substantial growth of web-accessible linguistic data in terms of quantity, diversity and complexity. These resources become even more useful when interlinked with each other to generate network effects.

The general trend of providing data online is thus accompanied by newly developing methodologies to interconnect linguistic data and metadata. This includes linguistic data collections, general-purpose knowledge bases (e.g., DBpedia, a machine-readable edition of Wikipedia), and repositories with specific information about languages, linguistic categories and phenomena. The Linked Data paradigm provides a framework for interoperability and access management, and thereby allows information from such a diverse set of resources to be integrated.

The contributions assembled in this volume illustrate the breadth of applications of the Linked Data paradigm for representative types of language resources. They cover lexical-semantic resources, annotated corpora, typological databases as well as terminology and metadata repositories. The book includes representative applications from diverse fields, ranging from academic linguistics (e.g., typology and corpus linguistics) through applied linguistics (e.g., lexicography and translation studies) to technical applications (in computational linguistics, Natural Language Processing and information technology).

This volume accompanies the Workshop on Linked Data in Linguistics 2012 (LDL-2012) in Frankfurt/M., Germany, organized by the Open Linguistics Working Group (OWLG) of the Open Knowledge Foundation (OKFN). It assembles contributions of the workshop participants and, beyond this, it summarizes initial steps in the formation of a Linked Open Data cloud of linguistic resources, the Linguistic Linked Open Data cloud (LLOD).

Trends in Parsing Technology - Dependency Parsing, Domain Adaptation, and Deep Parsing (Hardcover, 2011 ed.)
Harry Bunt, Paola Merlo, Joakim Nivre
R3,204 Discovery Miles 32 040 Ships in 10 - 15 working days

Computer parsing technology, which breaks down complex linguistic structures into their constituent parts, is a key research area in the automatic processing of human language. This volume is a collection of contributions from leading researchers in the field of natural language processing technology, each of whom details their recent work, which includes new techniques as well as results. The book presents an overview of the state of the art in current research into parsing technologies, focusing on three important themes: dependency parsing, domain adaptation, and deep parsing. The technology, which has a variety of practical uses, is especially concerned with the methods, tools and software that can be used to parse automatically. Applications include extracting information from free text or speech, question answering, speech recognition and comprehension, recommender systems, machine translation, and automatic summarization. New developments in the area of parsing technology are thus widely applicable, and researchers and professionals from a number of fields will find the material here required reading. Like the other four volumes on parsing technology in this series, this book has a breadth of coverage that makes it suitable both as an overview of the field for graduate students and as a reference for established researchers in computational linguistics, artificial intelligence, computer science, language engineering, information science, and cognitive science. It will also be of interest to designers, developers, and advanced users of natural language processing systems, including applications such as spoken dialogue, text mining, multimodal human-computer interaction, and semantic web technology.

Natural Language Generation in Artificial Intelligence and Computational Linguistics (Hardcover, 1991 ed.)
Cecile L. Paris, William R. Swartout, William C. Mann
R6,129 Discovery Miles 61 290 Ships in 10 - 15 working days

One of the aims of Natural Language Processing is to facilitate the use of computers by allowing their users to communicate in natural language. There are two important aspects to person-machine communication: understanding and generating. While natural language understanding has been a major focus of research, natural language generation is a relatively new and increasingly active field of research. This book presents an overview of the state of the art in natural language generation, describing both new results and directions for new research. The principal emphasis of natural language generation is not only to facilitate the use of computers but also to develop a computational theory of human language ability. In doing so, it is a tool for extending, clarifying and verifying theories that have been put forth in linguistics, psychology and sociology about how people communicate. A natural language generator will typically have access to a large body of knowledge from which to select information to present to users, as well as numerous ways of expressing it. Generating a text can thus be seen as a problem of decision-making under multiple constraints: constraints from the propositional knowledge at hand, from the linguistic tools available, from the communicative goals and intentions to be achieved, from the audience the text is aimed at and from the situation and past discourse. Researchers in generation try to identify the factors involved in this process and determine how best to represent the factors and their dependencies.

Marcus Contextual Grammars (Hardcover, 1997 ed.)
Gheorghe Paun
R4,758 Discovery Miles 47 580 Ships in 12 - 17 working days

Marcus Contextual Grammars is the first monograph to present a class of grammars introduced about three decades ago, based on the fundamental linguistic phenomenon of strings-contexts interplay (selection). Most of the theoretical results obtained so far about the many variants of contextual grammars are presented with emphasis on classes of questions with relevance for applications in the study of natural language syntax: generative powers, descriptive and computational complexity, automata recognition, semilinearity, structure of the generated strings, ambiguity, regulated rewriting, etc. Constant comparison with families of languages in the Chomsky hierarchy is made. Connections with non-linguistic areas are established, such as molecular computing. Audience: Researchers and students in theoretical computer science (formal language theory and automata theory), computational linguistics, mathematical methods in linguistics, and linguists interested in formal models of syntax.

Real-World Natural Language Processing (Paperback)
Masato Hagiwara
R2,225 R1,342 Discovery Miles 13 420 Save R883 (40%) Ships in 12 - 17 working days

Voice assistants, automated customer service agents, and other cutting-edge human-to-computer interactions rely on accurately interpreting language as it is written and spoken. Real-World Natural Language Processing teaches you how to create practical NLP applications without getting bogged down in complex language theory and the mathematics of deep learning. In this engaging book, you'll explore the core tools and techniques required to build a huge range of powerful NLP apps.

About the technology: Natural language processing is the part of AI dedicated to understanding and generating human text and speech. NLP covers a wide range of algorithms and tasks, from classic functions such as spell checkers, machine translation, and search engines to emerging innovations like chatbots, voice assistants, and automatic text summarization. Wherever there is text, NLP can be useful for extracting meaning and bridging the gap between humans and machines.

About the book: Real-World Natural Language Processing teaches you how to create practical NLP applications using Python and open source NLP libraries such as AllenNLP and Fairseq. In this practical guide, you'll begin by creating a complete sentiment analyzer, then dive deep into each component to unlock the building blocks you'll use in all different kinds of NLP programs. By the time you're done, you'll have the skills to create named entity taggers, machine translation systems, spelling correctors, and language generation systems.

What's inside: Design, develop, and deploy basic NLP applications; NLP libraries such as AllenNLP and Fairseq; advanced NLP concepts such as attention and transfer learning.

About the reader: Aimed at intermediate Python programmers. No mathematical or machine learning knowledge required.

About the author: Masato Hagiwara received his computer science PhD from Nagoya University in 2009, focusing on Natural Language Processing and machine learning. He has interned at Google and Microsoft Research, and worked at Baidu Japan, Duolingo, and Rakuten Institute of Technology. He now runs his own consultancy business advising clients, including startups and research institutions.

Trajectories through Knowledge Space - A Dynamic Framework for Machine Comprehension (Hardcover, 1994 ed.)
Lawrence A. Bookman
R6,044 Discovery Miles 60 440 Ships in 10 - 15 working days

As any history student will tell you, all events must be understood within their political and sociological context. Yet science provides an interesting counterpoint to this idea, since scientific ideas stand on their own merit, and require no reference to the time and place of their conception beyond perhaps a simple citation. Even so, the historical context of a scientific discovery casts a special light on that discovery - a light that motivates the work and explains its significance against a backdrop of related ideas. The book that you hold in your hands is unusually adept at presenting technical ideas in the context of their time. On one level, Larry Bookman has produced a manuscript to satisfy the requirements of a PhD program. If that was all he did, my preface would praise the originality of his ideas and attempt to summarize their significance. But this book is much more than an accomplished dissertation about some aspect of natural language - it is also a skillfully crafted tour through a vast body of computational, linguistic, neurophysiological, and psychological research.

Let's Ask AI - A Non-Technical Modern Approach to AI and Philosophy (Hardcover)
Ingrid Seabra, Pedro Seabra, Angela Chan
R763 Discovery Miles 7 630 Ships in 12 - 17 working days

Integration of Natural Language and Vision Processing - Computational Models and Systems (Hardcover, Reprinted from ARTIFICIAL INTELLIGENCE REVIEW 8:2-3; 5-6, 1995)
Paul Mc Kevitt
R4,739 Discovery Miles 47 390 Ships in 12 - 17 working days

Although there has been much progress in developing theories, models and systems in the areas of Natural Language Processing (NLP) and Vision Processing (VP), there has heretofore been little progress on integrating these subareas of Artificial Intelligence (AI). This book contains a set of edited papers addressing computational models and systems for the integration of NLP and VP. The papers focus on site descriptions, such as that of the large Japanese $500 million Real World Computing (RWC) project, on historical philosophical issues, on systems which have been built and which integrate the processing of visual scenes together with language about them, and on spatial relations, which appear to be the key to integration. The U.S.A., Japan and the EU are well represented, showing that integration is a truly international issue. There is no doubt that all of this will be necessary for the information superhighways of the future.

Stochastically-Based Semantic Analysis (Hardcover, 1999 ed.)
Wolfgang Minker, Alex Waibel, Joseph Mariani
R3,164 Discovery Miles 31 640 Ships in 10 - 15 working days

Stochastically-Based Semantic Analysis investigates the problem of automatic natural language understanding in a spoken language dialog system. The focus is on the design of a stochastic parser and its evaluation with respect to a conventional rule-based method. Stochastically-Based Semantic Analysis will be of most interest to researchers in artificial intelligence, especially those in natural language processing, computational linguistics, and speech recognition. It will also appeal to practicing engineers who work in the area of interactive speech systems.

Grammars for Language and Genes - Theoretical and Empirical Investigations (Hardcover, 2012 ed.)
David Chiang; Foreword by Aravind K. Joshi
R3,020 Discovery Miles 30 200 Ships in 10 - 15 working days

Grammars are gaining importance in natural language processing and computational biology as a means of encoding theories and structuring algorithms. But one serious obstacle to applications of grammars is that formal language theory traditionally classifies grammars according to their weak generative capacity (what sets of strings they generate) and tends to ignore strong generative capacity (what sets of structural descriptions they generate) even though the latter is more relevant to applications.

This book develops and demonstrates a framework for carrying out rigorous comparisons of grammar formalisms in terms of their usefulness for applications, focusing on three areas of application: statistical parsing, natural language translation, and biological sequence analysis. These results should pave the way for theoretical research to pursue results that are more directed towards applications, and for practical research to explore the use of advanced grammar formalisms more easily.

Reinforcement Learning for Adaptive Dialogue Systems - A Data-driven Methodology for Dialogue Management and Natural Language Generation (Hardcover, 2011)
Verena Rieser, Oliver Lemon
R3,046 Discovery Miles 30 460 Ships in 10 - 15 working days

The past decade has seen a revolution in the field of spoken dialogue systems. As in other areas of Computer Science and Artificial Intelligence, data-driven methods are now being used to drive new methodologies for system development and evaluation. This book is a unique contribution to that ongoing change. A new methodology for developing spoken dialogue systems is described in detail. The journey starts and ends with human behaviour in interaction, and explores methods for learning from the data, for building simulation environments for training and testing systems, and for evaluating the results. The detailed material covers: Spoken and Multimodal dialogue systems, Wizard-of-Oz data collection, User Simulation methods, Reinforcement Learning, and Evaluation methodologies. The book is a research guide for students and researchers with a background in Computer Science, AI, or Machine Learning. It navigates through a detailed case study in data-driven methods for development and evaluation of spoken dialogue systems. Common challenges associated with this approach are discussed and example solutions are provided. This work provides insights, lessons, and inspiration for future research and development - not only for spoken dialogue systems in particular, but for data-driven approaches to human-machine interaction in general.

Adaptive Parsing - Self-Extending Natural Language Interfaces (Hardcover, 1992 ed.)
Jill Fain Lehman
R3,175 Discovery Miles 31 750 Ships in 10 - 15 working days

As the computer gradually automates human-oriented tasks in multiple environments, the interface between computers and the ever-wider population of human users assumes progressively increasing importance. In the office environment, for instance, clerical tasks such as document filing and retrieval, and higher-level tasks such as scheduling meetings, planning trip itineraries, and producing documents for publication, are being partially or totally automated. The range of users for office-oriented software includes clerks, secretaries, and businesspersons, none of whom are predominantly computer literate. The same phenomenon is echoed in the factory production line, in the securities trading floor, in government agencies, in educational institutions, and even in the home. The arcane command languages of yesteryear have proven too high a barrier for smooth acceptance of computerized functions into the workplace, no matter how useful these functions may be. Computer-naive users simply do not take the time to learn intimidating and complex computer interfaces. In order to place the functionality of modern computers at the disposition of diverse user populations, a number of different approaches have been tried, many meeting with a significant measure of success, to wit: special courses to train users in the simpler command languages (such as MS-DOS), designing point-and-click menu/graphics interfaces that require much less user familiarization (illustrated most clearly in the Apple Macintosh), and interacting with the user in his or her language of choice.

Naive Semantics for Natural Language Understanding (Hardcover, 1988 ed.)
Kathleen Dahlgren
R4,599 Discovery Miles 45 990 Ships in 10 - 15 working days

This book introduces a theory, Naive Semantics (NS), a theory of the knowledge underlying natural language understanding. The basic assumption of NS is that knowing what a word means is not very different from knowing anything else, so that there is no difference in form of cognitive representation between lexical semantics and encyclopedic knowledge. NS represents word meanings as commonsense knowledge, and builds no special representation language (other than elements of first-order logic). The idea of teaching computers commonsense knowledge originated with McCarthy and Hayes (1969), and has been extended by a number of researchers (Hobbs and Moore, 1985; Lenat et al., 1986). Commonsense knowledge is a set of naive beliefs, at times vague and inaccurate, about the way the world is structured. Traditionally, word meanings have been viewed as criterial, as giving truth conditions for membership in the classes words name. The theory of NS, in identifying word meanings with commonsense knowledge, sees word meanings as typical descriptions of classes of objects, rather than as criterial descriptions. Therefore, reasoning with NS representations is probabilistic rather than monotonic. This book is divided into two parts. Part I elaborates the theory of Naive Semantics. Chapter 1 illustrates and justifies the theory. Chapter 2 details the representation of nouns in the theory, and Chapter 4 the verbs, originally published as "Commonsense Reasoning with Verbs" (McDowell and Dahlgren, 1987). Chapter 3 describes kind types, which are naive constraints on noun representations.
