This book is the third volume in a series that provides a hands-on
perspective on the evolving theories associated with Roger Schank
and his students. The primary focus of this volume is on
constructing explanations. All of the chapters relate to the
problem of building computer programs that can develop hypotheses
about what might have caused an observed event. Because most
researchers in natural language processing prefer not to work on
inference, memory, and learning issues, most of their sample text
fragments are chosen carefully to de-emphasize the need for
reasoning that goes beyond the text itself.
The ability to come up with hypotheses about what is really going
on in a story is a hallmark of human intelligence. The biggest
difference between truly intelligent readers and less intelligent
ones is the extent to which the reader can go beyond merely
understanding the explicit statements being communicated. Achieving
a creative level of understanding means developing hypotheses about
questions for which there may be no conclusively correct answer at
all. The focus of the lab, during the period documented in this
book, was to work on getting a computer program to do that.
The volume adopts a case-based approach to the construction of
explanations, which suggests that the main steps in the process of
explaining a given anomaly are as follows:
* Retrieve an explanation that might be relevant to the anomaly.
* Evaluate whether the retrieved explanation makes sense when
applied to the current anomaly.
* If the retrieved explanation doesn't fit the anomaly perfectly,
adapt it to produce a new variant that fits better.
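The retrieve/evaluate/adapt cycle above can be sketched in a few lines of code. The book's own micro programs are written in Lisp; what follows is only a minimal Python sketch under assumed representations — the names `Explanation`, `retrieve`, `evaluate`, and `adapt`, and the idea of modeling an anomaly as a set of observed features, are illustrative assumptions, not the authors' actual program.

```python
# Hypothetical sketch of a case-based explanation loop.
# An anomaly is modeled as a set of observed features; a stored
# explanation pairs a cause with the preconditions it accounts for.

from dataclasses import dataclass

@dataclass
class Explanation:
    cause: str
    preconditions: set  # features this explanation accounts for

def retrieve(anomaly, case_library):
    """Step 1: fetch stored explanations that share features with the anomaly."""
    return [e for e in case_library if e.preconditions & anomaly]

def evaluate(explanation, anomaly):
    """Step 2: the explanation fits when all its preconditions hold."""
    return explanation.preconditions <= anomaly

def adapt(explanation, anomaly):
    """Step 3: produce a variant, dropping preconditions the anomaly lacks."""
    return Explanation(explanation.cause, explanation.preconditions & anomaly)

def explain(anomaly, case_library):
    """Run the full retrieve -> evaluate -> adapt cycle on one anomaly."""
    for candidate in retrieve(anomaly, case_library):
        if evaluate(candidate, anomaly):
            return candidate          # fits as-is: reuse directly
        adapted = adapt(candidate, anomaly)
        if adapted.preconditions:
            return adapted            # near miss: reuse after adaptation
    return None                       # no relevant case in memory
```

For example, given a library containing `Explanation("overslept", {"late", "tired"})`, the anomaly `{"late"}` retrieves that case, fails the exact-fit test, and is explained by an adapted variant whose preconditions shrink to `{"late"}`.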
Introducing issues in dynamic memory and case-based reasoning, this
comprehensive volume presents extended descriptions of four major
programming efforts conducted at Yale during the past several
years. Each descriptive chapter is followed by a companion chapter
containing the micro program version of the information.
The authors emphasize that the only true way to learn and
understand any AI program is to program it yourself. To this end,
the book develops a deeper and richer understanding of the content
through LISP programming instructions that allow the running,
modification, and extension of the micro programs developed by the
authors.
First published in 1986. Routledge is an imprint of Taylor &
Francis, an informa company.
Artificial intelligence research has thrived in the years since
this best-selling AI classic was first published. The revision
encompasses these advances by adapting its coding to Common Lisp,
the well-documented language standard, and by bringing together
even more useful programming tools. Today's programmers in AI will
find this volume's superior coverage of programming techniques and
easily applicable style anything but common.