The six contributions in Connectionist Symbol Processing address the
current tension within the artificial intelligence community between
advocates of powerful symbolic representations that lack efficient
learning procedures and advocates of relatively simple learning
procedures that lack the ability to represent complex structures
effectively. The authors seek to extend the representational power
of connectionist networks without abandoning the automatic learning
that makes these networks interesting. Aware of the huge gap that
needs to be bridged, the authors intend their contributions to be
viewed as exploratory steps toward greater representational power
for neural networks. If successful, this research could make it
possible to combine robust general-purpose learning procedures with
the inherent representations of artificial intelligence, a synthesis
that could lead to new insights into both representation and
learning.
This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition. The author introduces the basic principles of pattern recognition, describes techniques for modelling probability density functions, and discusses the properties and relative merits of the multi-layer perceptron and radial basis function network models. Designed with graduate students in mind, the text motivates the use of various forms of error functions and reviews the principal algorithms for error function minimization. Bishop also covers the fundamental topics of data processing, feature extraction, and prior knowledge, and concludes with an extensive treatment of Bayesian techniques and their applications to neural networks.
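To give a flavour of the error-function minimization the blurb mentions, the following is a minimal illustrative sketch (not taken from the book): gradient descent on a sum-of-squares error for a simple linear model. The data, learning rate, and variable names are all invented for the example.

```python
import numpy as np

# Toy example: minimise the sum-of-squares error
#   E(w) = 0.5 * sum_n (x_n @ w - t_n)^2
# by gradient descent, for a linear model y(x; w) = x @ w.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # inputs
true_w = np.array([1.5, -2.0, 0.5])           # weights used to make targets
t = X @ true_w + 0.1 * rng.normal(size=100)   # noisy targets

w = np.zeros(3)
eta = 0.01                                    # learning rate
for _ in range(500):
    grad = X.T @ (X @ w - t)                  # dE/dw for sum-of-squares error
    w -= eta * grad / len(X)                  # average gradient step

print(np.round(w, 1))                         # approximately true_w
```

The same descent loop generalizes to multi-layer networks once the gradient is computed by backpropagation rather than in closed form.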
Since its founding in 1989 by Terrence Sejnowski, Neural
Computation has become the leading journal in the field.
Foundations of Neural Computation collects, by topic, the most
significant papers that have appeared in the journal over the past
nine years. This volume of Foundations of Neural Computation, on
unsupervised learning algorithms, focuses on neural network
learning algorithms that do not require an explicit teacher. The
goal of unsupervised learning is to extract an efficient internal
representation of the statistical structure implicit in the inputs.
These algorithms provide insights into the development of the
cerebral cortex and implicit learning in humans. They are also of
interest to engineers working in areas such as computer vision and
speech recognition who seek efficient representations of raw input
data.