Theoretical Advances in Neural Computation and Learning brings
together in one volume some of the recent advances in the
development of a theoretical framework for studying neural
networks. A variety of novel techniques from disciplines such as
computer science, electrical engineering, statistics, and
mathematics have been integrated and applied to develop
ground-breaking analytical tools for such studies. This volume
emphasizes the computational issues in artificial neural networks
and compiles a set of pioneering research works, which together
establish a general framework for studying the complexity of neural
networks and their learning capabilities. This book represents one
of the first efforts to highlight these fundamental results, and
provides a unified platform for a theoretical exploration of neural
computation. Each chapter is authored by a leading researcher
or scholar who has made significant contributions to this area.
Part 1 provides a complexity-theoretic study of different models of
neural computation. Complexity measures for neural models are
introduced, and techniques for the efficient design of networks for
performing basic computations are discussed, along with analytical
tools for understanding the capabilities and limitations of neural
computation. The results describe how the
computational cost of a neural network increases with the problem
size. Equally important, these results go beyond the study of
single neural elements, and establish the computational power of
multilayer networks. Part 2 discusses concepts and results
concerning learning using models of neural computation. Basic
concepts such as VC-dimension and PAC-learning are introduced, and
recent results relating neural networks to learning theory are
derived. In addition, a number of the chapters address fundamental
issues concerning learning algorithms, such as accuracy and rate of
convergence, selection of training data, and efficient algorithms
for learning useful classes of mappings.
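To give a flavor of the results treated in Part 2 (a standard sketch
from classical PAC-learning theory under the usual assumptions, not a
statement of any particular chapter's theorem): if the class of
functions computed by a network architecture has VC-dimension d, then

\[ m = O\!\left(\frac{1}{\varepsilon}\left(d\,\ln\frac{1}{\varepsilon} + \ln\frac{1}{\delta}\right)\right) \]

training examples suffice for a learner that fits the data to achieve
error at most \varepsilon with probability at least 1 - \delta. Bounds
of this form tie the amount of training data directly to a
combinatorial measure of the network's capacity.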
For any research field to have a lasting impact, there must be a
firm theoretical foundation. Neural networks research is no
exception. Some of the foundational concepts, established several
decades ago, led to the early promise of developing machines
exhibiting intelligence. The motivation for studying such machines
comes from the fact that the brain is far more efficient in visual
processing and speech recognition than existing computers.
Undoubtedly, neurobiological systems employ very different
computational principles. The study of artificial neural networks
aims at understanding these computational principles and applying
them to the solution of engineering problems. Due to the recent
advances in both device technology and computational science, we
are currently witnessing explosive growth in the study of
neural networks and their applications. It may take many years
before we have a complete understanding of the mechanisms of
neural systems. Before this ultimate goal can be achieved, answers
are needed to important fundamental questions such as (a) what can
neural networks do that traditional computing techniques cannot,
(b) how does the complexity of the network for an application
relate to the complexity of that problem, and (c) how much training
data are required for the resulting network to learn properly?
Everyone working in the field has attempted to answer these
questions, but general solutions remain elusive. However,
encouraging progress in studying specific neural models has been
made by researchers from various disciplines.