The interdisciplinary field of cognitive science brings together elements of cognitive psychology, mathematics, perception, and linguistics. Focusing on the main areas of exploration in this field today, Cognitive Science presents comprehensive overviews of research findings and discusses new cross-over areas of interest. Contributors represent the most senior and well-established names in the field. This volume serves as a high-level introduction, with sufficient breadth to be a graduate-level text and enough depth to be a valued reference source for researchers.
The philosophy of cognitive science has recently become one of the most exciting and fastest growing domains of philosophical inquiry and analysis. Until the early 1980s, nearly all of the models developed treated cognitive processes -- like problem solving, language comprehension, memory, and higher visual processing -- as rule-governed symbol manipulation. However, this situation has changed dramatically over the last half dozen years. In that period there has been an enormous shift of attention toward connectionist models of cognition that are inspired by the network-like architecture of the brain. Because of their unique architecture and style of processing, connectionist systems are generally regarded as radically different from the more traditional symbol manipulation models.

This collection was designed to provide philosophers who have been working in the area of cognitive science with a forum for expressing their views on these recent developments. Because the symbol-manipulating paradigm has been so important to the work of contemporary philosophers, many have watched the emergence of connectionism with considerable interest. The contributors take very different stands toward connectionism, but all agree that the potential exists for a radical shift in the way many philosophers think of various aspects of cognition. Exploring this potential and other philosophical dimensions of connectionist research is the aim of this volume.
Composed of three sections, this book presents the most popular training algorithm for neural networks: backpropagation. The first section presents the theory and principles behind backpropagation as seen from different perspectives, such as statistics, machine learning, and dynamical systems. The second presents a number of network architectures that may be designed to match the general concepts of Parallel Distributed Processing with backpropagation learning. Finally, the third section shows how these principles can be applied to a number of different fields related to the cognitive sciences, including control, speech recognition, robotics, image processing, and cognitive psychology. The volume is designed to provide both a solid theoretical foundation and a set of examples that show the versatility of the concepts. Useful to experts in the field, it should also be most helpful to students seeking to understand the basic principles of connectionist learning and to engineers wanting to add neural networks in general -- and backpropagation in particular -- to their set of problem-solving methods.
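To make the algorithm concrete, here is a minimal sketch of backpropagation, not taken from the book itself: a one-hidden-layer sigmoid network trained by gradient descent on squared error to fit XOR. The layer sizes, learning rate, and number of epochs are arbitrary illustrative choices.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

# One hidden layer of 4 sigmoid units (an illustrative choice).
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
lr = 1.0

for epoch in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the squared-error derivative layer by layer.
    d_out = (out - y) * out * (1 - out)   # error signal at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # chain rule back to the hidden layer
    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # should approach [0, 1, 1, 0]
```

The backward pass is nothing more than the chain rule applied layer by layer, which is why the same procedure can be read through the statistical, machine-learning, or dynamical-systems lenses the book describes: each update is a step of gradient descent on the error surface.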
Recent years have seen an explosion of new mathematical results on learning and processing in neural networks. This body of results rests on a breadth of mathematical background that few specialists possess. In a format intermediate between a textbook and a collection of research articles, this book has been assembled to present a sample of these results and to fill in the necessary background in such areas as computability theory, computational complexity theory, the theory of analog computation, stochastic processes, dynamical systems, control theory, time-series analysis, Bayesian analysis, regularization theory, information theory, computational learning theory, and mathematical statistics.

Mathematical models of neural networks display an amazing richness and diversity. Neural networks can be formally modeled as computational systems, as physical or dynamical systems, and as statistical analyzers. Within each of these three broad perspectives there are a number of particular approaches. For each of 16 particular mathematical perspectives on neural networks, the contributing authors provide introductions to the background mathematics and address questions such as:

* Exactly what mathematical systems are used to model neural networks from the given perspective?
* What formal questions about neural networks can then be addressed?
* What are typical results that can be obtained?
* What are the outstanding open problems?

A distinctive feature of this volume is that for each perspective presented in one of the contributed chapters, the first editor has provided a moderately detailed summary of the formal results and the requisite mathematical concepts. These summaries are presented in four chapters that tie together the 16 contributed chapters: three develop a coherent view of the three general perspectives -- computational, dynamical, and statistical; the other assembles these three perspectives into a unified overview of the neural networks field.
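The three broad perspectives can be glimpsed in one tiny network. The sketch below is not an example from the volume; the logistic unit and its weights are arbitrary illustrative choices.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b = np.array([1.5, -2.0]), 0.5   # arbitrary illustrative parameters

# Computational view: the unit computes a function of its input.
x = np.array([1.0, 0.3])
print(sigmoid(w @ x + b))

# Dynamical view: feeding the output back makes an iterated map
# s -> sigmoid(1.5*s + 0.5), whose state settles to a fixed point.
s = 0.0
for _ in range(50):
    s = sigmoid(w[0] * s + b)
print(s)

# Statistical view: sigmoid(w.x + b) is exactly the model of logistic
# regression, so learning w and b is a statistical estimation problem.
```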
Written for cognitive scientists, psychologists, computer scientists, engineers, and neuroscientists, this book provides an accessible overview of how computational network models are being used to model neurobiological phenomena. Each chapter presents a representative example of how biological data and network models interact in the authors' research. The biological phenomena covered are network- or circuit-level phenomena in humans and other higher-order vertebrates.
What makes people smarter than computers? These volumes by a pioneering neurocomputing group suggest that the answer lies in the massively parallel architecture of the human mind. They describe a new theory of cognition called connectionism that is challenging the idea of symbolic computation that has traditionally been at the center of debate in theoretical discussions about the mind. The authors' theory assumes the mind is composed of a great number of elementary units connected in a neural network. Mental processes are interactions between these units, which excite and inhibit each other in parallel rather than through sequential operations. In this context, knowledge can no longer be thought of as stored in localized structures; instead, it consists of the connections between pairs of units that are distributed throughout the network. Volume 1 lays the foundations of this exciting theory of parallel distributed processing, while Volume 2 applies it to a number of specific issues in cognitive science and neuroscience, with chapters describing models of aspects of perception, memory, language, and thought.
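The mechanism this blurb describes, many simple units exchanging excitation and inhibition over weighted connections in parallel, fits in a few lines of code. The following is a minimal sketch under stated assumptions: a random symmetric weight matrix and a tanh squashing function, neither of which is prescribed by the volumes. All the "knowledge" such a network has lives in the connection matrix W.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units = 8
W = rng.normal(scale=0.5, size=(n_units, n_units))
W = (W + W.T) / 2          # symmetric connections: positive = excitatory, negative = inhibitory
np.fill_diagonal(W, 0.0)   # no self-connections

activation = rng.uniform(-1, 1, size=n_units)
for step in range(20):
    net_input = W @ activation        # every unit sums weighted input from all the others
    activation = np.tanh(net_input)   # all units update together, i.e. in parallel
print(np.round(activation, 3))
```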