The 1997 Les Houches workshop on "Dynamical Networks in Physics and Biology" was the third in a series of meetings "At the Frontier between Physics and Biology." Our objective with these workshops is to create a truly interdisciplinary forum for researchers working on outstanding problems in biology, but using different approaches (physical, chemical or biological). Generally speaking, the biologists are trained in the particular and motivated by the specifics, while, in contrast, the physicists deal with generic and "universal" models. All agree about the necessity of developing "robust" models. The specific aim of the workshop was to bridge the gap between physics and biology in the particular field of interconnected dynamical networks. The proper functioning of a living organism of any complexity requires the coordinated activity of a great number of "units." Units, or, in physical terms, degrees of freedom that couple to one another, typically form networks. The physical or biological properties of interconnected networks may drastically differ from those of the individual units: the whole is not simply an assembly of its parts, as the following example demonstrates. Above a certain (critical) concentration, metallic islands randomly distributed in an insulating matrix form an interconnected network. At this point the macroscopic conductivity of the system becomes finite and the amorphous metal is capable of carrying current. The value of the macroscopic conductivity is typically very different from the conductivity of the individual metallic islands.
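The percolation picture in this blurb - isolated islands that suddenly connect into a current-carrying network above a critical concentration - can be sketched in a few lines of code. This is a toy illustration, not material from the workshop proceedings; the grid size, trial count and concentrations are arbitrary choices:

```python
import random
from collections import deque

def spans(grid):
    """BFS from occupied top-row sites; True if any bottom-row site is reached."""
    n = len(grid)
    seen = set((0, j) for j in range(n) if grid[0][j])
    queue = deque(seen)
    while queue:
        i, j = queue.popleft()
        if i == n - 1:
            return True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < n and grid[a][b] and (a, b) not in seen:
                seen.add((a, b))
                queue.append((a, b))
    return False

def spanning_fraction(p, n=30, trials=200, seed=0):
    """Fraction of random n-by-n grids (site 'metallic' with probability p)
    containing a cluster that spans from top to bottom."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        grid = [[rng.random() < p for _ in range(n)] for _ in range(n)]
        hits += spans(grid)
    return hits / trials
```

Below the critical concentration (about 0.59 for this square-lattice model) essentially no sample conducts; above it essentially every sample does, mirroring the sharp onset of macroscopic conductivity described above.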
As we move around in our environment, and interact with it, many of the most important problems we face involve the processing of spatial information. We have to be able to navigate by perceiving and remembering the locations and orientations of the objects around us relative to ourselves; we have to sense and act upon these objects; and we need to move through space to position ourselves in favourable locations or to avoid dangerous ones. While this appears so simple that we don't even think about it, the difficulty of solving these problems has been shown in the repeated failure of artificial systems to perform these kinds of tasks efficiently. In contrast, humans and other animals routinely overcome these problems every single day. This book examines some of the neural substrates and mechanisms that support these remarkable abilities. The hippocampus and the parietal cortex have been implicated in various core spatial behaviours, such as the ability to localise an object and navigate to it. Damage to these areas in humans and animals leads to impairment of these spatial functions. This collection of papers, written by internationally recognized experts in the field, reviews the evidence that each area is involved in spatial cognition, examines the mechanisms underlying the generation of spatial behaviours, and considers the relative roles of the parietal and hippocampal areas, including how each interacts with the other. The papers integrate a wide range of theoretical and experimental approaches, and touch on broader issues relating to memory and imagery. As such, this book represents the state of the art of current research into the neural basis of spatial cognition. It should be of interest to anyone - researchers or graduate students - working in the areas of cognitive neuroscience, neuroanatomy, neuropsychology, and cognition generally.
Providing an in-depth treatment of neural network models, this volume explains and proves the main results in a clear and accessible way. It presents the essential principles of nonlinear dynamics as derived from neurobiology, and investigates the stability, convergence behaviour and capacity of networks. Also included are sections on stochastic networks and simulated annealing, presented using Markov processes rather than statistical physics, and a chapter on backpropagation. Each chapter ends with a suggested project designed to help the reader develop an integrated knowledge of the theory, placing it within a practical application domain. Neural Network Models: Theory and Projects concentrates on the essential parameters and results that will enable the reader to design hardware or software implementations of neural networks and to assess critically existing commercial products.
About This Book This book is about training methods - in particular, fast second-order training methods - for multi-layer perceptrons (MLPs). MLPs (also known as feed-forward neural networks) are the most widely used class of neural network. Over the past decade MLPs have achieved increasing popularity among scientists, engineers and other professionals as tools for tackling a wide variety of information processing tasks. In common with all neural networks, MLPs are trained (rather than programmed) to carry out the chosen information processing function. Unfortunately, the 'traditional' method for training MLPs - the well-known backpropagation method - is notoriously slow and unreliable when applied to many practical tasks. The development of fast and reliable training algorithms for MLPs is one of the most important areas of research within the entire field of neural computing. The main purpose of this book is to bring to a wider audience a range of alternative methods for training MLPs, methods which have proved orders of magnitude faster than backpropagation when applied to many training tasks. The book also addresses the well-known 'local minima' problem, and explains ways in which fast training methods can be combined with strategies for avoiding (or escaping from) local minima. All the methods described in this book have a strong theoretical foundation, drawing on such diverse mathematical fields as classical optimisation theory, homotopy theory and stochastic approximation theory.
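The speed gap between first-order and second-order training that this blurb describes can be seen even on a toy quadratic error surface (a sketch with made-up numbers, not a method from the book): plain gradient descent creeps along an ill-conditioned valley, while one Newton step uses curvature (Hessian) information to jump straight to the minimum.

```python
import numpy as np

# Quadratic loss L(w) = 0.5 * w^T A w with an ill-conditioned A,
# a stand-in for an MLP's error surface near a minimum.
A = np.diag([1.0, 100.0])

def grad(w):
    return A @ w

w_gd = np.array([1.0, 1.0])
w_newton = np.array([1.0, 1.0])
lr = 0.01                 # small enough to stay stable on the stiff direction
H_inv = np.linalg.inv(A)  # Hessian inverse (constant for a quadratic)

# 100 first-order steps still leave error along the shallow direction...
for _ in range(100):
    w_gd = w_gd - lr * grad(w_gd)

# ...while a single Newton step w <- w - H^{-1} grad solves a quadratic exactly.
w_newton = w_newton - H_inv @ grad(w_newton)
```

Practical second-order methods for MLPs avoid forming the full Hessian inverse, but the motivating contrast is the same.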
This volume collects together refereed versions of twenty-five papers presented at the 4th Neural Computation and Psychology Workshop, held at University College London in April 1997. The "NCPW" workshop series is now well established as a lively forum which brings together researchers from such diverse disciplines as artificial intelligence, mathematics, cognitive science, computer science, neurobiology, philosophy and psychology to discuss their work on connectionist modelling in psychology. The general theme of this fourth workshop in the series was "Connectionist Representations," a topic which not only attracted participants from all these fields, but from all over the world as well. From the point of view of the conference organisers, focusing on representational issues had the advantage that it immediately involved researchers from all branches of neural computation. Being so central both to psychology and to connectionist modelling, it is one area about which everyone in the field has their own strong views, and the diversity and quality of the presentations and, just as importantly, the discussion which followed them, certainly attested to this.
This book constitutes the refereed proceedings of the 6th International Conference on Evolutionary Programming, EP 97, held in Indianapolis, IN, USA, in April 1997.
A fundamental objective of Artificial Intelligence (AI) is the creation of intelligent computer programs. In more modest terms AI is simply concerned with expanding the repertoire of computer applications into new domains and to new levels of efficiency. The motivation for this effort comes from many sources. At a practical level there is always a demand for achieving things in more efficient ways. Equally, there is the technical challenge of building programs that allow a machine to do something a machine has never done before. Both of these desires are contained within AI and both provide the inspirational force behind its development. In terms of satisfying both of these desires there can be no better example than machine learning. Machines that can learn have an in-built efficiency. The same software can be applied in many applications and in many circumstances. The machine can adapt its behaviour so as to meet the demands of new, or changing, environments without the need for costly re-programming. In addition, a machine that can learn can be applied in new domains with the genuine potential for innovation. In this sense a machine that can learn can be applied in areas where little is known about possible causal relationships, and even in circumstances where causal relationships are judged not to exist. This last aspect is of major significance when considering machine learning as applied to financial forecasting.
In almost all areas of science and engineering, the use of computers and microcomputers has, in recent years, transformed entire subject areas. What was not even considered possible a decade or two ago is now not only possible but is also part of everyday practice. As a result, a new approach usually needs to be taken in order to get the best out of a situation. What is required is now a computer's eye view of the world. However, all is not rosy in this new world. Humans tend to think in two or three dimensions at most, whereas computers can, without complaint, work in n dimensions, where n, in practice, gets bigger and bigger each year. As a result of this, more complex problem solutions are being attempted, whether or not the problems themselves are inherently complex. If information is available, it might as well be used, but what can be done with it? Straightforward, traditional computational solutions to this new problem of complexity can, and usually do, produce very unsatisfactory, unreliable and even unworkable results. Recently, however, artificial neural networks, which have been found to be very versatile and powerful when dealing with difficulties such as nonlinearities, multivariate systems and high data content, have shown their strengths in dealing with complex problems. This volume brings together a collection of top researchers from around the world in the field of artificial neural networks.
This publication deals with the application of advanced digital signal processing techniques and neural networks to various telecommunication problems. The editor presents the latest research results in areas such as arrays, mobile channels, acoustic echo cancellation, speech coding and adaptive filtering in varying environments.
Evolutionary Learning Algorithms for Neural Adaptive Control is an advanced textbook, which investigates how neural networks and genetic algorithms can be applied to difficult adaptive control problems which conventional methods are either unable to solve, or for which they cannot provide satisfactory results. It focuses on the principles involved, rather than on the modelling of the applications themselves, and therefore provides the reader with a good introduction to the fundamental issues involved.
This book includes a selection of twelve carefully revised papers chosen from the papers accepted for presentation at the 4th IEEE/Nagoya-University World Wisepersons Workshop held in Nagoya in November 1995.
Artificial "neural networks" are widely used as flexible models for classification and regression applications, but questions remain about how the power of these models can be safely exploited when training data is limited. This book demonstrates how Bayesian methods allow complex neural network models to be used without fear of the "overfitting" that can occur with traditional training methods. Insight into the nature of these complex Bayesian models is provided by a theoretical investigation of the priors over functions that underlie them. A practical implementation of Bayesian neural network learning using Markov chain Monte Carlo methods is also described, and software for it is freely available over the Internet. Presupposing only basic knowledge of probability and statistics, this book should be of interest to researchers in statistics, engineering, and artificial intelligence.
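The Markov chain Monte Carlo approach this blurb refers to can be illustrated on a one-parameter toy model. This is a sketch under simplified assumptions - a single weight, a fixed noise level, invented data - and is not the book's software; the full method applies the same idea to entire network weight vectors.

```python
import math
import random

# Toy Bayesian regression: y = w*x + noise, Gaussian prior on w.
# Random-walk Metropolis draws samples from the posterior over w.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.1, 0.9, 2.1, 2.9]   # synthetic data generated near w = 1
sigma, prior_sigma = 0.5, 10.0

def log_post(w):
    """Log posterior (up to a constant): Gaussian likelihood + Gaussian prior."""
    ll = sum(-(y - w * x) ** 2 / (2 * sigma ** 2) for x, y in zip(xs, ys))
    return ll - w ** 2 / (2 * prior_sigma ** 2)

rng = random.Random(0)
w, samples = 0.0, []
for step in range(5000):
    cand = w + rng.gauss(0, 0.5)              # random-walk proposal
    if math.log(rng.random()) < log_post(cand) - log_post(w):
        w = cand                               # Metropolis accept
    if step >= 1000:                           # discard burn-in
        samples.append(w)

posterior_mean = sum(samples) / len(samples)
```

Instead of a single point estimate, the chain yields a whole posterior distribution over the weight, which is what guards against the overfitting discussed above.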
Concepts for Neural Networks - A Survey provides a wide-ranging survey of concepts relating to the study of neural networks. It includes chapters explaining the basics of both artificial neural networks and the mathematics of neural networks, as well as chapters covering the more philosophical background to the topic and consciousness. There is also significant emphasis on the practical use of the techniques described in the area of robotics. Containing contributions from some of the world's leading specialists in their fields (including Dr. Ton Coolen and Professor Igor Aleksander), this volume will provide the reader with a good, general introduction to the basic concepts needed to understand and use neural network technology.
This book constitutes the refereed proceedings of the sixth International Conference on Artificial Neural Networks - ICANN 96, held in Bochum, Germany in July 1996.
This volume presents a collection of revised refereed papers selected from the contributions presented at the European Conference on Artificial Evolution, AE '95, held in Brest, France, in September 1995; also included are a few papers from the predecessor conference, AE '94.
This book constitutes the refereed proceedings of the International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition, EMMCVPR'97, held in Venice, Italy, in May 1997.
This book presents 14 rigorously reviewed revised papers selected from more than 50 submissions for the 1994 IEEE/Nagoya-University World Wisepersons Workshop, WWW'94, held in August 1994 in Nagoya, Japan. The combination of approaches based on fuzzy logic, neural networks and genetic algorithms is expected to open a new paradigm of machine learning for the realization of human-like information processing systems. The first six papers in this volume are devoted to the combination of fuzzy logic and neural networks; four papers are on how to combine fuzzy logic and genetic algorithms. Four papers investigate challenging applications of fuzzy systems and of fuzzy-genetic algorithms.
A learning system can be defined as a system which can adapt its behaviour to become more effective at a particular task or set of tasks. It consists of an architecture with a set of variable parameters and an algorithm. Learning systems are useful in many fields, one of the major areas being control and system identification. This work covers the major aspects of learning systems: system architecture, choice of performance index and methods of measuring error. Major learning algorithms are explained, including proofs of convergence. Artificial neural networks, which are an important class of learning systems and have enjoyed rapidly increasing popularity, are discussed. Where appropriate, examples are given to demonstrate the practical use of techniques developed in the text. System identification and control using multi-layer networks and CMAC (Cerebellar Model Articulation Controller) are also presented.
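The adaptation loop described here - adjusting variable parameters to improve a performance index - can be sketched with the classic LMS rule applied to system identification. This is an illustrative toy, not code from the book; the "unknown" system coefficients and the learning rate are made up:

```python
import random

# Identify the unknown linear system y = 2*x1 - 3*x2 from input/output
# observations, using the LMS (least-mean-squares) adaptation rule.
rng = random.Random(1)
w = [0.0, 0.0]   # variable parameters of the learning system
lr = 0.05        # learning rate

for _ in range(2000):
    x = [rng.uniform(-1, 1), rng.uniform(-1, 1)]   # excitation input
    y_true = 2 * x[0] - 3 * x[1]                   # unknown plant's output
    y_hat = w[0] * x[0] + w[1] * x[1]              # model's prediction
    err = y_true - y_hat                           # performance index: squared error
    w = [w[i] + lr * err * x[i] for i in range(2)] # LMS parameter update
```

After enough observations the parameters converge to the plant's true coefficients; multi-layer networks and CMAC generalise this same loop to nonlinear plants.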
Neural Networks presents concepts of neural-network models and techniques of parallel distributed processing in a three-step approach: - A brief overview of the neural structure of the brain and the history of neural-network modeling introduces associative memory, perceptrons, feature-sensitive networks, learning strategies, and practical applications. - The second part covers subjects like statistical physics of spin glasses, the mean-field theory of the Hopfield model, and the "space of interactions" approach to the storage capacity of neural networks. - The final part discusses nine programs with practical demonstrations of neural-network models. The software and source code in C are supplied on a 3 1/2" MS-DOS diskette and can be run with Microsoft, Borland, Turbo-C, or compatible compilers.
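The associative memory idea from the first part of the book can be shown in a few lines: store patterns with a Hebbian rule, then let the network dynamics restore a corrupted input to the nearest stored pattern. This is a minimal Hopfield-style sketch, not one of the diskette programs; the patterns and sizes are arbitrary:

```python
import numpy as np

# Two 8-unit patterns to memorise (chosen orthogonal for clean recall).
patterns = np.array([
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1,  1, 1,  1, -1, -1, -1, -1],
])

# Hebbian storage: sum of outer products, with the diagonal zeroed.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def recall(state, steps=10):
    """Repeatedly update all units by the sign of their input until the
    state settles into a stored attractor."""
    s = state.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

corrupted = patterns[0].copy()
corrupted[0] *= -1            # flip one bit of the stored pattern
restored = recall(corrupted)  # dynamics pull the state back to patterns[0]
```

The mean-field and storage-capacity analyses covered in the book's second part quantify how many such patterns a network of a given size can hold before recall breaks down.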
Neural networks is a field of research which has enjoyed rapid expansion in both the academic and industrial research communities. This volume contains papers presented at the Third Annual SNN Symposium on Neural Networks, held in Nijmegen, The Netherlands, 14-15 September 1995. The papers are divided into two sections: the first gives an overview of new developments in neurobiology, the cognitive sciences, robotics, vision and data modelling. The second presents working neural network solutions to real industrial problems, including process control, finance and marketing. The resulting volume gives a comprehensive view of the state of the art in 1995 and will provide essential reading for postgraduate students and academic/industrial researchers.
Neural networks are a computing paradigm that is finding increasing attention among computer scientists. In this book, theoretical laws and models previously scattered in the literature are brought together into a general theory of artificial neural nets. Always with a view to biology and starting with the simplest nets, it is shown how the properties of models change when more general computing elements and net topologies are introduced. Each chapter contains examples, numerous illustrations, and a bibliography. The book is aimed at readers who seek an overview of the field or who wish to deepen their knowledge. It is suitable as a basis for university courses in neurocomputing.
This volume contains the thoroughly refereed and revised papers accepted for presentation at the IJCAI '91 Workshops on Fuzzy Logic and Fuzzy Control, held during the International Joint Conference on AI at Sydney, Australia in August 1991. The 14 technical contributions are devoted to several theoretical and applied aspects of fuzzy logic and fuzzy control; they are presented in sections on theoretical aspects of fuzzy reasoning and fuzzy control, fuzzy neural networks, fuzzy control applications, fuzzy logic planning, and fuzzy circuits. In addition, there is a substantial introduction by the volume editors surveying the latest developments in the field and placing the presented papers in context.
This book presents carefully revised versions of tutorial lectures given during a School on Artificial Neural Networks for the industrial world held at the University of Limburg in Maastricht, the Netherlands.
This volume contains the proceedings of the 15th International Conference on Application and Theory of Petri Nets, held at Zaragoza, Spain in June 1994. The annual Petri net conferences are usually attended by some 150-200 Petri net experts from academia and industry all over the world.