Neural networks provide a powerful new technology to model and control nonlinear and complex systems. In this book, the authors present a detailed formulation of neural networks from the information-theoretic viewpoint. They show how this perspective provides new insights into the design theory of neural networks. In particular they show how these methods may be applied to the topics of supervised and unsupervised learning including feature extraction, linear and non-linear independent component analysis, and Boltzmann machines. Readers are assumed to have a basic understanding of neural networks, but all the relevant concepts from information theory are carefully introduced and explained. Consequently, readers from several different scientific disciplines, notably cognitive scientists, engineers, physicists, statisticians, and computer scientists, will find this to be a very valuable introduction to this topic.
Humans are often extraordinary at performing practical reasoning. There are cases where the human computer, slow as it is, is faster than any artificial intelligence system. Are we faster because of the way we perceive knowledge as opposed to the way we represent it? The authors address this question by presenting neural network models that integrate the two most fundamental phenomena of cognition: our ability to learn from experience, and our ability to reason from what has been learned. This book is the first to offer a self-contained presentation of neural network models for a number of computer science logics, including modal, temporal, and epistemic logics. By using a graphical presentation, it explains neural networks through a sound neural-symbolic integration methodology, and it focuses on the benefits of integrating effective robust learning with expressive reasoning capabilities. The book will be invaluable reading for academic researchers, graduate students, and senior undergraduates in computer science, artificial intelligence, machine learning, cognitive science and engineering. It will also be of interest to computational logicians, and professional specialists on applications of cognitive, hybrid and artificial intelligence systems.
Information and communication technologies are increasingly prolific worldwide, exposing the issues and challenges of adapting existing living environments to the shift in technological communication infrastructure. "Reflexing Interfaces" discusses the application of complex theories in information and communication technology, with a focus on the interaction between living systems and information technologies. This innovative view provides researchers, scholars, and IT professionals with a fundamental resource on such compelling topics as virtual reality; fuzzy logic systems; and complexity science in artificial intelligence, evolutionary computation, neural networks, and 3-D modeling.
One of the most challenging and fascinating problems in the theory of neural nets is that of asymptotic behavior: how a system behaves as time proceeds. This is of particular relevance to many practical applications. Here we focus on association, generalization, and representation. We turn to the last topic first. The introductory chapter, "Global Analysis of Recurrent Neural Networks," by Andreas Herz presents an in-depth analysis of how to construct a Lyapunov function for various types of dynamics and neural coding. It includes a review of the recent work with John Hopfield on integrate-and-fire neurons with local interactions. The chapter "Receptive Fields and Maps in the Visual Cortex: Models of Ocular Dominance and Orientation Columns," by Ken Miller, explains how the primary visual cortex may asymptotically gain its specific structure through a self-organization process based on Hebbian learning. His argument has since been shown to be readily generalizable.
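The Lyapunov-function construction mentioned in this blurb can be illustrated with the classic Hopfield energy: for a symmetric weight matrix with zero diagonal, asynchronous sign updates never increase the network energy. Below is a minimal NumPy sketch (a generic textbook illustration, not the book's own analysis; the pattern count and network size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Store two random +/-1 patterns in a Hopfield network via the Hebb rule.
n = 20
patterns = rng.choice([-1, 1], size=(2, n))
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0.0)  # symmetric, zero diagonal -> energy is a Lyapunov function

def energy(s):
    return -0.5 * s @ W @ s

# Asynchronous sign updates: the energy can never increase.
s = rng.choice([-1, 1], size=n)
energies = [energy(s)]
for _ in range(5):
    for i in rng.permutation(n):
        s[i] = 1 if W[i] @ s >= 0 else -1
        energies.append(energy(s))

print("energy trace:", energies[0], "->", energies[-1])
```

Each single-neuron flip changes the energy by -2|h_i| (with h_i the local field), so the trace is monotonically non-increasing and the dynamics settle into a fixed point, which is exactly the role of a Lyapunov function.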
Aimed at graduates and potential researchers, this is a comprehensive introduction to the mathematical aspects of spin glasses and neural networks. It should be useful to mathematicians in probability theory and theoretical physics, and to engineers working in theoretical computer science.
Increasingly, neural networks are used and implemented in a wide range of fields and have become useful tools in probabilistic analysis and prediction theory. This book, unique in the literature, studies the application of neural networks to the analysis of time series of sea data, namely significant wave heights and sea levels. The particular problem examined as a starting point is the reconstruction of missing data, a general problem that appears in many cases of data analysis. Specific topics covered include:
* Presentation of general information on the phenomenology of waves and tides, as well as related technical details of the various measuring processes used in the study
* Description of the model of wind waves (WAM) used to determine the spectral function of waves and predict the behavior of significant wave heights (SWH); a comparison is made of the reconstruction of SWH time series obtained by means of neural network algorithms versus SWH computed by WAM
* Principles of artificial neural networks, approximation theory, and extreme-value theory necessary to understand the main applications of the book
* Application of artificial neural networks (ANN) to reconstruct SWH and sea levels (SL)
* Comparison of the ANN approach and the approximation operator approach, displaying the advantages of ANN
* Examination of extreme-event analysis applied to the time series of sea data in specific locations
* Generalizations of ANN to treat analogous problems for other types of phenomena and data
This book, a careful blend of theory and applications, is an excellent introduction to the use of ANN, which may encourage readers to try analogous approaches in other important application areas. Researchers, practitioners, and advanced graduate students in neural networks, hydraulic and marine engineering, prediction theory, and data analysis will benefit from the results and novel ideas presented in this useful resource.
This volume is devoted to interactive and iterative processes of decision-making: I2 Fuzzy Decision Making, in brief. Decision-making is inherently interactive. Fuzzy sets help realize human-machine communication in an efficient way by facilitating a two-way interaction in a friendly and transparent manner. Human-centric interaction is of paramount relevance as a leading design principle of decision support systems. The volume provides the reader with updated and in-depth material on the conceptually appealing and practically sound methodology and practice of I2 Fuzzy Decision Making. The book engages a wealth of methods of fuzzy sets and Granular Computing, and brings new concepts, architectures, and practices of fuzzy decision-making, providing the reader with various application studies. The book is aimed at a broad audience of researchers and practitioners in numerous disciplines in which decision-making processes play a pivotal role and serve as a vehicle to produce solutions to existing problems. Those involved in operations research, management, various branches of engineering, social sciences, logistics, and economics will benefit from the exposure to the subject matter. The book may serve as a useful and timely reference material for graduate students and senior undergraduate students in courses on decision-making, Computational Intelligence, operations research, pattern recognition, risk management, and knowledge-based systems.
This book is devoted to a novel conceptual theoretical framework of neuroscience and is an attempt to show that we can postulate a very small number of assumptions and utilize their heuristics to explain a very large spectrum of brain phenomena. The major assumption made in this book is that inborn and acquired neural automatisms are generated according to the same functional principles. Accordingly, the principles that have been revealed experimentally to govern inborn motor automatisms, such as locomotion and scratching, are used to elucidate the nature of acquired or learned automatisms. This approach allowed me to apply the language of control theory to describe functions of biological neural networks. You, the reader, can judge the logic of the conclusions regarding brain phenomena that the book derives from these assumptions. If you find the argument flawless, one can call it common sense and consider that to be the best praise for a chain of logical conclusions. For the sake of clarity, I have attempted to make this monograph as readable as possible. Special attention has been given to describing some of the concepts of optimal control theory in such a way that it will be understandable to a biologist or physician. I have also included plenty of illustrative examples and references designed to demonstrate the appropriateness and applicability of these conceptual theoretical notions for the neurosciences.
Computation in Neurons and Neural Systems contains the collected papers of the 1993 Conference on Computation and Neural Systems, held July 31 to August 7 in Washington, DC. These papers represent a cross-section of state-of-the-art research in the field of computational neuroscience, and include coverage of analysis and modeling work as well as results of new biological experimentation.
Since the appearance of Vol. 1 of Models of Neural Networks in 1991, the theory of neural nets has focused on two paradigms: information coding through coherent firing of the neurons and functional feedback. Information coding through coherent neuronal firing exploits time as a cardinal degree of freedom. This capacity of a neural network rests on the fact that the neuronal action potential is a short, say 1 ms, spike, localized in space and time. Spatial as well as temporal correlations of activity may represent different states of a network. In particular, temporal correlations of activity may express that neurons process the same "object" of, for example, a visual scene by spiking at the very same time. The traditional description of a neural network through a firing rate, the famous S-shaped curve, presupposes a wide time window of, say, at least 100 ms. It thus fails to exploit the capacity to "bind" sets of coherently firing neurons for the purpose of both scene segmentation and figure-ground segregation. Feedback is a dominant feature of the structural organization of the brain. Recurrent neural networks have been studied extensively in the physical literature, starting with the groundbreaking work of John Hopfield (1982).
This book proposes soft computing techniques for segmenting real-life images in applications such as image processing, image mining, video surveillance, and intelligent transportation systems. The book suggests hybrids deriving from three main approaches: fuzzy systems, primarily used for handling real-life problems that involve uncertainty; artificial neural networks, usually applied for machine cognition, learning, and recognition; and evolutionary computation, mainly used for search, exploration, efficient exploitation of contextual information, and optimization. The contributed chapters discuss both the strengths and the weaknesses of the approaches, and the book will be valuable for researchers and graduate students in the domains of image processing and computational intelligence.
Human Face Recognition Using Third-Order Synthetic Neural Networks explores the viability of applying high-order synthetic neural network technology to transformation-invariant recognition of complex visual patterns. High-order networks require little training data (hence, short training times) and have been used to perform transformation-invariant recognition of relatively simple visual patterns, achieving very high recognition rates. The successful results of these methods provided inspiration to address more practical problems which have grayscale as opposed to binary patterns (e.g., alphanumeric characters, aircraft silhouettes) and are also more complex in nature as opposed to purely edge-extracted images - human face recognition is such a problem. Human Face Recognition Using Third-Order Synthetic Neural Networks serves as an excellent reference for researchers and professionals working on applying neural network technology to the recognition of complex visual patterns.
This book and software package provide a complement to the traditional data analysis tools already widely available. It presents an introduction to the analysis of data using neural networks. Neural network functions discussed include multilayer feed-forward networks using error back-propagation, genetic algorithm-neural network hybrids, generalized regression neural networks, learning vector quantization networks, and self-organizing feature maps. In an easy-to-use, Windows-based environment it offers a wide range of data analytic tools which are not usually found together: these include genetic algorithms, probabilistic networks, as well as a number of related techniques that support these - notably, fractal dimension analysis, coherence analysis, and mutual information analysis. The text presents a number of worked examples and case studies using Simulnet, the software package which comes with the book. Readers are assumed to have a basic understanding of computers and elementary mathematics. With this background, a reader will find themselves quickly conducting sophisticated hands-on analyses of data sets.
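Among the analysis tools this blurb lists, mutual information analysis admits a compact illustration. The sketch below is a generic histogram-based estimator in NumPy, not Simulnet itself; the bin count and the test signals are arbitrary assumptions:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of the mutual information I(X;Y), in bits."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                    # joint distribution estimate
    px = pxy.sum(axis=1, keepdims=True)          # marginal of X
    py = pxy.sum(axis=0, keepdims=True)          # marginal of Y
    nz = pxy > 0                                 # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
x = rng.normal(size=5000)
noise = rng.normal(size=5000)

# A deterministic relation carries far more information than an independent one.
print("dependent:  ", mutual_information(x, np.sin(3 * x)))  # clearly positive
print("independent:", mutual_information(x, noise))          # near zero
```

Note that the histogram estimator has a small positive bias for independent data, so the second value hovers slightly above zero rather than at it.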
Papers comprising this volume were presented at the first IEEE Conference on [title], held in Denver, CO, in November 1987. As the limits of the digital computer become apparent, interest in neural networks has intensified. Ninety contributions discuss what neural networks can do, addressing topics that in
"Takagi-Sugeno Fuzzy Systems Non-fragile H-infinity Filtering" investigates the problem of non-fragile H-infinity filter design for Takagi-Sugeno (T-S) fuzzy systems. Given a T-S fuzzy system, the objective of this book is to design an H-infinity filter with gain variations such that the filtering error system guarantees a prescribed H-infinity performance level. Furthermore, it demonstrates that the solution of the non-fragile H-infinity filter design problem can be obtained by solving a set of linear matrix inequalities (LMIs).
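The LMI approach this blurb refers to rests on Lyapunov-type matrix inequalities. A full T-S fuzzy filter design requires a semidefinite-programming solver, but the underlying idea can be shown in plain NumPy: for a Schur-stable error matrix A (a toy stand-in, not a system from the book), the series solution of the discrete Lyapunov equation yields a P > 0 certifying the inequality A^T P A - P < 0:

```python
import numpy as np

# A Schur-stable matrix (spectral radius < 1) -- a toy stand-in for the
# filtering error dynamics; real T-S fuzzy LMI designs need an SDP solver.
A = np.array([[0.5, 0.2],
              [0.0, 0.6]])
Q = np.eye(2)

# Solve the discrete Lyapunov equation A^T P A - P + Q = 0 via its series
# P = sum_k (A^T)^k Q A^k, which converges because rho(A) < 1.
P = np.zeros_like(Q)
Ak = np.eye(2)
for _ in range(200):
    P += Ak.T @ Q @ Ak
    Ak = A @ Ak

residual = A.T @ P @ A - P + Q
print("P eigenvalues: ", np.linalg.eigvalsh(P))    # all positive: P > 0
print("max |residual|:", np.abs(residual).max())   # ~0: A^T P A - P = -Q < 0
```

Since the residual vanishes, A^T P A - P equals -Q, which is negative definite; this pair (P, inequality) is exactly the kind of feasibility certificate an LMI solver searches for in the filter-design problem.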
Micromechanical manufacturing based on microequipment creates new possibilities in goods production. If microequipment sizes are comparable to the sizes of the microdevices to be produced, it is possible to decrease the cost of production drastically. The main components of the production cost - material, energy, space consumption, equipment, and maintenance - decrease with the scaling down of equipment sizes. To obtain really inexpensive production, labor costs must be reduced to almost zero. For this purpose, fully automated microfactories will be developed. To create fully automated microfactories, we propose using artificial neural networks having different structures. The simplest perceptron-like neural network can be used at the lowest levels of microfactory control systems. Adaptive Critic Design, based on neural network models of the microfactory objects, can be used for manufacturing process optimization, while associative-projective neural networks and networks like ART could be used for the highest levels of control systems. We have examined the performance of different neural networks in traditional image recognition tasks and in problems that appear in micromechanical manufacturing. We and our colleagues also have developed an approach to microequipment creation in the form of sequential generations. Each subsequent generation must be of a smaller size than the previous ones and must be made by previous generations. Prototypes of first-generation microequipment have been developed and assessed.
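The "simplest perceptron-like neural network" mentioned here for low-level control can be sketched with the classic Rosenblatt learning rule on linearly separable toy data (a generic illustration; the data, the reference hyperplane, and the margin filter are arbitrary assumptions, not the book's control tasks):

```python
import numpy as np

rng = np.random.default_rng(2)

# Separable toy data with a guaranteed margin around a known hyperplane.
w_true, b_true = np.array([1.5, -1.0]), 0.3
X = rng.normal(size=(300, 2))
scores = X @ w_true + b_true
mask = np.abs(scores) > 0.3          # drop points too close to the boundary
X, y = X[mask], np.where(scores[mask] > 0, 1, -1)

# Rosenblatt perceptron rule: update weights only on misclassified samples.
w, b = np.zeros(2), 0.0
for _ in range(100):                 # epochs
    errors = 0
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:   # misclassified or on the boundary
            w += yi * xi
            b += yi
            errors += 1
    if errors == 0:                  # clean pass: converged
        break

pred = np.where(X @ w + b > 0, 1, -1)
print("training accuracy:", (pred == y).mean())
```

Because the data are separable with a positive margin, the perceptron convergence theorem guarantees the loop terminates after a bounded number of updates.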
This book offers a new, theoretical approach to information dynamics, i.e., information processing in complex dynamical systems. The presentation establishes a consistent theoretical framework for the problem of discovering knowledge behind empirical, dynamical data and addresses applications in information processing and coding in dynamical systems. This will be an essential reference for those in neural computing, information theory, nonlinear dynamics and complex systems modeling.
In recent years there has been tremendous activity in computational neuroscience resulting from two parallel developments. On the one hand, our knowledge of real nervous systems has increased dramatically over the years; on the other, there is now enough computing power available to perform realistic simulations of actual neural circuits. This is leading to a revolution in quantitative neuroscience, which is attracting a growing number of scientists from non-biological disciplines. These scientists bring with them expertise in signal processing, information theory, and dynamical systems theory that has helped transform our ways of approaching neural systems. New developments in experimental techniques have enabled biologists to gather the data necessary to test these new theories. While we do not yet understand how the brain sees, hears or smells, we do have testable models of specific components of visual, auditory, and olfactory processing. Some of these models have been applied to help construct artificial vision and hearing systems. Similarly, our understanding of motor control has grown to the point where it has become a useful guide in the development of artificial robots. Many neuroscientists believe that we have only scratched the surface, and that a more complete understanding of biological information processing is likely to lead to technologies whose impact will propel another industrial revolution. Neural Systems: Analysis and Modeling contains the collected papers of the 1991 Conference on Analysis and Modeling of Neural Systems (AMNS), and the papers presented at the satellite symposium on compartmental modeling, held July 23-26, 1992, in San Francisco, California. The papers included present an update of the most recent developments in quantitative analysis and modeling techniques for the study of neural systems.
The theoretical foundations of Neural Networks and Analog Computation conceptualize neural networks as a particular type of computer consisting of multiple assemblies of basic processors interconnected in an intricate structure. Examining these networks under various resource constraints reveals a continuum of computational devices, several of which coincide with well-known classical models. On a mathematical level, the treatment of neural computations not only enriches the theory of computation but also explicates the computational complexity associated with biological networks, adaptive engineering tools, and related models from the fields of control theory and nonlinear dynamics. The material in this book will be of interest to researchers in a variety of engineering and applied sciences disciplines. In addition, the work may provide the base of a graduate-level seminar in neural networks for computer science students.
International Conference Intelligent Network and Intelligence in Networks (2IN97), French Ministry of Telecommunication, 20 Avenue de Segur, Paris, France, September 2-5, 1997. Organizer: IFIP WG 6.7 - Intelligent Networks. Sponsorship: IEEE, Alcatel, Ericsson, France Telecom, Nokia, Nordic Teleoperators, Siemens, Telecom Finland, Lab. PRiSM. Aim of the conference: to identify and study current issues related to the development of intelligent capabilities in networks, including the development and distribution of services in broadband and mobile networks. This conference belongs to a series of IFIP conferences on Intelligent Networks. The first took place in Lappeenranta in August 1994, the second in Copenhagen in August 1995. The proceedings of both events have been published by Chapman & Hall. IFIP Working Group 6.7 on IN has concentrated on the research and development of Intelligent Network architectures. At first, the activities concentrated on service creation, service management, database issues, feature interaction, IN performance, and advanced signalling for broadband services. Later, the research activities turned towards the distribution of intelligence in networks and IN applications to multimedia and mobility. The market issues of new services have also been studied. From the system development point of view, topics from OMG and TINA-C have been considered.
Deep Learning for Robot Perception and Cognition introduces a broad range of topics and methods in deep learning for robot perception and cognition together with end-to-end methodologies. The book provides the conceptual and mathematical background needed for approaching a large number of robot perception and cognition tasks from an end-to-end learning point-of-view. The book is suitable for students, university and industry researchers and practitioners in Robotic Vision, Intelligent Control, Mechatronics, Deep Learning, Robotic Perception and Cognition tasks.
Artificial neural networks are used to model systems that receive inputs and produce outputs. The relationships between the inputs and outputs and the representation parameters are critical issues in the design of related engineering systems, and sensitivity analysis concerns methods for analyzing these relationships. Perturbations of neural networks are caused by machine imprecision, and they can be simulated by embedding disturbances in the original inputs or connection weights, allowing us to study the characteristics of a function under small perturbations of its parameters. This is the first book to present a systematic description of sensitivity analysis methods for artificial neural networks. It covers sensitivity analysis of multilayer perceptron neural networks and radial basis function neural networks, two widely used models in the machine learning field. The authors examine the applications of such analysis in tasks such as feature selection, sample reduction, and network optimization. The book will be useful for engineers applying neural network sensitivity analysis to solve practical problems, and for researchers interested in foundational problems in neural networks.
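The perturbation-based sensitivity analysis this blurb describes can be sketched for a small multilayer perceptron: inject random disturbances into the connection weights and record the mean output deviation. A minimal Monte-Carlo illustration (the network sizes and noise scales are arbitrary assumptions, not the book's method):

```python
import numpy as np

rng = np.random.default_rng(3)

# A small fixed MLP: 4 inputs -> 8 hidden (tanh) -> 1 output.
W1 = rng.normal(scale=0.5, size=(4, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def mlp(x, W1, W2):
    return np.tanh(x @ W1) @ W2

X = rng.normal(size=(500, 4))
y0 = mlp(X, W1, W2)                 # unperturbed reference outputs

def sensitivity(sigma, trials=20):
    """Mean absolute output deviation under weight noise of scale sigma."""
    devs = []
    for _ in range(trials):
        d1 = rng.normal(scale=sigma, size=W1.shape)
        d2 = rng.normal(scale=sigma, size=W2.shape)
        devs.append(np.abs(mlp(X, W1 + d1, W2 + d2) - y0).mean())
    return float(np.mean(devs))

# For small perturbations the deviation grows roughly linearly with sigma.
print("sigma=0.01:", sensitivity(0.01))
print("sigma=0.02:", sensitivity(0.02))
```

This kind of estimate is what makes the downstream applications the blurb lists possible: an input or weight whose perturbation barely moves the output is a candidate for feature selection or network pruning.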
The purpose of this monograph is to give the broad aspects of nonlinear identification and control using neural networks. It consists of three parts: an introduction to the fundamental principles of neural networks; several methods for nonlinear identification using neural networks; and various techniques for nonlinear control using neural networks. A number of simulated and industrial examples are used throughout the monograph to demonstrate the operation of nonlinear identification and control techniques using neural networks. It should be emphasised that the methods and systems of nonlinear control have not progressed as rapidly as those for linear control; comparatively speaking, at the present time, they are still in the development stage. We believe that the fundamental theory, various design methods and techniques, and several applications of nonlinear identification and control using neural networks that are presented in this monograph will enable the reader to analyse and synthesise nonlinear control systems quantitatively.
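Nonlinear identification with a neural network, in the sense this blurb describes, can be sketched by fitting a one-hidden-layer network to an unknown nonlinear plant with plain gradient descent. Everything below (the plant, the network size, the learning rate) is an illustrative assumption, not an example from the monograph:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical nonlinear static plant to identify.
def plant(u):
    return np.sin(2 * u) + 0.5 * u

u = rng.uniform(-2, 2, size=(256, 1))   # input samples
y = plant(u)                            # measured plant outputs

# One-hidden-layer tanh network trained by full-batch gradient descent.
W1 = rng.normal(scale=1.0, size=(1, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, 1)); b2 = np.zeros(1)
lr = 0.05
for step in range(8000):
    h = np.tanh(u @ W1 + b1)            # hidden activations
    yhat = h @ W2 + b2                  # network prediction
    e = yhat - y                        # identification error
    # Backpropagate the mean-squared-error gradient.
    gW2 = h.T @ e / len(u); gb2 = e.mean(axis=0)
    dh = (e @ W2.T) * (1 - h**2)
    gW1 = u.T @ dh / len(u); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float((e**2).mean())
print("final identification MSE:", mse)
```

For dynamic plants the same scheme applies with delayed inputs and outputs as regressors; this static case just shows the identification loop at its simplest.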
Computational neuroscience is best defined by its focus on understanding nervous systems as computational devices rather than by a particular experimental technique. Accordingly, while the majority of the papers in this book describe analysis and modeling efforts, other papers describe the results of new biological experiments explicitly placed in the context of computational issues. The distribution of subjects in Computation and Neural Systems reflects the current state of the field. In addition to the scientific results presented here, numerous papers also describe the ongoing technical developments that are critical for the continued growth of computational neuroscience. Computation and Neural Systems includes papers presented at the First Annual Computation and Neural Systems meeting held in San Francisco, CA, July 26--29, 1992.
Neural Information Processing and VLSI provides a unified treatment of this important subject for use in classrooms, industry, and research laboratories, in order to develop advanced artificial and biologically-inspired neural networks using compact analog and digital VLSI parallel processing techniques. Neural Information Processing and VLSI systematically presents various neural network paradigms, computing architectures, and the associated electronic/optical implementations using efficient VLSI design methodologies. Conventional digital machines cannot perform computationally-intensive tasks with satisfactory performance in such areas as intelligent perception, including visual and auditory signal processing, recognition, understanding, and logical reasoning (where the human being and even a small living animal can do a superb job). Recent research advances in artificial and biological neural networks have established an important foundation for high-performance information processing with more efficient use of computing resources. The secret lies in the design optimization at various levels of computing and communication of intelligent machines. Each neural network system consists of massively parallel and distributed signal processors, with every processor performing very simple operations, thus consuming little power. Large computational capabilities of these systems, in the range of some hundred giga to several tera operations per second, are derived from collective parallel processing and efficient data routing, through well-structured interconnection networks. Deep-submicron very large-scale integration (VLSI) technologies can integrate tens of millions of transistors in a single silicon chip for complex signal processing and information manipulation. The book is suitable for those interested in efficient neurocomputing as well as those curious about neural network system applications.
It has been especially prepared for use as a text for advanced undergraduate and first-year graduate students, and is an excellent reference book for researchers and scientists working in the fields covered.