The technology of neural networks has attracted much attention in recent years. Their ability to learn nonlinear relationships is widely appreciated and is utilized in many different types of applications: modelling of dynamic systems, signal processing, and control system design being some of the most common. The theory of neural computing has matured considerably over the last decade and many problems of neural network design, training and evaluation have been resolved. This book provides a comprehensive introduction to the most popular class of neural network, the multilayer perceptron, and shows how it can be used for system identification and control. It aims to provide the reader with a sufficient theoretical background to understand the characteristics of different methods, to be aware of the pitfalls and to make proper decisions in all situations. The subjects treated include: System identification: multilayer perceptrons; how to conduct informative experiments; model structure selection; training methods; model validation; pruning algorithms. Control: direct inverse, internal model, feedforward, optimal and predictive control; feedback linearization and instantaneous-linearization-based controllers. Case studies: prediction of sunspot activity; modelling of a hydraulic actuator; control of a pneumatic servomechanism; water-level control in a conical tank. The book is very application-oriented and gives detailed and pragmatic recommendations that guide the user through the plethora of methods suggested in the literature. Furthermore, it attempts to introduce sound working procedures that can lead to efficient neural network solutions. This will make the book invaluable to the practitioner and as a textbook in courses with a significant hands-on component.
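The identification workflow the blurb outlines (run an informative experiment, choose regressors, train a multilayer perceptron, validate) can be sketched in a few lines. This is a minimal sketch, assuming a toy first-order nonlinear plant and a one-hidden-layer network trained by batch gradient descent; none of the specifics come from the book.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy plant to identify: y(t+1) = f(y(t), u(t))
def plant(y_prev, u_prev):
    return 0.6 * np.sin(y_prev) + 0.4 * u_prev

# "Informative experiment": excite the plant with a random input signal
u = rng.uniform(-1, 1, 400)
y = np.zeros(401)
for t in range(400):
    y[t + 1] = plant(y[t], u[t])

# Regressor matrix [y(t), u(t)] -> target y(t+1)
X = np.column_stack([y[:-1], u])
T = y[1:]

# One-hidden-layer perceptron with tanh units (sizes are assumptions)
H = 8
W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, H);      b2 = 0.0

lr = 0.1
for epoch in range(3000):
    hidden = np.tanh(X @ W1 + b1)           # forward pass
    pred = hidden @ W2 + b2
    err = pred - T
    # backpropagation of the mean-squared error
    gW2 = hidden.T @ err / len(T)
    gb2 = err.mean()
    dh = np.outer(err, W2) * (1 - hidden ** 2)
    gW1 = X.T @ dh / len(T)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Validation on the training set (a real workflow would use fresh data)
mse = np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - T) ** 2)
print(f"training MSE: {mse:.5f}")
```

In practice one would validate on a separate data set and prune superfluous weights, as the book's chapters on model validation and pruning discuss.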
Cellular Nonlinear/Neural Network (CNN) technology is both a revolutionary concept and an experimentally proven new computing paradigm. Analogic cellular computers based on CNNs are set to change the way analog signals are processed. This unique undergraduate-level textbook includes many examples and exercises, including CNN simulator and development software accessible via the Internet. It is an ideal introduction to CNNs and analogic cellular computing for students, researchers and engineers from a wide range of disciplines. Leon Chua, co-inventor of the CNN, and Tamás Roska are both highly respected pioneers in the field.
Biology undergraduates, medical students and life-science graduate students often have limited mathematical skills. Similarly, physics, math and engineering students have little patience for the detailed facts that make up much of biological knowledge. Teaching computational neuroscience as an integrated discipline requires that both groups be brought forward onto common ground. This book does this by making ancillary material available in an appendix and providing basic explanations without becoming bogged down in unnecessary details. The book will be suitable for undergraduates and beginning graduate students taking a computational neuroscience course, and also for anyone with an interest in the uses of the computer in modeling the nervous system.
The two-volume set LNCS 2686 and LNCS 2687 constitutes the refereed proceedings of the 7th International Work-Conference on Artificial and Natural Neural Networks, IWANN 2003, held in Maó, Menorca, Spain in June 2003. The 197 revised papers presented were carefully reviewed and selected for inclusion in the book and address the following topics: mathematical and computational methods in neural modelling, neurophysiological data analysis and modelling, structural and functional models of neurons, learning and other plasticity phenomena, complex systems dynamics, cognitive processes and artificial intelligence, methodologies for net design, bio-inspired systems and engineering, and applications in a broad variety of fields.
Artificial Intelligence is concerned with producing devices that help or replace human beings in their daily activities. Neural-symbolic learning systems play a central role in this task by combining, and trying to benefit from, the advantages of both the neural and symbolic paradigms of artificial intelligence. This book provides a comprehensive introduction to the field of neural-symbolic learning systems, and an invaluable overview of the latest research issues in this area. It is divided into three sections, covering the main topics of neural-symbolic integration - theoretical advances in knowledge representation and learning, knowledge extraction from trained neural networks, and inconsistency handling in neural-symbolic systems. Each section provides a balance of theory and practice, giving the results of applications using real-world problems in areas such as DNA sequence analysis, power systems fault diagnosis, and software requirements specifications. Neural-Symbolic Learning Systems will be invaluable reading for researchers and graduate students in Engineering, Computing Science, Artificial Intelligence, Machine Learning and Neurocomputing. It will also be of interest to Intelligent Systems practitioners and anyone interested in applications of hybrid artificial intelligence systems.
Turing's Connectionism provides a detailed and in-depth analysis of Turing's almost forgotten ideas on connectionist machines. In a little-known paper entitled "Intelligent Machinery", Turing investigated connectionist models as early as 1948. Unfortunately, his work was dismissed by his employer as a "schoolboy essay" and went unpublished until 1968, 14 years after his death. In this book, Christof Teuscher analyzes all aspects of Turing's "unorganized machines". Turing himself also proposed a sort of genetic algorithm to train the networks. This idea has been taken up by the author, and genetic algorithms are used to build and train Turing's unorganized machines. Teuscher's work starts from Turing's initial ideas, but importantly goes beyond them. Many new kinds of machines and new aspects are considered, e.g., hardware implementation, analysis of the complex dynamics of the networks, hypercomputation, and learning algorithms.
This volume looks at financial prediction from a broad range of perspectives. It covers:
- the economic arguments
- the practicalities of the markets
- how predictions are used
- how predictions are made
- how predictions are turned into something usable (asset allocations)
It combines a discussion of standard theory with state-of-the-art material on a wide range of information processing techniques as applied to cutting-edge financial problems. All the techniques are demonstrated with real examples using actual market data, and show that it is possible to extract information from very noisy, sparse data sets. Aimed primarily at researchers in financial prediction, time series analysis and information processing, this book will also be of interest to quantitative fund managers and other professionals involved in financial prediction.
This unique compendium presents a comprehensive and self-contained theory of material development under imperfect information and its applications. The book describes new approaches to the synthesis and selection of materials with desirable characteristics. Such approaches enable systematic and computationally effective analysis in order to predict the composition, structure and related properties of new materials. The volume will be a useful advanced textbook for graduate students. It is also suitable for academicians and practitioners who wish to have fundamental models in new material synthesis and selection.
It is generally understood that the present approaches to computing do not have the performance, flexibility, and reliability of biological information processing systems. Although there is a comprehensive body of knowledge regarding how information processing occurs in the brain and central nervous system, this has had little impact on mainstream computing so far. This book presents a broad spectrum of current research into biologically inspired computational systems and thus contributes towards developing new computational approaches based on neuroscience. The 39 revised full papers by leading researchers were carefully selected and reviewed for inclusion in this anthology. Besides an introductory overview by the volume editors, the book offers topical parts on modular organization and robustness, timing and synchronization, and learning and memory storage.
This book constitutes, together with its companion LNCS 2084, the refereed proceedings of the 6th International Work-Conference on Artificial and Natural Neural Networks, IWANN 2001, held in Granada, Spain in June 2001. The 200 revised papers presented were carefully reviewed and selected for inclusion in the proceedings. The papers are organized in sections on foundations of connectionism, biophysical models of neurons, structural and functional models of neurons, learning and other plasticity phenomena, complex systems dynamics, artificial intelligence and cognitive processes, methodology for nets design, nets simulation and implementation, bio-inspired systems and engineering, and other applications in a variety of fields.
The papers in this volume present theoretical insights and reports on successful applications of artificial neural networks and genetic algorithms. A dual affinity with biology is shown as several papers deal with cognition, neurocontrol, and biologically inspired brain models, and others describe successful applications of computational methods to biology and environmental science. Theoretical contributions cover a variety of topics including nonlinear approximation by feedforward networks, representation of spiking perceptrons by classical ones, recursive networks and associative memories, learning and generalization, population attractors, and the proposal and analysis of new genetic operators or measures. These theoretical studies are augmented by a wide selection of application-oriented papers on topics ranging from signal processing, control, pattern recognition and time series prediction to routing tasks. To keep track of the rapid development of the field of computational intelligence, the scope of the conference has been extended to hybrid methods and tools for which neural networks and evolutionary algorithms are combined with methods of soft computing, fuzzy logic, probabilistic computing, and symbolic artificial intelligence, to computer-intensive methods in control and data processing, and to data mining in meteorology and air pollution.
Control of Flexible-link Manipulators Using Neural Networks addresses the difficulties that arise in controlling the end-point of a manipulator that has a significant amount of structural flexibility in its links. The non-minimum phase characteristic, coupling effects, nonlinearities, parameter variations and unmodeled dynamics in such a manipulator all contribute to these difficulties. Control strategies that ignore these uncertainties and nonlinearities generally fail to provide satisfactory closed-loop performance. This monograph develops and experimentally evaluates several intelligent (neural network based) control techniques to address the problem of controlling the end-point of flexible-link manipulators in the presence of all the aforementioned difficulties. To highlight the main issues, a very flexible-link manipulator whose hub exhibits a considerable amount of friction is considered for the experimental work. Four different neural network schemes are proposed and implemented on the experimental test-bed. The neural networks are trained and employed as online controllers.
Folding networks, a generalisation of recurrent neural networks to tree-structured inputs, are investigated as a mechanism to learn regularities on classical symbolic data, for example. The architecture, the training mechanism, and several applications in different areas are explained. Afterwards a theoretical foundation, proving that the approach is appropriate as a learning mechanism in principle, is presented: their universal approximation ability is investigated, including several new results for standard recurrent neural networks such as explicit bounds on the required number of neurons and the super-Turing capability of sigmoidal recurrent networks. The information-theoretical learnability is examined, including several contributions to distribution-dependent learnability, an answer to an open question posed by Vidyasagar, and a generalisation of the recent luckiness framework to function classes. Finally, the complexity of training is considered, including new results on the loading problem for standard feedforward networks with an arbitrary multilayered architecture, a correlated number of neurons and training set size, a varying number of hidden neurons but fixed input dimension, or the sigmoidal activation function, respectively.
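The core idea of a folding network, one fixed set of weights applied recursively so that trees of any shape map to a fixed-size code, can be shown in a small sketch. The dimensions, weights, and tree encoding below are illustrative assumptions, not the book's construction.

```python
import numpy as np

rng = np.random.default_rng(3)
DIM = 4  # assumed size of the internal code

# Shared weights, reused at every tree node (untrained, for illustration)
W_label = rng.normal(0, 0.5, (DIM, 1))
W_left = rng.normal(0, 0.5, (DIM, DIM))
W_right = rng.normal(0, 0.5, (DIM, DIM))

EMPTY = np.zeros(DIM)  # code assigned to the empty tree

def encode(tree):
    """Recursively fold a binary tree (label, left, right) into one vector."""
    if tree is None:
        return EMPTY
    label, left, right = tree
    return np.tanh(W_label @ np.array([label])
                   + W_left @ encode(left)
                   + W_right @ encode(right))

# Two trees with the same labels but different structure
t1 = (1.0, (2.0, None, None), (3.0, None, None))
t2 = (1.0, (3.0, None, None), (2.0, None, None))  # children swapped
c1, c2 = encode(t1), encode(t2)
print(c1.shape)  # fixed-size code regardless of tree shape
```

Because W_left and W_right differ, the code is sensitive to structure, not just to the multiset of labels; training would adjust the shared weights so the codes support the task at hand.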
Neural networks are a new, interdisciplinary tool for information processing. Neurocomputing is being successfully applied to structural problems which are difficult or even impossible to analyse with standard computers (hard computing). The book is devoted to the foundations and applications of NNs in structural mechanics and the design of structures.
The Self-Organizing Map (SOM), with its variants, is the most popular artificial neural network algorithm in the unsupervised learning category. Many fields of science have adopted the SOM as a standard analytical tool: statistics, signal processing, control theory, financial analysis, experimental physics, chemistry and medicine. A new area is the organization of very large document collections. The SOM is also one of the most realistic models of biological brain functions. This new edition includes a survey of over 2000 contemporary studies to cover the newest results; the case examples are provided with detailed formulae, illustrations and tables; a new chapter on software tools for the SOM has been written, and other chapters have been extended or reorganized.
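The SOM's basic update rule (find the best-matching unit, then pull it and its map neighbours toward the input) fits in a short sketch. This is a minimal illustration with assumed map size, decay schedules and toy data, not Kohonen's reference implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

n_units = 20
weights = rng.uniform(-0.1, 0.1, (n_units, 2))  # codebook vectors
positions = np.arange(n_units)                  # coordinates on a 1-D map

data = rng.uniform(-1, 1, (2000, 2))            # toy 2-D input distribution

for t, x in enumerate(data):
    lr = 0.5 * (1 - t / len(data)) + 0.01       # decaying learning rate
    sigma = 5.0 * (1 - t / len(data)) + 0.5     # shrinking neighbourhood
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))   # best-matching unit
    h = np.exp(-((positions - bmu) ** 2) / (2 * sigma ** 2))  # neighbourhood
    weights += lr * h[:, None] * (x - weights)  # pull units toward the input

# Topology preservation: units adjacent on the map end up close in input space
gaps = np.linalg.norm(np.diff(weights, axis=0), axis=1)
print(f"mean gap between neighbouring units: {gaps.mean():.3f}")
```

The shrinking neighbourhood is what makes the map self-organize: early updates order the units globally, late updates fine-tune each unit locally.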
This book contains the proceedings of the conference ANNIMAB-1, held 13-16 May 2000 in Göteborg, Sweden. The conference was organized by the Society for Artificial Neural Networks in Medicine and Biology (ANNIMAB-S), which was established to promote research within a new and genuinely cross-disciplinary field. Forty-two contributions were accepted for presentation; in addition to these, 5 invited papers are also included. Research within medicine and biology has often been characterised by the application of statistical methods for evaluating domain-specific data. The growing interest in Artificial Neural Networks has not only introduced new methods for data analysis, but has also opened up the development of new models of biological and ecological systems. The ANNIMAB-1 conference focuses on some of the many uses of artificial neural networks with relevance for medicine and biology, specifically:
* Medical applications of artificial neural networks: for better diagnoses and outcome predictions from clinical and laboratory data, in the processing of ECG and EEG signals, in medical image analysis, etc. More than half of the contributions address such clinically oriented issues.
* Uses of ANNs in biology outside clinical medicine: for example, in models of ecology and evolution, for data analysis in molecular biology, and (of course) in models of animal and human nervous systems and their capabilities.
* Theoretical aspects: recent developments in learning algorithms, ANNs in relation to expert systems and to traditional statistical procedures, hybrid systems and integrative approaches.
This volume provides an overview of important work carried out by Professor Walter Freeman of the University of California at Berkeley, USA. Collecting together his published works over the last 35 years, it charts his groundbreaking research into perception and other cognitive operations in animals and humans and looks at how this can be applied to computer hardware to provide the foundations for novel - and greatly improved - machine intelligence. It provides a step-by-step description of the concepts and data needed by electrical engineers, computer scientists and cognitivists to understand and emulate pattern recognition in biological systems at a level of competence which has not yet been matched by any form of Artificial Intelligence. It offers a unique blend of theory and experiment and, historically, it also demonstrates the impact of computers on the design, execution, and interpretation of experiments in neurophysiology over the past five decades.
Independent Component Analysis (ICA) is a fast developing area of intense research interest. Following on from Self-Organising Neural Networks: Independent Component Analysis and Blind Signal Separation, this book reviews the significant developments of the past year. It covers topics such as the use of hidden Markov methods, the independence assumption, and topographic ICA, and includes tutorial chapters on Bayesian and variational approaches. It also provides the latest approaches to ICA problems, including an investigation into certain "hard problems" for the very first time. Comprising contributions from the most respected and innovative researchers in the field, this volume will be of interest to students and researchers in computer science and electrical engineering; research and development personnel in disciplines such as statistical modelling and data analysis; bio-informatic workers; and physicists and chemists requiring novel data analysis methods.
From the contents:
Neural networks - theory and applications: NNs (= neural networks) classifier on continuous data domains - quantum associative memory - a new class of neuron-like discrete filters for image processing - modular NNs for improving generalisation properties - presynaptic inhibition modelling for image processing applications - NN recognition system for a curvature primal sketch - NN-based nonlinear temporal-spatial noise rejection system - relaxation rate for improving the Hopfield network - Oja's NN and the influence of the learning gain on its dynamics
Genetic algorithms - theory and applications: transposition: a biologically inspired mechanism to use with GAs (= genetic algorithms) - GA for decision tree induction - optimising decision classifications using GAs - scheduling tasks with intertask communication onto multiprocessors by GAs - design of robust networks with GAs - effect of degenerate coding on GAs - multiple traffic signal control using a GA - evolving musical harmonisation - niched-penalty approach for constraint handling in GAs - GA with dynamic population size - GA with dynamic niche clustering for multimodal function optimisation
Soft computing and uncertainty: self-adaptation of evolutionary constructed decision trees by information spreading - evolutionary programming of near-optimal NNs
Neural networks have had considerable success in a variety of disciplines including engineering, control, and financial modelling. However, a major weakness is the lack of established procedures for testing mis-specified models and the statistical significance of the various parameters which have been estimated. This is particularly important in the majority of financial applications, where the data-generating processes are dominantly stochastic and only partially deterministic. Based on the latest, most significant developments in estimation theory, model selection and the theory of mis-specified models, this volume develops neural networks into an advanced financial econometrics tool for non-parametric modelling. It provides the theoretical framework required, and displays the efficient use of neural networks for modelling complex financial phenomena. Unlike most other books in this area, this one treats neural networks as statistical devices for non-linear, non-parametric regression analysis.
This book constitutes, together with its companion LNCS 1606, the refereed proceedings of the International Work-Conference on Artificial and Natural Neural Networks, IWANN'99, held in Alicante, Spain in June 1999.
This is the first book that attempts to bring together what is known about the fundamental mechanisms that underlie the development of the cortex in mammals. Ranging from the emergence of the forebrain from the neural plate to the functioning adult form, the authors draw on evidence from several species to provide a detailed description of processes at each stage. Where appropriate, evidence is extrapolated from non-mammalian species to generate hypotheses about mammalian development. In contrast to other texts on developmental biology, Mechanisms of Cortical Development integrates information on regulatory processes at the levels of molecules, cells and networks. The authors draw together an extensive literature on cellular development and structural morphology, biochemical and genetic events and hypotheses that have been subject to mathematical modelling. Important methodologies, such as transgenics and formal modelling, are explained for the non-specialist. Major future challenges are clearly identified. This is a unique contribution to the literature, combining the fundamentals of experimental developmental neurobiology with accessible neural modelling. It will be essential reading for neuroscientists in general as well as those with a particular interest in development.
Fuzzy sets were introduced by Zadeh (1965) as a means of representing and manipulating data that is not precise, but rather fuzzy. Fuzzy logic provides an inference morphology that enables approximate human reasoning capabilities to be applied to knowledge-based systems. The theory of fuzzy logic provides a mathematical strength to capture the uncertainties associated with human cognitive processes, such as thinking and reasoning. The conventional approaches to knowledge representation lack the means for representing the meaning of fuzzy concepts. As a consequence, approaches based on first-order logic and classical probability theory do not provide an appropriate conceptual framework for dealing with the representation of commonsense knowledge, since such knowledge is by its nature both lexically imprecise and noncategorical. The development of fuzzy logic was motivated in large measure by the need for a conceptual framework which can address the issues of uncertainty and lexical imprecision. Some of the essential characteristics of fuzzy logic relate to the following [242]:
* In fuzzy logic, exact reasoning is viewed as a limiting case of approximate reasoning.
* In fuzzy logic, everything is a matter of degree.
* In fuzzy logic, knowledge is interpreted as a collection of elastic or, equivalently, fuzzy constraints on a collection of variables.
* Inference is viewed as a process of propagation of elastic constraints.
* Any logical system can be fuzzified.
There are two main characteristics of fuzzy systems that give them better performance for specific applications.
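The points that "everything is a matter of degree" and that knowledge acts as elastic constraints can be made concrete with a tiny example. The triangular membership functions, the temperature sets and the two rules below are illustrative assumptions, not taken from the text.

```python
# A hypothetical two-rule fuzzy controller for a fan, showing graded
# membership and weighted-average (constraint-propagating) defuzzification.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fan_speed(temp):
    # Degrees of membership in the fuzzy sets "warm" and "hot"
    warm = tri(temp, 15, 25, 35)
    hot = tri(temp, 25, 40, 55)
    # Rules: IF warm THEN speed = 40; IF hot THEN speed = 90.
    # Each rule is an elastic constraint; combine by weighted average.
    if warm + hot == 0:
        return 0.0
    return (warm * 40 + hot * 90) / (warm + hot)

print(fan_speed(25))  # fully "warm", not yet "hot" -> 40.0
print(fan_speed(40))  # fully "hot" -> 90.0
print(fan_speed(30))  # partly both -> a blended speed between 40 and 90
```

At 30 degrees the input belongs to "warm" with degree 0.5 and to "hot" with degree 1/3, so the output is a blend of the two rule conclusions rather than a hard switch.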
The 1997 Les Houches workshop on "Dynamical Networks in Physics and Biology" was the third in a series of meetings "At the Frontier between Physics and Biology." Our objective with these workshops is to create a truly interdisciplinary forum for researchers working on outstanding problems in biology, but using different approaches (physical, chemical or biological). Generally speaking, the biologists are trained in the particular and motivated by the specifics, while, in contrast, the physicists deal with generic and "universal" models. All agree about the necessity of developing "robust" models. The specific aim of the workshop was to bridge the gap between physics and biology in the particular field of interconnected dynamical networks. The proper functioning of a living organism of any complexity requires the coordinated activity of a great number of "units." Units, or, in physical terms, degrees of freedom that couple to one another, typically form networks. The physical or biological properties of interconnected networks may drastically differ from those of the individual units: the whole is not simply an assembly of its parts, as can be demonstrated by the following examples. Above a certain (critical) concentration the metallic islands, randomly distributed in an insulating matrix, form an interconnected network. At this point the macroscopic conductivity of the system becomes finite and the amorphous metal is capable of carrying current. The value of the macroscopic conductivity typically is very different from the conductivity of the individual metallic islands.
This volume presents the theory and applications of self-organising neural network models which perform the Independent Component Analysis (ICA) transformation and Blind Source Separation (BSS). It is largely self-contained, covering the fundamental concepts of information theory, higher order statistics and information geometry. Neural models for instantaneous and temporal BSS and their adaptation algorithms are presented and studied in detail. There is also in-depth coverage of the following application areas: noise reduction, speech enhancement in noisy environments, image enhancement, feature extraction for classification, data analysis and visualisation, data mining and biomedical data analysis. Self-Organising Neural Networks will be of interest to postgraduate students and researchers in Connectionist AI, Signal Processing and Neural Networks, research and development workers, and technology development engineers and research engineers.
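The BSS task these two ICA volumes address can be illustrated in miniature: mix two independent non-Gaussian sources, whiten the mixtures, then find the rotation that maximizes non-Gaussianity. This is a toy sketch using a kurtosis criterion and a grid search, assumed for illustration; it is not one of the books' neural adaptation algorithms.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 5000
s = np.vstack([rng.uniform(-1, 1, n),               # source 1: uniform
               np.sign(rng.standard_normal(n))])    # source 2: binary
A = np.array([[0.8, 0.3], [0.2, 0.7]])              # "unknown" mixing matrix
x = A @ s                                           # observed mixtures

# Whitening: decorrelate the mixtures and normalize their variance
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
z = E @ np.diag(d ** -0.5) @ E.T @ x

def kurt(y):
    """Fourth-order cumulant: zero for Gaussians, nonzero for these sources."""
    return np.mean(y ** 4) - 3 * np.mean(y ** 2) ** 2

# After whitening only a rotation remains; pick the angle whose outputs
# are jointly most non-Gaussian.
angles = np.linspace(0, np.pi / 2, 180)
best = max(angles, key=lambda a: abs(kurt(np.cos(a) * z[0] + np.sin(a) * z[1]))
                               + abs(kurt(-np.sin(a) * z[0] + np.cos(a) * z[1])))
R = np.array([[np.cos(best), np.sin(best)], [-np.sin(best), np.cos(best)]])
y = R @ z

# Each recovered component should correlate strongly with one true source
corr = np.abs(np.corrcoef(np.vstack([y, s]))[:2, 2:])
print(np.round(corr, 2))
```

As usual in BSS, the sources are recovered only up to permutation, sign and scale, which is why the check uses absolute correlations.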