Fuzzy sets were introduced by Zadeh (1965) as a means of representing and manipulating data that was not precise, but rather fuzzy. Fuzzy logic provides an inference morphology that enables approximate human reasoning capabilities to be applied to knowledge-based systems. The theory of fuzzy logic provides a mathematical strength to capture the uncertainties associated with human cognitive processes, such as thinking and reasoning. The conventional approaches to knowledge representation lack the means for representing the meaning of fuzzy concepts. As a consequence, the approaches based on first-order logic and classical probability theory do not provide an appropriate conceptual framework for dealing with the representation of commonsense knowledge, since such knowledge is by its nature both lexically imprecise and noncategorical. The development of fuzzy logic was motivated in large measure by the need for a conceptual framework which can address the issues of uncertainty and lexical imprecision. Some of the essential characteristics of fuzzy logic relate to the following [242].

* In fuzzy logic, exact reasoning is viewed as a limiting case of approximate reasoning.
* In fuzzy logic, everything is a matter of degree.
* In fuzzy logic, knowledge is interpreted as a collection of elastic or, equivalently, fuzzy constraints on a collection of variables.
* Inference is viewed as a process of propagation of elastic constraints.
* Any logical system can be fuzzified.

There are two main characteristics of fuzzy systems that give them better performance for specific applications.
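The idea that "everything is a matter of degree" can be made concrete with a minimal sketch (not from the text, and using an illustrative triangular membership function): each element is assigned a degree of truth in [0, 1] rather than a crisp true/false, and the common min/max operators play the roles of fuzzy AND/OR.

```python
def triangular(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at 1 when x == b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Degree to which a temperature of 22 belongs to the fuzzy set "warm"
# (here "warm" is assumed to peak at 25 over the range [15, 35]):
warm = triangular(22, 15, 25, 35)   # 0.7

# Fuzzy logic commonly uses min for AND and max for OR:
mild = triangular(22, 10, 20, 30)   # 0.8
warm_and_mild = min(warm, mild)     # 0.7
warm_or_mild = max(warm, mild)      # 0.8
```

The set boundaries and the min/max connectives here are conventional choices, not something specified by the blurb; they simply illustrate how exact (0/1) reasoning arises as a limiting case of graded membership.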
This volume, written by leading researchers, presents methods of combining neural nets to improve their performance. The techniques include ensemble-based approaches, where a variety of methods are used to create a set of different nets trained on the same task, and modular approaches, where a task is decomposed into simpler problems. The techniques are also accompanied by an evaluation of their relative effectiveness and their application to a variety of problems.
The 1997 Les Houches workshop on "Dynamical Network in Physics and Biology" was the third in a series of meetings "At the Frontier between Physics and Biology." Our objective with these workshops is to create a truly interdisciplinary forum for researchers working on outstanding problems in biology, but using different approaches (physical, chemical or biological). Generally speaking, the biologists are trained in the particular and motivated by the specifics, while, in contrast, the physicists deal with generic and "universal" models. All agree about the necessity of developing "robust" models. The specific aim of the workshop was to bridge the gap between physics and biology in the particular field of interconnected dynamical networks. The proper functioning of a living organism of any complexity requires the coordinated activity of a great number of "units." Units, or, in physical terms, degrees of freedom that couple to one another, typically form networks. The physical or biological properties of interconnected networks may drastically differ from those of the individual units: the whole is not simply an assembly of its parts, as can be demonstrated by the following examples. Above a certain (critical) concentration the metallic islands, randomly distributed in an insulating matrix, form an interconnected network. At this point the macroscopic conductivity of the system becomes finite and the amorphous metal is capable of carrying current. The value of the macroscopic conductivity typically is very different from the conductivity of the individual metallic islands.
This volume presents the theory and applications of self-organising neural network models which perform the Independent Component Analysis (ICA) transformation and Blind Source Separation (BSS). It is largely self-contained, covering the fundamental concepts of information theory, higher order statistics and information geometry. Neural models for instantaneous and temporal BSS and their adaptation algorithms are presented and studied in detail. There is also in-depth coverage of the following application areas: noise reduction, speech enhancement in noisy environments, image enhancement, feature extraction for classification, data analysis and visualisation, data mining and biomedical data analysis. Self-Organising Neural Networks will be of interest to postgraduate students and researchers in Connectionist AI, Signal Processing and Neural Networks, as well as to research and development workers and technology development engineers.
This is the third in a series of conferences devoted primarily to the theory and applications of artificial neural networks and genetic algorithms. The first such event was held in Innsbruck, Austria, in April 1993, the second in Ales, France, in April 1995. We are pleased to host the 1997 event in the mediaeval city of Norwich, England, and to carry on the fine tradition set by its predecessors of providing a relaxed and stimulating environment for both established and emerging researchers working in these and other, related fields. This series of conferences is unique in recognising the relation between the two main themes of artificial neural networks and genetic algorithms, each having its origin in a natural process fundamental to life on earth, and each now well established as a paradigm fundamental to continuing technological development through the solution of complex, industrial, commercial and financial problems. This is well illustrated in this volume by the numerous applications of both paradigms to new and challenging problems. The third key theme of the series, therefore, is the integration of both technologies, either through the use of the genetic algorithm to construct the most effective network architecture for the problem in hand, or, more recently, the use of neural networks as approximate fitness functions for a genetic algorithm searching for good solutions in an 'incomplete' solution space, i.e. one for which the fitness is not easily established for every possible solution instance.
Speech Processing, Recognition and Artificial Neural Networks contains papers from leading researchers and selected students, discussing the experiments, theories and perspectives of acoustic phonetics as well as the latest techniques in the field of speech science and technology. Topics covered in this book include: Fundamentals of Speech Analysis and Perception; Speech Processing; Stochastic Models for Speech; Auditory and Neural Network Models for Speech; Task-Oriented Applications of Automatic Speech Recognition and Synthesis.
This volume presents a neural network architecture for the prediction of conditional probability densities, which is vital when carrying out universal approximation on variables which are either strongly skewed or multimodal. Two alternative approaches are discussed: the GM network, in which all parameters are adapted in the training scheme, and the GM-RVFL model, which draws on the random functional link net approach. Points of particular interest are:

* it examines the modifications to standard approaches needed for conditional probability prediction;
* it provides the first real-world test results for recent theoretical findings about the relationship between the generalisation performance of committees and the over-flexibility of their members.

This volume will be of interest to all researchers, practitioners and postgraduate/advanced undergraduate students working on applications of neural networks, especially those related to finance and pattern recognition.
This volume will contain papers from the 5th Neural Computation and Psychology Workshop, University of Birmingham, UK, 8-10 September 1998. The theme of the workshop is Connectionist Models in Cognitive Neuroscience, a topic which covers many important issues ranging from modelling physiological structure, to cognitive function and its disorders in neuropsychological and psychiatric cases. The workshop is intended to bring together researchers from such diverse disciplines as artificial intelligence, applied mathematics, cognitive science, computer science, neurobiology, philosophy and psychology, to discuss their work on the connectionist modelling of psychology. The papers will provide a state of the art summary of ongoing research in this exciting and fast-moving field. As such this volume will provide a valuable contribution to the Perspectives in Neural Computing series.
In this monograph, new structures of neural networks in multidimensional domains are introduced. These architectures are a generalization of the Multi-layer Perceptron (MLP) in Complex, Vectorial and Hypercomplex algebra. The approximation capabilities of these networks and their learning algorithms are discussed in a multidimensional context. The work includes the theoretical basis to address the properties of such structures and the advantages introduced in system modelling, function approximation and control. Some applications, referring to attractive themes in system engineering and a MATLAB software tool, are also reported. The appropriate background for this text is a knowledge of neural networks fundamentals. The manuscript is intended as a research report, but a great effort has been performed to make the subject comprehensible to graduate students in computer engineering, control engineering, computer sciences and related disciplines.
The journey towards the autonomous enterprise has begun; there are already companies operating in a highly automated way. Every corporate decision-maker will need to understand the opportunities and risks that the autonomous enterprise presents, to learn how best to navigate the shifting competitive landscape on their journey of change. This book is your guide to this innovation, presenting the concepts in real world contexts by covering the art of the possible today and providing glimpses into the future of business.
As we move around in our environment, and interact with it, many of the most important problems we face involve the processing of spatial information. We have to be able to navigate by perceiving and remembering the locations and orientations of the objects around us relative to ourselves; we have to sense and act upon these objects; and we need to move through space to position ourselves in favourable locations or to avoid dangerous ones. While this appears so simple that we don't even think about it, the difficulty of solving these problems has been shown in the repeated failure of artificial systems to perform these kinds of tasks efficiently. In contrast, humans and other animals routinely overcome these problems every single day. This book examines some of the neural substrates and mechanisms that support these remarkable abilities. The hippocampus and the parietal cortex have been implicated in various core spatial behaviours, such as the ability to localise an object and navigate to it. Damage to these areas in humans and animals leads to impairment of these spatial functions. This collection of papers, written by internationally recognized experts in the field, reviews the evidence that each area is involved in spatial cognition, examines the mechanisms underlying the generation of spatial behaviours, and considers the relative roles of the parietal and hippocampal areas, including how each interacts with the other. The papers integrate a wide range of theoretical and experimental approaches, and touch on broader issues relating to memory and imagery. As such, this book represents the state of the art of current research into the neural basis of spatial cognition. It should be of interest to anyone - researchers or graduate students - working in the areas of cognitive neuroscience, neuroanatomy, neuropsychology, and cognition generally.
This book comprises the articles of the 6th Econometric Workshop in Karlsruhe, Germany. In the first part approaches from traditional econometrics and innovative methods from machine learning such as neural nets are applied to financial issues. Neural Networks are successfully applied to different areas such as debtor analysis, forecasting and corporate finance. In the second part various aspects from Value-at-Risk are discussed. The proceedings describe the legal framework, review the basics and discuss new approaches such as shortfall measures and credit risk.
Concepts for Neural Networks - A Survey provides a wide-ranging survey of concepts relating to the study of neural networks. It includes chapters explaining the basics of both artificial neural networks and the mathematics of neural networks, as well as chapters covering the more philosophical background to the topic and consciousness. There is also significant emphasis on the practical use of the techniques described in the area of robotics. Containing contributions from some of the world's leading specialists in their fields (including Dr. Ton Coolen and Professor Igor Aleksander), this volume will provide the reader with a good, general introduction to the basic concepts needed to understand and use neural network technology.
Providing an in-depth treatment of neural network models, this volume explains and proves the main results in a clear and accessible way. It presents the essential principles of nonlinear dynamics as derived from neurobiology, and investigates the stability, convergence behaviour and capacity of networks. Also included are sections on stochastic networks and simulated annealing, presented using Markov processes rather than statistical physics, and a chapter on backpropagation. Each chapter ends with a suggested project designed to help the reader develop an integrated knowledge of the theory, placing it within a practical application domain. Neural Network Models: Theory and Projects concentrates on the essential parameters and results that will enable the reader to design hardware or software implementations of neural networks and to assess critically existing commercial products.
This book constitutes the refereed proceedings of the International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition, EMMCVPR'97, held in Venice, Italy, in May 1997.
About This Book: This book is about training methods - in particular, fast second-order training methods - for multi-layer perceptrons (MLPs). MLPs (also known as feed-forward neural networks) are the most widely used class of neural network. Over the past decade MLPs have achieved increasing popularity among scientists, engineers and other professionals as tools for tackling a wide variety of information processing tasks. In common with all neural networks, MLPs are trained (rather than programmed) to carry out the chosen information processing function. Unfortunately, the 'traditional' method for training MLPs - the well-known backpropagation method - is notoriously slow and unreliable when applied to many practical tasks. The development of fast and reliable training algorithms for MLPs is one of the most important areas of research within the entire field of neural computing. The main purpose of this book is to bring to a wider audience a range of alternative methods for training MLPs, methods which have proved orders of magnitude faster than backpropagation when applied to many training tasks. The book also addresses the well-known 'local minima' problem, and explains ways in which fast training methods can be combined with strategies for avoiding (or escaping from) local minima. All the methods described in this book have a strong theoretical foundation, drawing on such diverse mathematical fields as classical optimisation theory, homotopy theory and stochastic approximation theory.
This volume collects together refereed versions of twenty-five papers presented at the 4th Neural Computation and Psychology Workshop, held at University College London in April 1997. The "NCPW" workshop series is now well established as a lively forum which brings together researchers from such diverse disciplines as artificial intelligence, mathematics, cognitive science, computer science, neurobiology, philosophy and psychology to discuss their work on connectionist modelling in psychology. The general theme of this fourth workshop in the series was "Connectionist Representations," a topic which not only attracted participants from all these fields, but from all over the world as well. From the point of view of the conference organisers, focusing on representational issues had the advantage that it immediately involved researchers from all branches of neural computation. Being so central both to psychology and to connectionist modelling, it is one area about which everyone in the field has their own strong views, and the diversity and quality of the presentations and, just as importantly, the discussion which followed them, certainly attested to this.
This book constitutes the refereed proceedings of the 6th International Conference on Evolutionary Programming, EP 97, held in Indianapolis, IN, USA, in April 1997.
A fundamental objective of Artificial Intelligence (AI) is the creation of intelligent computer programs. In more modest terms AI is simply concerned with expanding the repertoire of computer applications into new domains and to new levels of efficiency. The motivation for this effort comes from many sources. At a practical level there is always a demand for achieving things in more efficient ways. Equally, there is the technical challenge of building programs that allow a machine to do something a machine has never done before. Both of these desires are contained within AI and both provide the inspirational force behind its development. In terms of satisfying both of these desires there can be no better example than machine learning. Machines that can learn have an in-built efficiency. The same software can be applied in many applications and in many circumstances. The machine can adapt its behaviour so as to meet the demands of new, or changing, environments without the need for costly re-programming. In addition, a machine that can learn can be applied in new domains with the genuine potential for innovation. In this sense a machine that can learn can be applied in areas where little is known about possible causal relationships, and even in circumstances where causal relationships are judged not to exist. This last aspect is of major significance when considering machine learning as applied to financial forecasting.
In almost all areas of science and engineering, the use of computers and microcomputers has, in recent years, transformed entire subject areas. What was not even considered possible a decade or two ago is now not only possible but is also part of everyday practice. As a result, a new approach usually needs to be taken in order to get the best out of a situation. What is now required is a computer's-eye view of the world. However, all is not rosy in this new world. Humans tend to think in two or three dimensions at most, whereas computers can, without complaint, work in n dimensions, where n, in practice, gets bigger and bigger each year. As a result of this, more complex problem solutions are being attempted, whether or not the problems themselves are inherently complex. If information is available, it might as well be used, but what can be done with it? Straightforward, traditional computational solutions to this new problem of complexity can, and usually do, produce very unsatisfactory, unreliable and even unworkable results. Recently, however, artificial neural networks, which have been found to be very versatile and powerful when dealing with difficulties such as nonlinearities, multivariate systems and high data content, have shown their strengths in dealing with complex problems. This volume brings together a collection of top researchers from around the world in the field of artificial neural networks.
This publication deals with the application of advanced digital signal processing techniques and neural networks to various telecommunication problems. The editor presents the latest research results in areas such as arrays, mobile channels, acoustic echo cancellation, speech coding and adaptive filtering in varying environments.
Evolutionary Learning Algorithms for Neural Adaptive Control is an advanced textbook, which investigates how neural networks and genetic algorithms can be applied to difficult adaptive control problems which conventional methods are either unable to solve, or for which they cannot provide satisfactory results. It focuses on the principles involved, rather than on the modelling of the applications themselves, and therefore provides the reader with a good introduction to the fundamental issues involved.
This book includes a selection of twelve carefully revised papers chosen from the papers accepted for presentation at the 4th IEEE/Nagoya-University World Wisepersons Workshop held in Nagoya in November 1995.
Artificial "neural networks" are widely used as flexible models for classification and regression applications, but questions remain about how the power of these models can be safely exploited when training data is limited. This book demonstrates how Bayesian methods allow complex neural network models to be used without fear of the "overfitting" that can occur with traditional training methods. Insight into the nature of these complex Bayesian models is provided by a theoretical investigation of the priors over functions that underlie them. A practical implementation of Bayesian neural network learning using Markov chain Monte Carlo methods is also described, and software for it is freely available over the Internet. Presupposing only basic knowledge of probability and statistics, this book should be of interest to researchers in statistics, engineering, and artificial intelligence.