Since the appearance of Vol. 1 of Models of Neural Networks in 1991, the theory of neural nets has focused on two paradigms: information coding through coherent firing of the neurons and functional feedback. Information coding through coherent neuronal firing exploits time as a cardinal degree of freedom. This capacity of a neural network rests on the fact that the neuronal action potential is a short, say 1 ms, spike, localized in space and time. Spatial as well as temporal correlations of activity may represent different states of a network. In particular, temporal correlations of activity may express that neurons process the same "object" of, for example, a visual scene by spiking at the very same time. The traditional description of a neural network through a firing rate, the famous S-shaped curve, presupposes a wide time window of, say, at least 100 ms. It thus fails to exploit the capacity to "bind" sets of coherently firing neurons for the purpose of both scene segmentation and figure-ground segregation. Feedback is a dominant feature of the structural organization of the brain. Recurrent neural networks have been studied extensively in the physics literature, starting with the groundbreaking work of John Hopfield (1982).
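The Hopfield network cited above is simple enough to sketch directly. The following minimal example (my own illustration, not code from the book) stores binary patterns with a Hebbian rule and recalls a stored pattern from a corrupted cue by asynchronous sign updates; all names and parameter values are assumptions chosen for the demo.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian storage: sum of outer products of +/-1 patterns, zero diagonal."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / n

def recall(W, state, steps=5):
    """Asynchronous updates: each neuron takes the sign of its local field."""
    state = state.copy()
    for _ in range(steps):
        for i in np.random.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Store one pattern and recover it from a cue with one flipped bit.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = train_hopfield(pattern[None, :])
noisy = pattern.copy()
noisy[0] *= -1
print(recall(W, noisy))  # should reproduce the stored pattern
```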
Neural networks have a learning capability, but analyzing a trained network is difficult. Fuzzy rules, on the other hand, are difficult to extract, but once they have been extracted the resulting fuzzy system is relatively easy to analyze. This book addresses both problems by developing new learning paradigms and architectures for neural networks and fuzzy systems. The book consists of two parts: Pattern Classification and Function Approximation. In the first part, based on the synthesis principle of the neural-network classifier, a new learning paradigm is discussed, and its classification performance and training time on several real-world data sets are compared with those of the widely used back-propagation algorithm. Fuzzy classifiers of different architectures, based on fuzzy rules defined over hyperbox, polyhedral, or ellipsoidal regions, are introduced, and a unified approach for training them is discussed. The performance of the newly developed fuzzy classifiers and of conventional classifiers such as nearest-neighbor classifiers and support vector machines is evaluated on several real-world data sets, and their advantages and disadvantages are clarified. In the second part, function approximation is discussed, extending the discussions of the first part, and the performance of the function approximators is compared. This book is aimed primarily at researchers and practitioners in the field of artificial intelligence and neural networks.
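To give a flavour of the hyperbox-based fuzzy classifiers mentioned in the first part, here is a minimal sketch (an illustrative stand-in, not the book's actual membership functions or training procedure): each class is represented by a hyperbox, membership is 1 inside the box and decays with distance outside it, and a sample is assigned to the class with the highest membership. The box coordinates, the decay parameter gamma, and the class names are all invented for the example.

```python
import numpy as np

def hyperbox_membership(x, vmin, vmax, gamma=4.0):
    """Membership is 1 inside the box [vmin, vmax] and decays exponentially
    with the distance outside it (illustrative form only)."""
    below = np.maximum(vmin - x, 0.0)
    above = np.maximum(x - vmax, 0.0)
    dist = np.linalg.norm(below + above)
    return float(np.exp(-gamma * dist))

# One hyperbox per class; classify by the highest membership degree.
boxes = {
    "class_a": (np.array([0.0, 0.0]), np.array([0.4, 0.5])),
    "class_b": (np.array([0.6, 0.5]), np.array([1.0, 1.0])),
}
x = np.array([0.55, 0.6])
print(max(boxes, key=lambda c: hyperbox_membership(x, *boxes[c])))  # "class_b"
```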
Human Face Recognition Using Third-Order Synthetic Neural Networks explores the viability of applying high-order synthetic neural network technology to transformation-invariant recognition of complex visual patterns. High-order networks require little training data (hence, short training times) and have been used to perform transformation-invariant recognition of relatively simple visual patterns, achieving very high recognition rates. These successful results provided the inspiration to address more practical problems whose patterns are grayscale, rather than binary (e.g., alphanumeric characters, aircraft silhouettes), and more complex in nature than purely edge-extracted images: human face recognition is such a problem. Human Face Recognition Using Third-Order Synthetic Neural Networks serves as an excellent reference for researchers and professionals working on applying neural network technology to the recognition of complex visual patterns.
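A third-order unit of the kind the book builds on forms its net input from products of input triples rather than single inputs; invariance is then obtained by sharing weights across geometrically equivalent triples. The sketch below is a bare-bones illustration of a single third-order unit with random, unshared weights (the weight-sharing scheme and training method are the book's subject and are not reproduced here).

```python
import numpy as np
from itertools import combinations

def third_order_unit(x, weights):
    """Weighted sum over products of input triples, passed through a sigmoid.
    In practice, invariance comes from sharing weights across equivalent triples."""
    total = 0.0
    for (i, j, k) in combinations(range(len(x)), 3):
        total += weights[(i, j, k)] * x[i] * x[j] * x[k]
    return 1.0 / (1.0 + np.exp(-total))

x = np.array([1.0, 0.0, 1.0, 1.0, 0.0])          # a tiny binary "image"
rng = np.random.default_rng(0)
weights = {t: rng.normal() for t in combinations(range(len(x)), 3)}
print(third_order_unit(x, weights))
```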
Computer vision-based crack-like object detection has many useful applications, such as pavement surface inspection, underground pipeline inspection, bridge crack monitoring, and railway track assessment. In most contexts, however, cracks appear as thin, irregular, long and narrow objects, often buried in complex, highly diverse textured backgrounds, which makes crack detection very challenging. Over the past few years, deep learning has achieved great success and has been used to solve a variety of object detection problems; using deep learning for accurate crack localization, however, is non-trivial. This book discusses the crack-like object detection problem in a comprehensive way. It starts with traditional image processing approaches to the problem and then introduces deep learning-based methods. The book provides a comprehensive review of object detection problems and focuses on the most challenging one, crack-like object detection, to dig deeper into deep learning methods. It includes examples of real-world problems that are easy to understand and can serve as a good tutorial introduction to computer vision and machine learning.
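As a concrete example of the traditional image processing approaches the book starts from, a classical baseline treats cracks as thin dark structures and highlights them with a morphological black top-hat before thresholding. The sketch below is an assumed, generic baseline rather than a method from the book; the window size and threshold factor are arbitrary.

```python
import numpy as np
from scipy import ndimage

def crack_mask_tophat(gray, size=7, k=3.0):
    """Black top-hat (grey closing minus the image) responds to thin dark
    structures such as cracks; threshold the response to get a binary mask."""
    closed = ndimage.grey_closing(gray, size=(size, size))
    tophat = closed - gray
    return tophat > (tophat.mean() + k * tophat.std())

# Synthetic textured background with a dark diagonal "crack".
rng = np.random.default_rng(0)
img = rng.normal(0.6, 0.05, (64, 64))
for t in range(64):
    img[t, t] -= 0.3
print(crack_mask_tophat(img).sum(), "pixels flagged as crack")
```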
"Takagi-Sugeno Fuzzy Systems Non-fragile H-infinity Filtering"
This book investigates the problem of non-fragile H-infinity filter design for Takagi-Sugeno (T-S) fuzzy systems. Given a T-S fuzzy system, the objective is to design an H-infinity filter with gain variations such that the filtering error system guarantees a prescribed H-infinity performance level. Furthermore, the book demonstrates that the solution of the non-fragile H-infinity filter design problem can be obtained by solving a set of linear matrix inequalities (LMIs).
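For orientation, the kind of condition such a design reduces to can be illustrated with the standard bounded real lemma for a linear error system (notation mine; the book's non-fragile, fuzzy-basis-dependent LMIs are considerably more involved): the H-infinity gain from the disturbance w to the filtering error e is below gamma whenever a positive definite P satisfies the matrix inequality below.

```latex
% Error system: \dot{x} = A x + B w, \qquad e = C x + D w.
% If there exists P = P^{\top} \succ 0 such that
\begin{bmatrix}
A^{\top}P + PA & PB        & C^{\top} \\
B^{\top}P      & -\gamma I & D^{\top} \\
C              & D         & -\gamma I
\end{bmatrix} \prec 0,
% then the system is stable and \lVert e \rVert_2 < \gamma \lVert w \rVert_2.
```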
This book and software package provide a complement to the traditional data analysis tools already widely available. It presents an introduction to the analysis of data using neural networks. Neural network functions discussed include multilayer feed-forward networks using error backpropagation, genetic algorithm-neural network hybrids, generalized regression neural networks, learning vector quantization networks, and self-organizing feature maps. In an easy-to-use, Windows-based environment it offers a wide range of data analytic tools that are not usually found together, including genetic algorithms and probabilistic networks, as well as a number of related supporting techniques, notably fractal dimension analysis, coherence analysis, and mutual information analysis. The text presents a number of worked examples and case studies using Simulnet, the software package which comes with the book. Readers are assumed to have a basic understanding of computers and elementary mathematics. With this background, readers will quickly find themselves conducting sophisticated hands-on analyses of data sets.
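Of the network types listed, the generalized regression neural network is particularly compact: it is essentially a kernel-weighted average of the training targets. The sketch below is my own minimal example, not Simulnet code; the smoothing parameter sigma and the toy data are assumptions.

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma=0.5):
    """GRNN prediction: each training sample casts a Gaussian-weighted vote
    and the output is the normalized weighted average of the targets."""
    d2 = np.sum((x_train - x_query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return float(np.dot(w, y_train) / (w.sum() + 1e-12))

# Recover a noisy sine curve from a handful of samples.
rng = np.random.default_rng(1)
X = rng.uniform(0, 2 * np.pi, (40, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 40)
print(grnn_predict(X, y, np.array([np.pi / 2])))  # close to 1.0
```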
Papers comprising this volume were presented at the first IEEE Conference on [title], held in Denver, CO, in November 1987. As the limits of the digital computer become apparent, interest in neural networks has intensified. Ninety contributions discuss what neural networks can do, addressing topics that in
Micromechanical manufacturing based on microequipment creates new possibilities in goods production. If microequipment sizes are comparable to the sizes of the microdevices to be produced, it is possible to decrease the cost of production drastically. The main components of the production cost - material, energy, space consumption, equipment, and maintenance - decrease with the scaling down of equipment sizes. To obtain really inexpensive production, labor costs must be reduced to almost zero. For this purpose, fully automated microfactories will be developed. To create fully automated microfactories, we propose using artificial neural networks having different structures. The simplest perceptron-like neural network can be used at the lowest levels of microfactory control systems. Adaptive Critic Design, based on neural network models of the microfactory objects, can be used for manufacturing process optimization, while associative-projective neural networks and networks like ART could be used for the highest levels of control systems. We have examined the performance of different neural networks in traditional image recognition tasks and in problems that appear in micromechanical manufacturing. We and our colleagues also have developed an approach to microequipment creation in the form of sequential generations. Each subsequent generation must be of a smaller size than the previous ones and must be made by previous generations. Prototypes of first-generation microequipment have been developed and assessed.
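The "simplest perceptron-like neural network" proposed for the lowest control levels can be illustrated with the classic perceptron learning rule; the toy feature data below is invented and is not one of the authors' micromanufacturing tasks.

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Classic perceptron rule: nudge the weights toward misclassified samples.
    Labels are +/-1; the bias is folded in as an extra constant input."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            if yi * np.dot(w, xi) <= 0:      # misclassified or on the boundary
                w += lr * yi * xi
    return w

# Linearly separable toy data (e.g. two sensor features per inspected part).
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
y = np.array([-1, -1, 1, 1])
w = train_perceptron(X, y)
print(np.sign(np.hstack([X, np.ones((4, 1))]) @ w))  # recovers the labels
```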
Deep Learning for Robot Perception and Cognition introduces a broad range of topics and methods in deep learning for robot perception and cognition together with end-to-end methodologies. The book provides the conceptual and mathematical background needed for approaching a large number of robot perception and cognition tasks from an end-to-end learning point-of-view. The book is suitable for students, university and industry researchers and practitioners in Robotic Vision, Intelligent Control, Mechatronics, Deep Learning, Robotic Perception and Cognition tasks.
NVIDIA's Full-Color Guide to Deep Learning: All You Need to Get Started and Get Results "To enable everyone to be part of this historic revolution requires the democratization of AI knowledge and resources. This book is timely and relevant towards accomplishing these lofty goals." -- From the foreword by Dr. Anima Anandkumar, Bren Professor, Caltech, and Director of ML Research, NVIDIA "Ekman uses a learning technique that in our experience has proven pivotal to success: asking the reader to think about using DL techniques in practice. His straightforward approach is refreshing, and he permits the reader to dream, just a bit, about where DL may yet take us." -- From the foreword by Dr. Craig Clawson, Director, NVIDIA Deep Learning Institute Deep learning (DL) is a key component of today's exciting advances in machine learning and artificial intelligence. Learning Deep Learning is a complete guide to DL. Illuminating both the core concepts and the hands-on programming techniques needed to succeed, this book is ideal for developers, data scientists, analysts, and others -- including those with no prior machine learning or statistics experience. After introducing the essential building blocks of deep neural networks, such as artificial neurons and fully connected, convolutional, and recurrent layers, Magnus Ekman shows how to use them to build advanced architectures, including the Transformer. He describes how these concepts are used to build modern networks for computer vision and natural language processing (NLP), including Mask R-CNN, GPT, and BERT, and he explains how to build a natural language translator and a system that generates natural language descriptions of images. Throughout, Ekman provides concise, well-annotated code examples using TensorFlow with Keras. Corresponding PyTorch examples are provided online, and the book thereby covers the two dominant Python libraries for DL used in industry and academia. He concludes with an introduction to neural architecture search (NAS), exploring important ethical issues and providing resources for further learning. Readers will explore and master core concepts such as perceptrons, gradient-based learning, sigmoid neurons, and backpropagation; see how DL frameworks make it easier to develop more complicated and useful neural networks; discover how convolutional neural networks (CNNs) revolutionize image classification and analysis; apply recurrent neural networks (RNNs) and long short-term memory (LSTM) to text and other variable-length sequences; master NLP with sequence-to-sequence networks and the Transformer architecture; and build applications for natural language translation and image captioning. NVIDIA's invention of the GPU sparked the PC gaming market. The company's pioneering work in accelerated computing -- a supercharged form of computing at the intersection of computer graphics, high-performance computing, and AI -- is reshaping trillion-dollar industries, such as transportation, healthcare, and manufacturing, and fueling the growth of many others. Register your book for convenient access to downloads, updates, and/or corrections as they become available. See inside book for details.
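The book's examples use TensorFlow with Keras; as a taste of that style, here is a minimal, self-contained classifier on synthetic data (an illustrative sketch of the API only, not an example from the book; the layer sizes and synthetic labels are arbitrary).

```python
import numpy as np
import tensorflow as tf

# Synthetic two-class data: the label depends on the sum of the first two features.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 20)).astype("float32")
y = (X[:, 0] + X[:, 1] > 0).astype("int32")

# Small fully connected network in the Keras Sequential style.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy]
```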
This book offers a new, theoretical approach to information dynamics, i.e., information processing in complex dynamical systems. The presentation establishes a consistent theoretical framework for the problem of discovering knowledge behind empirical, dynamical data and addresses applications in information processing and coding in dynamical systems. This will be an essential reference for those in neural computing, information theory, nonlinear dynamics and complex systems modeling.
In recent years there has been tremendous activity in computational neuroscience resulting from two parallel developments. On the one hand, our knowledge of real nervous systems has increased dramatically over the years; on the other, there is now enough computing power available to perform realistic simulations of actual neural circuits. This is leading to a revolution in quantitative neuroscience, which is attracting a growing number of scientists from non-biological disciplines. These scientists bring with them expertise in signal processing, information theory, and dynamical systems theory that has helped transform our ways of approaching neural systems. New developments in experimental techniques have enabled biologists to gather the data necessary to test these new theories. While we do not yet understand how the brain sees, hears or smells, we do have testable models of specific components of visual, auditory, and olfactory processing. Some of these models have been applied to help construct artificial vision and hearing systems. Similarly, our understanding of motor control has grown to the point where it has become a useful guide in the development of artificial robots. Many neuroscientists believe that we have only scratched the surface, and that a more complete understanding of biological information processing is likely to lead to technologies whose impact will propel another industrial revolution. Neural Systems: Analysis and Modeling contains the collected papers of the 1991 Conference on Analysis and Modeling of Neural Systems (AMNS), and the papers presented at the satellite symposium on compartmental modeling, held July 23-26, 1992, in San Francisco, California. The papers included present an update of the most recent developments in quantitative analysis and modeling techniques for the study of neural systems.
The theoretical foundations of Neural Networks and Analog Computation conceptualize neural networks as a particular type of computer consisting of multiple assemblies of basic processors interconnected in an intricate structure. Examining these networks under various resource constraints reveals a continuum of computational devices, several of which coincide with well-known classical models. On a mathematical level, the treatment of neural computations not only enriches the theory of computation but also explicates the computational complexity associated with biological networks, adaptive engineering tools, and related models from the fields of control theory and nonlinear dynamics. The material in this book will be of interest to researchers in a variety of engineering and applied sciences disciplines. In addition, the work may provide the basis for a graduate-level seminar in neural networks for computer science students.
International Conference Intelligent Network and Intelligence in Networks (2IN97) French Ministry of Telecommunication, 20 Avenue de Segur, Paris, France September 2-5, 1997 Organizer: IFIP WG 6.7 - Intelligent Networks Sponsorship: IEEE, Alcatel, Ericsson, France Telecom, Nokia, Nordic Teleoperators, Siemens, Telecom Finland, Lab. PRiSM Aim of the conference: To identify and study current issues related to the development of intelligent capabilities in networks. These issues include the development and distribution of services in broadband and mobile networks. This conference belongs to a series of IFIP conferences on Intelligent Networks. The first one took place in Lappeenranta in August 1994, the second in Copenhagen in August 1995. The proceedings of both events have been published by Chapman & Hall. IFIP Working Group 6.7 on IN has concentrated on the research and development of Intelligent Network architectures. At first, the activities concentrated on service creation, service management, database issues, feature interaction, IN performance and advanced signalling for broadband services. Later, the research activities turned towards the distribution of intelligence in networks and IN applications to multimedia and mobility. The market issues of new services have also been studied. From the system development point of view, topics from OMG and TINA-C have been considered.
Artificial neural networks are used to model systems that receive inputs and produce outputs. The relationships between the inputs and outputs and the representation parameters are critical issues in the design of related engineering systems, and sensitivity analysis concerns methods for analyzing these relationships. Perturbations of neural networks are caused by machine imprecision, and they can be simulated by embedding disturbances in the original inputs or connection weights, allowing us to study the characteristics of a function under small perturbations of its parameters. This is the first book to present a systematic description of sensitivity analysis methods for artificial neural networks. It covers sensitivity analysis of multilayer perceptron neural networks and radial basis function neural networks, two widely used models in the machine learning field. The authors examine the applications of such analysis in tasks such as feature selection, sample reduction, and network optimization. The book will be useful for engineers applying neural network sensitivity analysis to solve practical problems, and for researchers interested in foundational problems in neural networks.
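A simple way to see what input (or weight) sensitivity means in practice is a finite-difference probe: perturb one parameter slightly and measure how much the network output moves. The sketch below does this for the inputs of a small, randomly initialized one-hidden-layer network; it is a generic illustration with invented shapes, not one of the analytical methods developed in the book.

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    """One-hidden-layer MLP with tanh units (a stand-in for a trained network)."""
    return np.tanh(x @ W1 + b1) @ W2 + b2

def input_sensitivity(x, params, eps=1e-4):
    """Finite-difference estimate of d(output)/d(input_i)."""
    base = mlp_forward(x, *params)
    sens = np.zeros_like(x)
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += eps
        sens[i] = (mlp_forward(xp, *params) - base) / eps
    return sens

rng = np.random.default_rng(0)
params = (rng.normal(size=(4, 8)), rng.normal(size=8),
          rng.normal(size=8), rng.normal())
x = rng.normal(size=4)
print(input_sensitivity(x, params))  # larger magnitude = more influential input
```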
The purpose of this monograph is to give the broad aspects of nonlinear identification and control using neural networks. It consists of three parts: an introduction to the fundamental principles of neural networks; several methods for nonlinear identification using neural networks; and various techniques for nonlinear control using neural networks. A number of simulated and industrial examples are used throughout the monograph to demonstrate the operation of nonlinear identification and control techniques using neural networks. It should be emphasised that the methods and systems of nonlinear control have not progressed as rapidly as those for linear control. Comparatively speaking, at the present time, they are still in the development stage. We believe that the fundamental theory, various design methods and techniques, and several applications of nonlinear identification and control using neural networks that are presented in this monograph will enable the reader to analyse and synthesise nonlinear control systems quantitatively.
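As a toy illustration of nonlinear identification with a neural network (not one of the monograph's simulated or industrial examples), the sketch below fits a small network to input-output data from an assumed nonlinear plant using lagged regressors, a NARX-style setup; the plant equation, lags, and network size are all invented for the demo.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Generate input-output data from a toy nonlinear plant (treated as "unknown").
rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 500)
y = np.zeros(500)
for k in range(1, 500):
    y[k] = 0.6 * np.sin(y[k - 1]) + 0.4 * u[k - 1] ** 3

# Identify y[k] from the lagged regressors (y[k-1], u[k-1]).
X = np.column_stack([y[:-1], u[:-1]])
t = y[1:]
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
model.fit(X, t)
print("one-step prediction MSE:", np.mean((model.predict(X) - t) ** 2))
```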
Deep learning is a subset of machine learning that uses artificial neural networks to learn from data, including unlabeled data processed without supervision. Its major advantages are the ability to analyze big data effectively and the use of self-adaptive algorithms that improve as more data become available. When applied to engineering, deep learning can have a great impact on the decision-making process. Deep Learning Applications and Intelligent Decision Making in Engineering is a pivotal reference source that provides practical applications of deep learning to improve decision-making methods and construct smart environments. Highlighting topics such as smart transportation, e-commerce, and cyber physical systems, this book is ideally designed for engineers, computer scientists, programmers, software engineers, research scholars, IT professionals, academicians, and postgraduate students seeking current research on the implementation of automation and deep learning in various engineering disciplines.
This book provides a technical approach to a Business Resilience System with its Risk Atom and Processing Data Point based on fuzzy logic and cloud computation in real time. Its purpose and objectives define a clear set of expectations for organizations and enterprises so that their network systems and supply chains are fully resilient and protected against cyber-attacks, man-made threats, and natural disasters. These enterprises include financial, organizational, homeland security, and supply chain operations with multi-point manufacturing across the world. Market shares and marketing advantages are expected to result from the implementation of the system. The collected information and defined objectives form the basis for monitoring and analyzing the data through cloud computation, and will help ensure survivability against unexpected threats. This book will be useful for advanced undergraduate and graduate students in the field of computer engineering, engineers who work for manufacturing companies, business analysts in retail and e-commerce, and those working in the defense industry, information security, and information technology.
Neural Information Processing and VLSI provides a unified treatment of this important subject for use in classrooms, industry, and research laboratories, in order to develop advanced artificial and biologically-inspired neural networks using compact analog and digital VLSI parallel processing techniques. Neural Information Processing and VLSI systematically presents various neural network paradigms, computing architectures, and the associated electronic/optical implementations using efficient VLSI design methodologies. Conventional digital machines cannot perform computationally-intensive tasks with satisfactory performance in such areas as intelligent perception, including visual and auditory signal processing, recognition, understanding, and logical reasoning (where the human being and even a small living animal can do a superb job). Recent research advances in artificial and biological neural networks have established an important foundation for high-performance information processing with more efficient use of computing resources. The secret lies in the design optimization at various levels of computing and communication of intelligent machines. Each neural network system consists of massively parallel and distributed signal processors, with every processor performing very simple operations, thus consuming little power. The large computational capabilities of these systems, in the range of hundreds of giga- to several tera-operations per second, derive from collective parallel processing and efficient data routing through well-structured interconnection networks. Deep-submicron very large-scale integration (VLSI) technologies can integrate tens of millions of transistors in a single silicon chip for complex signal processing and information manipulation. The book is suitable for those interested in efficient neurocomputing as well as those curious about neural network system applications. It has been especially prepared for use as a text for advanced undergraduate and first year graduate students, and is an excellent reference book for researchers and scientists working in the fields covered.
Computational neuroscience is best defined by its focus on understanding nervous systems as computational devices rather than by a particular experimental technique. Accordingly, while the majority of the papers in this book describe analysis and modeling efforts, other papers describe the results of new biological experiments explicitly placed in the context of computational issues. The distribution of subjects in Computation and Neural Systems reflects the current state of the field. In addition to the scientific results presented here, numerous papers also describe the ongoing technical developments that are critical for the continued growth of computational neuroscience. Computation and Neural Systems includes papers presented at the First Annual Computation and Neural Systems meeting held in San Francisco, CA, July 26-29, 1992.
The advent of the computer age has set in motion a profound shift in our perception of science - its structure, its aims and its evolution. Traditionally, the principal domains of science were, and are, considered to be mathematics, physics, chemistry, biology, astronomy and related disciplines. But today, and to an increasing extent, scientific progress is being driven by a quest for machine intelligence - for systems which possess a high MIQ (Machine IQ) and can perform a wide variety of physical and mental tasks with minimal human intervention. The role model for intelligent systems is the human mind. The influence of the human mind as a role model is clearly visible in the methodologies which have emerged, mainly during the past two decades, for the conception, design and utilization of intelligent systems. At the center of these methodologies are fuzzy logic (FL); neurocomputing (NC); evolutionary computing (EC); probabilistic computing (PC); chaotic computing (CC); and machine learning (ML). Collectively, these methodologies constitute what is called soft computing (SC). In this perspective, soft computing is basically a coalition of methodologies which collectively provide a body of concepts and techniques for automation of reasoning and decision-making in an environment of imprecision, uncertainty and partial truth.
This book is an essential contribution to the description of fuzziness in information systems. Usually users want to retrieve data or summarized information from a database and are interested in classifying it or building rule-based systems on it. But they are often not aware of the nature of this data and/or are unable to determine clear search criteria. The book examines theoretical and practical approaches to fuzziness in information systems based on statistical data related to territorial units. Chapter 1 discusses the theory of fuzzy sets and fuzzy logic to enable readers to understand the information presented in the book. Chapter 2 is devoted to flexible queries and includes issues like constructing fuzzy sets for query conditions, and aggregation operators for commutative and non-commutative conditions, while Chapter 3 focuses on linguistic summaries. Chapter 4 presents fuzzy logic control architecture adjusted specifically for the aims of business and governmental agencies, and shows fuzzy rules and procedures for solving inference tasks. Chapter 5 covers the fuzzification of classical relational databases with an emphasis on storing fuzzy data in classical relational databases in such a way that existing data and normal forms are not affected. This book also examines practical aspects of user-friendly interfaces for storing, updating, querying and summarizing. Lastly, Chapter 6 briefly discusses possible integration of fuzzy queries, summarization and inference related to crisp and fuzzy databases. The main target audience of the book is researchers and students working in the fields of data analysis, database design and business intelligence. As it does not go too deeply into the foundation and mathematical theory of fuzzy logic and relational algebra, it is also of interest to advanced professionals developing tailored applications based on fuzzy sets.
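To make the flexible-query idea in Chapter 2 concrete, the sketch below evaluates the condition "medium population AND low unemployment" with trapezoidal membership functions and the min operator for AND; the membership breakpoints, the aggregation choice, and the data are assumptions for illustration, not the book's worked examples.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 outside (a, d), 1 on [b, c], linear in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Flexible query: "medium population AND low unemployment", AND realized as min.
towns = {"A": (12000, 3.1), "B": (45000, 7.8), "C": (30000, 4.9)}
for name, (population, unemployment) in towns.items():
    mu = min(trapezoid(population, 10000, 20000, 40000, 60000),
             trapezoid(unemployment, -1, 0, 4, 6))
    print(name, round(mu, 2))  # degree to which each town satisfies the query
```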
This volume reports the proceedings of the 15th Italian Workshop on Neural Nets WIRN04. The workshop, held in Perugia from September 14th to 17th, 2004, has been jointly organized by the International Institute for Advanced Scientific Studies "Eduardo R. Caianiello" (IIASS) and the Società Italiana Reti Neuroniche (SIREN). This year the Conference has constituted a joint event of three associations: Associazione Italiana per l'Intelligenza Artificiale (AIIA), Gruppo Italiano di Ricercatori in Pattern Recognition (GIRPR), and Società Italiana Reti Neuroniche (SIREN), within the conference CISI-04 (Conferenza Italiana sui Sistemi Intelligenti - 2004) combining the three associations' annual meetings. The aim was to examine Intelligent Systems as a joint topic, pointing out synergies and differences between the various approaches. The volume covers this matter from the Neural Networks and related fields perspective. It contains invited review papers and selected original contributions presented in either oral or poster sessions by both Italian and foreign researchers. The contributions have been assembled, for reading convenience, into five sections. The first two collect papers from pre-WIRN workshops focused on Computational Intelligence Methods for Bioinformatics and Biostatistics, and Computational Intelligence on Hardware, respectively. The remaining sections concern Architectures and Algorithms, Models, and Applications. The Editors would like to thank the invited speakers and all the contributors whose highly qualified papers helped the success of the Workshop. Finally, special thanks go to the referees for their accurate work.
This book deals with expert evaluation models in the form of semantic spaces with completeness and orthogonality properties (complete orthogonal semantic spaces). Theoretical and practical studies of some researchers have shown that these spaces describe expert evaluations most adequately, and as a result they have often been included in more sophisticated models of intellectual systems for decision making and data analysis. Methods for constructing expert evaluation models of characteristics, comparative analysis of these models, studies of the structural composition of their sets, and the construction of generalized models are described. Models to obtain rating points for objects and groups of objects with qualitative and quantitative characteristics are presented. A number of regression models combining elements of classical and fuzzy regression are presented. All methods and models developed by the authors and described in the book are illustrated with examples from various fields of human activity. This book is meant for scientists in the fields of computer science, expert systems, artificial intelligence and decision making, and also for engineers, post-graduate students and students who study fuzzy set theory and its applications.
This volume includes papers presented at the Third Annual Computation and Neural Systems meeting (CNS*94) held in Monterey, California, July 21-26, 1994. This collection includes 71 of the more than 100 papers presented at this year's meeting. Acceptance for meeting presentation was based on the peer review of preliminary papers by at least two referees. The papers in this volume were submitted in final form after the meeting. As represented by this volume, CNS meetings continue to expand in quality, size and breadth of focus as increasing numbers of neuroscientists are taking a computational approach to understanding nervous system function. The CNS meetings are intended to showcase the best of current research in computational neuroscience. As such, the meeting is fundamentally focused on understanding the relationship between the structure of nervous systems and their function. What is clear from the continued expansion of the CNS meetings is that computational approaches are increasingly being applied at all levels of neurobiological analysis, in an ever-growing number of experimental preparations and neural subsystems. Thus, experimental subjects range from crickets to primates; sensory systems range from vision to electroreception; experimental approaches range from realistic models of ion channels to the analysis of the information content of spike trains. For this reason, the CNS meetings represent an opportunity for computational neurobiologists to consider their research results in a much broader context than is usually possible.