Books > Computing & IT > Applications of computing > Artificial intelligence > Neural networks
The 1990 Grainger Lectures, delivered at the University of Illinois, Urbana-Champaign, September 28 - October 1, 1990, were devoted to a critical reexamination of the foundations of adaptive control. In this volume the lectures are expanded with recent developments and solutions to some long-standing open problems. The concepts and approaches presented are both novel and of fundamental importance for adaptive control research in the 1990s. The papers in Part I present unifications, reappraisals and new results on tunability, convergence and robustness of adaptive linear control, whereas the papers in Part II formulate new problems in adaptive control of nonlinear systems and solve them without any linear constraints imposed on the nonlinearities.
In this book a global shape model is developed and applied to the analysis of real pictures acquired with a visible light camera under varying conditions of optical degradation. Computational feasibility of the algorithms derived from this model is achieved by analytical means. The aim is to develop methods for image understanding based on structured restoration, for example, automatic detection of abnormalities. We also want to find the limits of applicability of the algorithms. This is done by making the optical degradations more and more severe until the algorithms no longer succeed in their task. This computer experiment in pattern theory is one of several; the others, LEAVES, X-RAYS, and RANGE, are described elsewhere. This book is suitable for an advanced undergraduate or graduate seminar in pattern theory, or as an accompanying book for applied probability, computer vision, or pattern recognition.
1.1 The problem and the approach. The model developed here, which is actually more a collection of components than a single monolithic structure, traces a path from relatively low-level neural/connectionistic structures and processes to relatively high-level animal/artificial intelligence behaviors. Incremental extension of this initial path permits increasingly sophisticated representation and processing strategies, and consequently increasingly sophisticated behavior. The initial chapters develop the basic components of the system at the node and network level, with the general goal of efficient category learning and representation. The later chapters are more concerned with the problems of assembling sequences of actions in order to achieve a given goal state. The model is referred to as connectionistic rather than neural, because, while the basic components are neuron-like, there is only limited commitment to physiological realism. Consequently the neuron-like elements are referred to as "nodes" rather than "neurons." The model is directed more at the behavioral level, and at that level, numerous concepts from animal learning theory are directly applicable to connectionistic modeling. An attempt to actually implement these behavioral theories in a computer simulation can be quite informative, as most are only partially specified, and the gaps may be apparent only when actually building a functioning system. In addition, a computer implementation provides an improved capability to explore the strengths and limitations of the different approaches as well as their various interactions.
This book is a comprehensive introduction to the neural network models currently under intensive study for computational applications. It is a detailed, logically-developed treatment that covers the theory and uses of collective computational networks, including associative memory, feed forward networks, and unsupervised learning. It also provides coverage of neural network applications in a variety of problems of both theoretical and practical interest.
The soft-cover study edition now available is a revised reprint of the successful first edition of 1988. It collects invited presentations of an Advanced Research Workshop on "Neural Computers," held in Neuss, Federal Republic of Germany, September 28 - October 2, 1987. The objectives of the workshop were to promote international collaboration among scientists from the fields of Neuroscience, Computational Neuroscience, Cellular Automata, Artificial Intelligence, and Computer Design, and to review our present knowledge of brain research and novel computers with neural network architecture. The workshop assembled some fifty invited experts from Europe, America and Japan representing the relevant fields. The book describes the transfer of concepts of brain function and brain architecture to the design of self-organizing computers with neural network architecture. The contributions cover a wide range of topics, including Neural Network Architecture, Learning and Memory, Fault Tolerance, Pattern Recognition, and Motor Control in Brains Versus Neural Computers. Twelve of the contributions are review papers. In addition, group reports summarize the discussions regarding four specific topics relevant to the state of the art in neural computers. With its extensive reference list as well as its subject and name indexes, this volume will serve as a reference book for future research in the field of Neural Computers.
This is an exciting time. The study of neural networks is enjoying a great renaissance, both in computational neuroscience - the development of information processing models of living brains - and in neural computing - the use of neurally inspired concepts in the construction of "intelligent" machines. Thus the title of this volume, Dynamic Interactions in Neural Networks: Models and Data, can be given two interpretations. We present models and data on the dynamic interactions occurring in the brain, and we also exhibit the dynamic interactions between research in computational neuroscience and in neural computing, as scientists seek to find common principles that may guide us in the understanding of our own brains and in the design of artificial neural networks. In fact, the book title has yet a third interpretation. It is based on the U.S.-Japan Seminar on "Competition and Cooperation in Neural Nets," which we organized at the University of Southern California, Los Angeles, May 18-22, 1987, and is thus the record of interaction of scientists on both sides of the Pacific in advancing the frontiers of this dynamic, reborn field. The book focuses on three major aspects of neural network function: learning, perception, and action. More specifically, the chapters are grouped under three headings: "Development and Learning in Adaptive Networks," "Visual Function," and "Motor Control and the Cerebellum."
In today's data-driven world, more sophisticated algorithms for data processing are in high demand, especially when the data cannot be handled with traditional techniques. Self-learning and adaptive algorithms are now widely used by such leading giants as Google, Tesla, Microsoft, and Facebook in their projects and applications. In this guide designed for researchers and students of computer science, readers will find a resource for applying methods that work on real-life problems to their own challenging applications, and a go-to work that makes fuzzy clustering issues and aspects clear. Including research relevant to those studying cybernetics, applied mathematics, statistics, engineering, and bioinformatics who are working in the areas of machine learning, artificial intelligence, complex system modeling and analysis, neural networks, and optimization, this is an ideal read for anyone interested in learning more about the fascinating new developments in machine learning.
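As a rough illustration of the kind of fuzzy clustering such a book covers, the sketch below implements plain fuzzy c-means in Python; the toy two-blob data, the cluster count c = 2 and the fuzzifier m = 2 are arbitrary choices made for illustration, not examples from the book.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: soft memberships instead of hard cluster labels."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Random initial membership matrix U (n points x c clusters), rows sum to 1.
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # Cluster centers as membership-weighted means.
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        # Distances from every point to every center (small epsilon avoids /0).
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Standard update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1)).
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
    return centers, U

if __name__ == "__main__":
    X = np.vstack([np.random.randn(50, 2) + [0, 0], np.random.randn(50, 2) + [5, 5]])
    centers, U = fuzzy_c_means(X, c=2)
    print(centers)          # two cluster centers
    print(U[:3].round(2))   # soft memberships of the first three points
```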
Deep Learning for Robot Perception and Cognition introduces a broad range of topics and methods in deep learning for robot perception and cognition, together with end-to-end methodologies. The book provides the conceptual and mathematical background needed for approaching a large number of robot perception and cognition tasks from an end-to-end learning point of view. The book is suitable for students, university and industry researchers, and practitioners in Robotic Vision, Intelligent Control, Mechatronics, Deep Learning, and Robotic Perception and Cognition tasks.
The utility of artificial neural network models lies in the fact that they can be used to infer functions from observations, making them especially useful in applications where the complexity of the data or task makes the design of such functions by hand impractical. Exploring Neural Networks with C# presents the important properties of neural networks while keeping the complex mathematics to a minimum. Explaining how to build and use neural networks, it presents complicated information about neural network structure, functioning, and learning in a manner that is easy to understand. Taking a "learn by doing" approach, the book is filled with illustrations to guide you through the mystery of neural networks. Examples of experiments are provided in the text to encourage individual research. Online access to C# programs is also provided to help you discover the properties of neural networks. Following the procedures and using the programs included with the book will allow you to learn how to work with neural networks and evaluate your progress. You can download the programs as both executable applications and C# source code from http://home.agh.edu.pl/~tad//index.php?page=programy&lang=en
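The book's companion programs are written in C#; purely as a flavour of the "learn by doing" experiments it encourages, here is a minimal single-neuron (perceptron) sketch in Python. The AND-gate data, learning rate and epoch count are arbitrary illustrative choices, not the book's own examples.

```python
import numpy as np

# Toy AND-gate data: two inputs plus a constant bias column, binary targets.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

w = np.zeros(3)          # weights (last entry acts as the bias)
lr = 0.1                 # learning rate (arbitrary)

for epoch in range(20):
    for xi, target in zip(X, y):
        out = 1.0 if xi @ w > 0 else 0.0      # threshold activation
        w += lr * (target - out) * xi          # perceptron learning rule

print("learned weights:", w)
print("predictions:", [1.0 if xi @ w > 0 else 0.0 for xi in X])
```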
Deep Learning Neural Networks is the fastest growing field in machine learning. It serves as a powerful computational tool for solving prediction, decision, diagnosis and detection problems based on a well-defined computational architecture. It has been successfully applied to a broad range of applications, from computer security, speech recognition, and image and video recognition to industrial fault detection, medical diagnostics and finance. This comprehensive textbook is the first in this new and emerging field. Numerous case studies are succinctly demonstrated in the text. It is intended for use as a one-semester graduate-level university text and as a textbook for research and development establishments in industry, medicine and financial research.
This book provides a starting point for software professionals to apply artificial neural networks to software reliability prediction without requiring analyst capability and expertise in the various ANN architectures and their optimization. An artificial neural network (ANN) has proven to be a universal approximator for any non-linear continuous function with arbitrary accuracy. This book presents how to apply ANNs to measure various software reliability indicators: the number of failures in a given time, the time between successive failures, fault-prone modules, and development effort. The application of artificial neural networks to software reliability prediction during the testing phase, as well as during the early phases of the software development process, is presented. Applications of artificial neural networks for the above purposes are discussed with experimental results so that practitioners can easily use ANN models for predicting software reliability indicators.
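As a rough illustration of how such a prediction task can be framed, the following sketch trains a small multi-layer perceptron (scikit-learn's MLPRegressor) to predict the next time between failures from the previous three. The synthetic "reliability growth" data and the network size are assumptions made up for the example, not results or models from the book.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic, illustrative failure data: times between successive failures
# tend to grow as reliability improves (not real project data).
rng = np.random.default_rng(1)
tbf = rng.exponential(scale=np.linspace(1.0, 10.0, 120))

# Frame reliability prediction as regression: use the last 3 inter-failure
# times to predict the next one.
k = 3
X = np.array([tbf[i:i + k] for i in range(len(tbf) - k)])
y = tbf[k:]

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X[:-10], y[:-10])                      # train on all but the last 10 windows
print(model.predict(X[-10:]).round(2))           # predicted next times between failures
print(y[-10:].round(2))                          # actual values for comparison
```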
Covering a wide range of notions concerning the hesitant fuzzy set and its extensions, this book provides a comprehensive reference on the topic. In cases where different sources of vagueness appear simultaneously, the concept of a fuzzy set is not able to properly model uncertain, imprecise and vague information. To overcome this limitation, different types of fuzzy extensions have been introduced. Among them, the hesitant fuzzy set was first introduced in 2010, and its extensions have since attracted increasing interest and attention. It is not an exaggeration to say that the recent decade has seen the blossoming of a large set of techniques and theoretical outcomes for the hesitant fuzzy set, together with its extensions and applications. As the research has moved beyond its infancy and is now entering a maturing phase with increased numbers and types of extensions, this book aims to give a comprehensive review of this work. Presenting a review of many important types of hesitant fuzzy extensions, and including references to a large number of related publications, this book will serve as a useful reference for researchers in this field.
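A hesitant fuzzy element records several possible membership degrees for one item, reflecting hesitation between evaluations; a common way to compare two such elements is by their score, the average of those degrees. The following minimal sketch, with made-up membership values, is only meant to make the notion concrete.

```python
# A hesitant fuzzy element (HFE) is a set of possible membership degrees in [0, 1].
def score(hfe):
    """Score of an HFE: the average of its membership degrees."""
    return sum(hfe) / len(hfe)

# Two illustrative HFEs for the same alternative, e.g. from different experts.
h1 = [0.3, 0.5, 0.6]
h2 = [0.4, 0.7]

print(round(score(h1), 3), round(score(h2), 3))   # 0.467 and 0.55
print("h2 preferred" if score(h2) > score(h1) else "h1 preferred")
```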
A First Course in Fuzzy Logic, Fourth Edition is an expanded version of the successful third edition. It provides a comprehensive introduction to the theory and applications of fuzzy logic. This popular text offers a firm mathematical basis for the calculus of fuzzy concepts necessary for designing intelligent systems and a solid background for readers to pursue further studies and real-world applications. New in the Fourth Edition: new results on fuzzy sets of type-2; more information on copulas for modeling dependence structures; and quantum probability for uncertainty modeling in the social sciences, especially in economics. With its comprehensive updates, this new edition presents all the background necessary for students, instructors and professionals to begin using fuzzy logic in its many applications in computer science, mathematics, statistics, and engineering. About the Authors: Hung T. Nguyen is a Professor Emeritus at the Department of Mathematical Sciences, New Mexico State University. He is also an Adjunct Professor of Economics at Chiang Mai University, Thailand. Carol L. Walker is also a Professor Emeritus at the Department of Mathematical Sciences, New Mexico State University. Elbert A. Walker is a Professor Emeritus at the Department of Mathematical Sciences, New Mexico State University.
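The "calculus of fuzzy concepts" begins with membership functions and the standard min/max connectives. The short sketch below, using an assumed triangular membership function and made-up "warm"/"hot" temperature sets, illustrates that starting point; it is not code from the book.

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

x = np.linspace(0, 40, 9)            # a few temperature values
warm = triangular(x, 10, 20, 30)     # fuzzy set "warm temperature"
hot = triangular(x, 20, 30, 40)      # fuzzy set "hot temperature"

# Standard fuzzy connectives: AND as pointwise min, OR as pointwise max,
# NOT as 1 minus the membership degree.
print(np.minimum(warm, hot).round(2))   # warm AND hot
print(np.maximum(warm, hot).round(2))   # warm OR hot
print((1 - warm).round(2))              # NOT warm
```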
This monograph provides and explains the probability theory of geometric graphs. Applications of the theory include communications networks, classification, spatial statistics, epidemiology, astrophysics and neural networks.
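A random geometric graph joins points that fall within a threshold distance of one another. The following small sketch (point count and radius chosen arbitrarily) generates one on uniform points in the unit square and compares the observed mean degree with a rough theoretical value; it is only an illustration of the object the monograph studies.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 50, 0.2                      # number of points and connection radius (arbitrary)
pts = rng.random((n, 2))            # uniform points in the unit square

# Join two points by an edge whenever their Euclidean distance is below r.
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
adj = (dist < r) & ~np.eye(n, dtype=bool)

edges = adj.sum() // 2
degrees = adj.sum(axis=1)
print(f"{edges} edges, mean degree {degrees.mean():.2f}")
# Expected mean degree is roughly (n - 1) * pi * r^2, ignoring boundary effects.
print("theory (ignoring edge effects):", (n - 1) * np.pi * r**2)
```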
Deep learning from the ground up using R and the powerful Keras library! In Deep Learning with R, Second Edition you will learn: deep learning from first principles; image classification and image segmentation; time series forecasting; text classification and machine translation; and text generation, neural style transfer, and image generation. Deep Learning with R, Second Edition shows you how to put deep learning into action. It's based on the revised new edition of Francois Chollet's bestselling Deep Learning with Python. All code and examples have been expertly translated to the R language by Tomasz Kalinowski, who maintains the Keras and TensorFlow R packages at RStudio. Novices and experienced ML practitioners will love the expert insights, practical techniques, and important theory for building neural networks. About the technology: deep learning has become essential knowledge for data scientists, researchers, and software developers. The R language APIs for Keras and TensorFlow put deep learning within reach for all R users, even if they have no experience with advanced machine learning or neural networks. This book shows you how to get started on core DL tasks like computer vision, natural language processing, and more using R. About the reader: for readers with intermediate R skills. No previous experience with Keras, TensorFlow, or deep learning is required.
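The book's code is in R; as a rough illustration of the same define/compile/fit Keras workflow, here is a minimal Python sketch for digit classification on MNIST. The architecture and training settings are arbitrary illustrative choices, not the book's examples.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Load the built-in MNIST digits and scale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Define: a small fully connected classifier (choices are arbitrary).
model = keras.Sequential([
    keras.Input(shape=(28, 28)),
    layers.Flatten(),                          # 28x28 image -> 784-vector
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),    # 10 digit classes
])

# Compile and fit, then evaluate on held-out test data.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, batch_size=128, validation_split=0.1)
print(model.evaluate(x_test, y_test, verbose=0))   # [test loss, test accuracy]
```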
Artificial neural networks can be employed to solve a wide spectrum of problems in optimization, parallel computing, matrix algebra and signal processing. Taking a computational approach, this book explains how ANNs provide solutions in real time, and allow the visualization and development of new techniques and architectures. Features include a guide to the fundamental mathematics of neurocomputing, a review of neural network models and an analysis of their associated algorithms, and state-of-the-art procedures to solve optimization problems. Computer simulation programs MATLAB, TUTSIM and SPICE illustrate the validity and performance of the algorithms and architectures described. The authors encourage the reader to be creative in visualizing new approaches and detail how other specialized computer programs can evaluate performance. Each chapter concludes with a short bibliography. Illustrative worked examples, questions and problems assist self-study. The authors' self-contained approach will appeal to a wide range of readers, including professional engineers working in computing, optimization, operational research, systems identification and control theory. Undergraduate and postgraduate students in computer science, electrical and electronic engineering will also find this text invaluable. In particular, the text will be ideal to supplement courses in circuit analysis and design, adaptive systems, control systems, signal processing and parallel computing.
New technologies in engineering, physics and biomedicine are demanding increasingly complex methods of digital signal processing. By presenting the latest research work, the authors demonstrate how real-time recurrent neural networks (RNNs) can be implemented to expand the range of traditional signal processing techniques and to help tackle the problem of prediction. Within this text, neural networks are considered as massively interconnected nonlinear adaptive filters.
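One simplified way to picture a recurrent network acting as a nonlinear adaptive filter is to fix a random recurrent state and adapt only a linear readout online to predict the next sample of a signal. The sketch below does exactly that; the noisy sinusoid, state size and step size are made-up illustrative choices, and this is not one of the algorithms from the book.

```python
import numpy as np

rng = np.random.default_rng(2)
T, H = 1000, 20                       # signal length and hidden state size (arbitrary)
x = np.sin(0.07 * np.arange(T)) + 0.05 * rng.standard_normal(T)   # noisy sinusoid

# Fixed random recurrent weights; only the linear readout is adapted online.
W_in = rng.standard_normal(H) * 0.5
W_rec = rng.standard_normal((H, H)) * (0.9 / np.sqrt(H))   # keeps the state stable
w_out = np.zeros(H)
h = np.zeros(H)
mu = 0.02                             # readout learning rate (arbitrary)

errors = []
for t in range(T - 1):
    h = np.tanh(W_in * x[t] + W_rec @ h)     # nonlinear recurrent state update
    y_hat = w_out @ h                        # one-step-ahead prediction of x[t+1]
    e = x[t + 1] - y_hat
    w_out += mu * e * h                      # LMS-style online adaptation of the readout
    errors.append(e * e)

print("mean squared prediction error, first 100 vs last 100 steps:")
print(round(float(np.mean(errors[:100])), 4), round(float(np.mean(errors[-100:])), 4))
```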
Neural networks are members of a class of software that have the potential to enable intelligent computational systems capable of simulating characteristics of biological thinking and learning. Currently no standards exist to verify and validate neural network-based systems. NASA Independent Verification and Validation Facility has contracted the Institute for Scientific Research, Inc. to perform research on this topic and develop a comprehensive guide to performing V&V on adaptive systems, with emphasis on neural networks used in safety-critical or mission-critical applications. Methods and Procedures for the Verification and Validation of Artificial Neural Networks is the culmination of the first steps in that research. This volume introduces some of the more promising methods and techniques used for the verification and validation (V&V) of neural networks and adaptive systems. A comprehensive guide to performing V&V on neural network systems, aligned with the IEEE Standard for Software Verification and Validation, will follow this book.
"Connectionism and the Mind" provides a clear and balanced
introduction to connectionist networks and explores their
theoretical and philosophical implications. As in the first edition, the first few chapters focus on network architecture and offer an accessible treatment of the equations that govern learning and the propagation of activation, including a glossary for reference. The reader is walked step-by-step through such tasks as memory retrieval and prototype formation. The middle chapters pursue the implications of connectionism's focus on pattern recognition and completion as fundamental to cognition. Some proponents of connectionism have emphasized these functions to the point of rejecting any role for linguistically structured representations and rules, resulting in heated debates with advocates of symbol processing accounts of cognition. The coverage of this controversy has been updated and augmented by a new chapter on modular networks. Finally, three new chapters discuss the relation of connectionism to three emerging research programs: dynamical systems theory, artificial life, and cognitive neuroscience.
Kubernetes is an essential tool for anyone deploying and managing cloud-native applications. Kubernetes in Action, Second Edition lays out a complete introduction to container technologies and containerized applications along with practical tips for efficient deployment and operation. This revised edition of the bestselling Kubernetes in Action contains new coverage of the Kubernetes architecture, including the Kubernetes API, and a deep dive into managing a Kubernetes cluster in production. You'll start with an overview of how Docker containers work with Kubernetes and move quickly to building your first cluster. You'll gradually expand your initial application, adding features and deepening your knowledge of Kubernetes architecture and operation. In this revised and expanded second edition, you'll take a deep dive into the structure of a Kubernetes-based application and discover how to manage a Kubernetes cluster in production. As you navigate this comprehensive guide, you'll also appreciate thorough coverage of high-value topics like monitoring, tuning, and scaling.
Sharpen your coding skills by exploring established computer science problems! Classic Computer Science Problems in Java challenges you with time-tested scenarios and algorithms. You'll work through a series of exercises based in computer science fundamentals that are designed to improve your software development abilities, improve your understanding of artificial intelligence, and even prepare you to ace an interview. Classic Computer Science Problems in Java will teach you techniques to solve common-but-tricky programming issues. You'll explore foundational coding methods, fundamental algorithms, and artificial intelligence topics, all through code-centric Java tutorials and computer science exercises. As you work through examples in search, clustering, graphs, and more, you'll remember important things you've forgotten and discover classic solutions to your "new" problems! Key features: recursion, memoization, and bit manipulation; search algorithms; constraint-satisfaction problems; graph algorithms; and k-means clustering. For intermediate Java programmers. About the technology: in any computer science classroom you'll find a set of tried-and-true algorithms, techniques, and coding exercises. These techniques have stood the test of time as some of the best ways to solve problems when writing code, and expanding your Java skill set with these classic computer science methods will make you a better Java programmer. David Kopec is an assistant professor of computer science and innovation at Champlain College in Burlington, Vermont. He is the author of Dart for Absolute Beginners (Apress, 2014), Classic Computer Science Problems in Swift (Manning, 2018), and Classic Computer Science Problems in Python (Manning, 2019).
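The book's exercises are in Java; as a quick, language-neutral illustration of one of the listed techniques, memoization, here is a minimal Python sketch using functools.lru_cache.

```python
from functools import lru_cache

# Naive recursion recomputes the same subproblems exponentially often;
# memoization caches each result so every n is computed only once.
@lru_cache(maxsize=None)
def fib(n: int) -> int:
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print([fib(n) for n in range(10)])   # 0 1 1 2 3 5 8 13 21 34
print(fib(80))                       # answers instantly thanks to the cache
```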
The articles gathered in this volume represent examples of a unique approach to the study of mental phenomena: a blend of theory and experiment, informed not just by easily measurable laboratory data but also by human introspection. Subjects such as approach and avoidance, desire and fear, and novelty and habit are studied as natural events that may not exactly correspond to, but at least correlate with, some (known or unknown) electrical and chemical events in the brain.
Develop new insight into the behavior of adaptive systems. This one-of-a-kind interactive book and CD-ROM will help you develop a better understanding of the behavior of adaptive systems. Developed as part of a project aimed at innovating the teaching of adaptive systems in science and engineering, it unifies the concepts of neural networks and adaptive filters into a common framework. It begins by explaining the fundamentals of adaptive linear regression and builds on these concepts to explore pattern classification, function approximation, feature extraction, and time-series modeling/prediction. The text is integrated with the industry-standard neural network/adaptive system simulator NeuroSolutions. This allows the authors to demonstrate and reinforce key concepts using over 200 interactive examples. Each of these examples is 'live,' allowing the user to change parameters and experiment first-hand with real-world adaptive systems. This creates a powerful environment for learning through both visualization and experimentation.
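Adaptive linear regression of the kind introduced in the early chapters is commonly illustrated with the LMS (least mean squares) rule: nudge the weights a small step along the error times the input. The minimal sketch below uses synthetic data; the true weights, noise level and step size are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
true_w = np.array([2.0, -1.0, 0.5])                # unknown weights to be identified

w = np.zeros(3)                                    # adaptive weight estimate
mu = 0.05                                          # step size (made up)

for _ in range(2000):
    x = rng.standard_normal(3)                     # new input sample
    d = true_w @ x + 0.01 * rng.standard_normal()  # desired (noisy) output
    e = d - w @ x                                  # prediction error
    w += mu * e * x                                # LMS update: follow the error

print("estimated weights:", w.round(3))            # close to [2.0, -1.0, 0.5]
```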
Neural networks and neural dynamics are powerful approaches for the online solution of mathematical problems arising in many areas of science, engineering, and business. Compared with conventional gradient neural networks that only deal with static problems of constant coefficient matrices and vectors, the authors' new method called zeroing dynamics solves time-varying problems. Zeroing Dynamics, Gradient Dynamics, and Newton Iterations is the first book that shows how to accurately and efficiently solve time-varying problems in real-time or online using continuous- or discrete-time zeroing dynamics. The book brings together research in the developing fields of neural networks, neural dynamics, computer mathematics, numerical algorithms, time-varying computation and optimization, simulation and modeling, analog and digital hardware, and fractals. The authors provide a comprehensive treatment of the theory of both static and dynamic neural networks. Readers will discover how novel theoretical results have been successfully applied to many practical problems. The authors develop, analyze, model, simulate, and compare zeroing dynamics models for the online solution of numerous time-varying problems, such as root finding, nonlinear equation solving, matrix inversion, matrix square root finding, quadratic optimization, and inequality solving.
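The zeroing-dynamics idea can be sketched for one of the listed problems, time-varying matrix inversion: define the error E(t) = A(t)X(t) - I and impose dE/dt = -gamma*E, which yields an ordinary differential equation for X(t) that can be integrated forward in time. The toy A(t), the gain gamma and the Euler step below are illustrative assumptions, not models taken from the book.

```python
import numpy as np

gamma, dt, T = 10.0, 1e-3, 2.0        # design gain, Euler step, horizon (illustrative)

def A(t):
    """A simple time-varying, always-invertible 2x2 matrix (made up for the sketch)."""
    return np.array([[2.0 + np.sin(t), 0.5],
                     [0.5, 2.0 + np.cos(t)]])

def A_dot(t):
    return np.array([[np.cos(t), 0.0],
                     [0.0, -np.sin(t)]])

X = np.linalg.inv(A(0.0))             # start from the true inverse at t = 0
for k in range(int(T / dt)):
    t = k * dt
    E = A(t) @ X - np.eye(2)          # error to be zeroed
    # Zeroing design: dE/dt = -gamma*E  =>  A @ X_dot = -A_dot @ X - gamma * E
    X_dot = np.linalg.solve(A(t), -A_dot(t) @ X - gamma * E)
    X = X + dt * X_dot                # Euler integration step

print(np.linalg.norm(A(T) @ X - np.eye(2)))   # residual stays small while A(t) drifts
```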
You may like...
Research Anthology on Artificial Neural… by Information R Management Association (Hardcover, R13,686)
Visual Object Tracking with Deep Neural… by Pier Luigi Mazzeo, Srinivasan Ramakrishnan, … (Hardcover)
Biomedical and Business Applications… by Richard S Segall, Gao Niu (Hardcover, R7,022)
Research Anthology on Artificial Neural… by Information R Management Association (Hardcover, R13,702)
Fuzzy Systems - Theory and Applications by Constantin Volosencu (Hardcover)
Intelligent Analysis Of Fundus Images… by Yuanyuan Chen, Yi Zhang, … (Hardcover, R2,249)
Icle Publications Plc-Powered Data… by Polly Patrick, Angela Peery (Paperback)