This excellent reference for all those involved in neural networks research and application presents, in a single text, the necessary aspects of parallel implementation for all major artificial neural network models. The book details implementations on various processor architectures (ring, torus, etc.) built on different hardware platforms, ranging from large general-purpose parallel computers to custom-built MIMD machines using transputers and DSPs.
Neural networks such as artificial neural networks (ANNs) and convolutional neural networks (CNNs) are among the most commonly used machine learning algorithms and have been extensively used in the GIScience domain to explore nonlinear and complex geographic phenomena. However, few studies investigate the parameter settings of neural networks in GIScience. Moreover, the model performance of neural networks often depends on the parameter setting for a given dataset, and adjusting the parameter configuration increases the overall running time. Therefore, an automated approach is necessary to address these limitations in current studies. This book proposes an automated, spatially explicit hyperparameter optimization approach to identify optimal or near-optimal parameter settings for neural networks in the GIScience field. The approach also improves computing performance at both the model and computing levels. This book is written for researchers in GIScience as well as the social sciences.
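To make the blurb's central idea concrete, the sketch below shows a generic randomized hyperparameter search over a small multilayer perceptron with scikit-learn. It is only an illustration of automated hyperparameter tuning in general; the synthetic dataset, parameter ranges, and search budget are placeholder assumptions, and it does not implement the book's spatially explicit approach.

```python
# Illustrative only: generic random search over MLP hyperparameters.
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

param_distributions = {
    "hidden_layer_sizes": [(32,), (64,), (64, 32)],
    "alpha": [1e-4, 1e-3, 1e-2],          # L2 regularization strength
    "learning_rate_init": [1e-3, 1e-2],
}

search = RandomizedSearchCV(
    MLPClassifier(max_iter=500, random_state=0),
    param_distributions,
    n_iter=10,            # number of sampled configurations
    cv=3,                 # 3-fold cross-validation per configuration
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```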
Artificial Neural Networks for Renewable Energy Systems and Real-World Applications presents current trends for the solution of complex engineering problems in the application, modeling, analysis, and optimization of different energy systems and manufacturing processes. With growing research on the use of neural networks in specific industrial applications, this reference provides a single resource offering a broader perspective on ANNs in renewable energy systems and manufacturing processes. ANN-based methods have attracted the attention of scientists and researchers in different engineering and industrial disciplines, making this book a useful reference for all researchers and engineers interested in artificial networks, renewable energy systems, and manufacturing process analysis.
What happens in our brain when we make a decision? What triggers a neuron to send out a signal? What is the neural code? This textbook for advanced undergraduate and beginning graduate students provides a thorough and up-to-date introduction to the fields of computational and theoretical neuroscience. It covers classical topics, including the Hodgkin-Huxley equations and Hopfield model, as well as modern developments in the field such as Generalized Linear Models and decision theory. Concepts are introduced using clear step-by-step explanations suitable for readers with only a basic knowledge of differential equations and probabilities, and are richly illustrated by figures and worked-out examples. End-of-chapter summaries and classroom-tested exercises make the book ideal for courses or for self-study. The authors also give pointers to the literature and an extensive bibliography, which will prove invaluable to readers interested in further study.
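One of the classical topics mentioned, the Hopfield model, can be sketched in a few lines. The toy patterns and network size below are arbitrary illustrations, not material from the textbook:

```python
import numpy as np

# Toy Hopfield network: store binary (+1/-1) patterns with the Hebbian rule,
# then recover one of them from a corrupted cue by iterating the update rule.
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
n = patterns.shape[1]

W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)                      # no self-connections

state = np.array([1, -1, 1, -1, 1, 1])      # corrupted version of pattern 0
for _ in range(5):                          # synchronous updates
    state = np.where(W @ state >= 0, 1, -1)
print(state)                                # recovers patterns[0]
```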
This interdisciplinary graduate text gives a full, explicit, coherent and up-to-date account of the modern theory of neural information processing systems and is aimed at students with an undergraduate degree in any quantitative discipline (e.g. computer science, physics, engineering, biology, or mathematics). The book covers all the major theoretical developments from the 1940s to the present day, using a uniform and rigorous style of presentation and of mathematical notation. The text starts with simple model neurons and moves gradually to the latest advances in neural processing. An ideal textbook for postgraduate courses in artificial neural networks, the material has been class-tested. It is fully self-contained and includes introductions to the various discipline-specific mathematical tools as well as multiple exercises on each topic.
Data-driven computational neuroscience facilitates the transformation of data into insights into the structure and functions of the brain. This introduction for researchers and graduate students is the first in-depth, comprehensive treatment of statistical and machine learning methods for neuroscience. The methods are demonstrated through case studies of real problems to empower readers to build their own solutions. The book covers a wide variety of methods, including supervised classification with non-probabilistic models (nearest-neighbors, classification trees, rule induction, artificial neural networks and support vector machines) and probabilistic models (discriminant analysis, logistic regression and Bayesian network classifiers), meta-classifiers, multi-dimensional classifiers and feature subset selection methods. Other parts of the book are devoted to association discovery with probabilistic graphical models (Bayesian networks and Markov networks) and spatial statistics with point processes (complete spatial randomness and cluster, regular and Gibbs processes). Cellular, structural, functional, medical and behavioral neuroscience levels are considered.
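As a flavour of the supervised classifiers listed in the blurb, the snippet below cross-validates a few of them on a synthetic dataset standing in for real neuroscience data; it is a generic illustration, not one of the book's case studies:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Synthetic stand-in for a labelled neuroscience dataset.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

classifiers = {
    "nearest neighbours": KNeighborsClassifier(n_neighbors=5),
    "classification tree": DecisionTreeClassifier(random_state=0),
    "logistic regression": LogisticRegression(max_iter=1000),
    "support vector machine": SVC(kernel="rbf"),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validation
    print(f"{name}: {scores.mean():.3f}")
```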
A solid introduction to the concepts and advanced applications of neural networks. Since the 1980s, the field of neural networks has undergone exponential growth. Robots in manufacturing, mining, agriculture, space and ocean exploration, and health sciences are just a few examples of the challenging applications where human-like attributes such as cognition and intelligence are playing an important role. Neural networks and related areas such as fuzzy logic and soft computing in general are also contributing to complex decision-making in such fields as health sciences, management, economics, politics, law, and administration. In the future, robots could evolve into electro-mechanical systems with cognitive skills approaching human intelligence. With a fascinating blend of heuristic concepts and mathematical rigor, Static and Dynamic Neural Networks: From Fundamentals to Advanced Theory outlines the basic concepts behind neural networks and leads the reader onward to more advanced theory and applications. The text is pedagogically sound and clearly written.
Thoroughly surveying the many-faceted and increasingly influential field of neural networks, this is a valuable reference for both practitioner and student.
Develop neural network applications using the Java environment. After learning the rules involved in neural network processing, this second edition shows you how to manually process your first neural network example. The book covers the internals of forward and back propagation and helps you understand the main principles of neural network processing. You will also learn how to prepare the data to be used in neural network development, and you will be able to apply various techniques of data preparation to many unconventional tasks. This book discusses the practical aspects of using Java for neural network processing. You will learn how to use the Encog Java framework for processing large-scale neural network applications, and how to use neural networks for the approximation of non-continuous functions. In addition to using neural networks for regression, this second edition shows you how to use neural networks for computer vision. It focuses on image recognition, including the classification of handwritten digits, input data preparation and conversion, and building the conversion program, and it covers related topics such as network architecture, program code, programming logic, and execution. The step-by-step approach taken in the book includes plenty of examples, diagrams, and screenshots to help you grasp the concepts quickly and easily. What you will learn: use Java for the development of neural network applications; prepare data for many different tasks; carry out some unusual neural network processing; use a neural network to process non-continuous functions; and develop a program that recognizes handwritten digits. Who this book is for: intermediate machine learning and deep learning developers who are interested in switching to Java.
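The digit-classification workflow the blurb describes (prepare the data, choose a network architecture, train, evaluate) can be sketched generically. The snippet below is a Python/scikit-learn analogue of that workflow, offered only for orientation; it is not the book's Java/Encog code, and the chosen architecture is an arbitrary assumption:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Load 8x8 handwritten digits and split into training and test sets.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data / 16.0,          # scale pixel values to [0, 1]
    digits.target,
    test_size=0.2,
    random_state=0,
)

# One hidden layer of 64 units; architecture chosen arbitrarily for the sketch.
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
net.fit(X_train, y_train)
print("test accuracy:", net.score(X_test, y_test))
```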
Recently artificial-intelligence-based techniques (fuzzy logic, neural networks, fuzzy-neural networks, genetic algorithms, etc) have received increased attention world-wide and at present two industrial drives incorporate some form of artificial intelligence. This is the first comprehensive book which discusses numerous AI applications to electrical machines and drives. The drives considered are: d.c. drives, induction motor drives, synchronous motor drives, and switched reluctance motor drives. Sensorless drives are also considered. It is essential reading for anyone interested in acquiring a solid background in AI-based electrical machines and drives. It presents a detailed and unified mathematical and physical treatment.
This volume is the first diverse and comprehensive treatment of algorithms and architectures for the realization of neural network systems. It presents techniques and diverse methods in numerous areas of this broad subject. The book covers major neural network systems structures for achieving effective systems, and illustrates them with examples.
This volume covers practical and effective implementation techniques, including recurrent methods, Boltzmann machines, constructive learning with methods for the reduction of complexity in neural network systems, modular systems, associative memory, neural network design based on the concept of the Inductive Logic Unit, and a comprehensive treatment of implementations in the area of data classification. Numerous examples enhance the text. Practitioners, researchers, and students in engineering and computer science will find Implementation Techniques a comprehensive and powerful reference.
The aim of this book is to describe the types of computation that can be performed by biologically plausible neural networks, and to show how these may be implemented in different systems in the brain. Neural Networks and Brain Function is structured in three sections, each of which addresses a different need in the market. The first section introduces and describes the operation of several fundamental types of neural network. The second section describes real neural networks in several brain systems, and shows how it is becoming possible to construct theories about how some parts of the brain work; it also provides an indication of the different neuroscience and neurocomputation techniques that will need to be combined to ensure further rapid progress in understanding how parts of the brain work. The third section, a collection of appendices, introduces the more formal quantitative approaches to many of the networks described. This is a clearly written and thoughtfully structured introduction to a fascinating and complex field of neuroscience. It will be a key text for researchers, graduate students and advanced undergraduates in the field, particularly for those without a background in computer science.
This two-volume set LNCS 12861 and LNCS 12862 constitutes the refereed proceedings of the 16th International Work-Conference on Artificial Neural Networks, IWANN 2021, held virtually in June 2021. The 85 full papers presented in this two-volume set were carefully reviewed and selected from 134 submissions. The papers are organized in topical sections on Deep Learning for Biomedicine, Intelligent Computing Solutions for SARS-CoV-2 Covid-19, Advanced Topics in Computational Intelligence, Biosignals Processing, Neuro-Engineering, and much more.
The aim of pattern theory is to create mathematical knowledge representations of complex systems, analyze the mathematical properties of the resulting regular structures, and to apply them to practically occurring patterns in nature and the man-made world. Starting from an algebraic formulation of such representations they are studied in terms of their topological, dynamical and probabilistic aspects. Patterns are expressed through their typical behavior as well as through their variability around their typical form. Employing the representations (regular structures) algorithms are derived for the understanding, recognition, and restoration of observed patterns. The algorithms are investigated through computer experiments. The book is intended for statisticians and mathematicians with an interest in image analysis and pattern theory.
This 1996 book is a reliable account of the statistical framework for pattern recognition and machine learning. With unparalleled coverage and a wealth of case-studies this book gives valuable insight into both the theory and the enormously diverse applications (which can be found in remote sensing, astrophysics, engineering and medicine, for example). So that readers can develop their skills and understanding, many of the real data sets used in the book are available from the author's website: www.stats.ox.ac.uk/~ripley/PRbook/. For the same reason, many examples are included to illustrate real problems in pattern recognition. Unifying principles are highlighted, and the author gives an overview of the state of the subject, making the book valuable to experienced researchers in statistics, machine learning/artificial intelligence and engineering. The clear writing style means that the book is also a superb introduction for non-specialists.
From the Foreword: "While large-scale machine learning and data mining have greatly impacted a range of commercial applications, their use in the field of Earth sciences is still in the early stages. This book, edited by Ashok Srivastava, Ramakrishna Nemani, and Karsten Steinhaeuser, serves as an outstanding resource for anyone interested in the opportunities and challenges for the machine learning community in analyzing these data sets to answer questions of urgent societal interest...I hope that this book will inspire more computer scientists to focus on environmental applications, and Earth scientists to seek collaborations with researchers in machine learning and data mining to advance the frontiers in Earth sciences." --Vipin Kumar, University of Minnesota Large-Scale Machine Learning in the Earth Sciences provides researchers and practitioners with a broad overview of some of the key challenges in the intersection of Earth science, computer science, statistics, and related fields. It explores a wide range of topics and provides a compilation of recent research in the application of machine learning in the field of Earth Science. Making predictions based on observational data is a theme of the book, and the book includes chapters on the use of network science to understand and discover teleconnections in extreme climate and weather events, as well as using structured estimation in high dimensions. The use of ensemble machine learning models to combine predictions of global climate models using information from spatial and temporal patterns is also explored. The second part of the book features a discussion on statistical downscaling in climate with state-of-the-art scalable machine learning, as well as an overview of methods to understand and predict the proliferation of biological species due to changes in environmental conditions. The problem of using large-scale machine learning to study the formation of tornadoes is also explored in depth. The last part of the book covers the use of deep learning algorithms to classify images that have very high resolution, as well as the unmixing of spectral signals in remote sensing images of land cover. The authors also apply long-tail distributions to geoscience resources, in the final chapter of the book.
Machine vision is the study of how to build intelligent machines which can understand the environment by vision. Among many existing books on this subject, this book is unique in that the entire volume is devoted to computational problems, which most books do not deal with. One of the main subjects of this book is the mathematics underlying all vision problems, projective geometry in particular. Since projective geometry has been developed by mathematicians without any regard to machine vision applications, our first attempt is to 'tune' it into the form applicable to machine vision problems. The resulting formulation is termed computational projective geometry and is applied to 3-D shape analysis, camera calibration, road scene analysis, 3-D motion analysis, optical flow analysis, and conic image analysis. A salient characteristic of machine vision problems is that data are not necessarily accurate. Hence, computational procedures defined by using exact relationships may break down if blindly applied to inaccurate data. In this book, special emphasis is put on robustness, which means that the computed result is not only exact when the data are accurate but also is expected to give a good approximation in the presence of noise. The analysis of how the computation is affected by the inaccuracy of the data is also crucial. Statistical analysis of computations based on image data is also one of the main subjects of this book.
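A basic tool of the computational projective geometry described here is the use of homogeneous coordinates, in which a planar projective transformation is a 3x3 matrix applied to augmented points. The small sketch below illustrates only that mechanism; the particular matrix and points are made up for the example:

```python
import numpy as np

def apply_homography(H, points):
    """Apply a 3x3 projective transformation to an array of 2-D points."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]                    # divide by w

# Arbitrary example homography (a mild perspective distortion).
H = np.array([[1.0, 0.2, 5.0],
              [0.0, 1.1, -3.0],
              [1e-3, 0.0, 1.0]])
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
print(apply_homography(H, square))
```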
Point-to-point vs. hub-and-spoke. Questions of network design are real and involve many billions of dollars. Yet little is known about optimizing design - nearly all work concerns optimizing flow assuming a given design. This foundational book tackles optimization of network structure itself, deriving comprehensible and realistic design principles. With fixed material cost rates, a natural class of models implies the optimality of direct source-destination connections, but considerations of variable load and environmental intrusion then enforce trunking in the optimal design, producing an arterial or hierarchical net. Its determination requires a continuum formulation, which can however be simplified once a discrete structure begins to emerge. Connections are made with the masterly work of Bendsoe and Sigmund on optimal mechanical structures and also with neural, processing and communication networks, including those of the Internet and the Worldwide Web. Technical appendices are provided on random graphs and polymer models and on the Klimov index.
Nowadays, many aspects of electrical and electronic engineering are essentially applications of DSP. This is due to the focus on processing information in the form of digital signals, using certain DSP hardware designed to execute software. Fundamental topics in digital signal processing are introduced with theory, analytical tables, and applications with simulation tools. The book provides a collection of solved problems on digital signal processing and statistical signal processing. The solutions are based directly on the mathematical formulas given in extensive tables throughout the book, so the reader can solve practical problems on signal processing quickly and efficiently. Features: explains how applications of DSP can be implemented in programming environments designed for real-time systems, e.g., biomedical signal analysis and medical image processing; pairs theory with basic concepts and supporting analytical tables; includes an extensive collection of solved problems throughout the text; fosters the ability to solve practical problems on signal processing without focusing on extended theory; and covers the modeling process and addresses broader fundamental issues.
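As a small, self-contained example of the kind of problem such DSP material supports (not one of the book's solved problems), the snippet below designs a low-pass FIR filter and applies it to a noisy sinusoid; the sampling rate, cutoff, and signal are arbitrary choices:

```python
import numpy as np
from scipy.signal import firwin, lfilter

fs = 1000.0                                   # sampling rate in Hz
t = np.arange(0, 1.0, 1 / fs)
signal = np.sin(2 * np.pi * 5 * t)            # 5 Hz component to keep
noise = 0.5 * np.sin(2 * np.pi * 120 * t)     # 120 Hz component to remove

# 101-tap low-pass FIR filter with a 30 Hz cutoff.
taps = firwin(numtaps=101, cutoff=30, fs=fs)
filtered = lfilter(taps, 1.0, signal + noise)

# Check that the 120 Hz component is strongly attenuated.
spectrum = np.abs(np.fft.rfft(filtered))
freqs = np.fft.rfftfreq(len(filtered), d=1 / fs)
print("amplitude near 120 Hz after filtering:",
      spectrum[np.argmin(np.abs(freqs - 120))] * 2 / len(filtered))
```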
This book uncovers the stakes and possibilities of handling pandemic diseases with the help of Computational Intelligence, using cases and applications from the current Covid-19 pandemic. The book chapters will focus on the application of CI and its related fields in managing different aspects of Covid-19, including modelling of the disease spread, data-driven prediction, identification of disease hotspots, and medical decision support.
State-of-the-art coverage of Kalman filter methods for the design of neural networks. This self-contained book consists of seven chapters by expert contributors that discuss Kalman filtering as applied to the training and use of neural networks. Although the traditional approach to the subject is almost always linear, this book recognizes and deals with the fact that real problems are most often nonlinear. The first chapter offers an introductory treatment of Kalman filters with an emphasis on basic Kalman filter theory, the Rauch-Tung-Striebel smoother, and the extended Kalman filter; subsequent chapters build on this foundation.
Each chapter, with the exception of the introduction, includes illustrative applications of the learning algorithms described here, some of which involve the use of simulated and real-life data. Kalman Filtering and Neural Networks serves as an expert resource for researchers in neural networks and nonlinear dynamical systems.
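For readers new to the topic, the basic predict/update cycle of a linear Kalman filter, here estimating a constant from noisy measurements with made-up noise settings, looks like the sketch below; it illustrates only the introductory material, not the extended Kalman filter training of neural network weights developed in the book:

```python
import numpy as np

# Basic scalar Kalman filter: estimate a constant value from noisy readings.
rng = np.random.default_rng(0)
true_value = 4.0
measurements = true_value + rng.normal(0.0, 1.0, size=50)   # noisy sensor

x, P = 0.0, 1.0          # initial state estimate and its variance
Q, R = 1e-5, 1.0         # process and measurement noise variances

for z in measurements:
    # Predict: the state is assumed constant, so only the variance grows.
    P = P + Q
    # Update with measurement z.
    K = P / (P + R)                # Kalman gain
    x = x + K * (z - x)
    P = (1.0 - K) * P

print("estimate:", x)              # close to 4.0
```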
This book presents the necessary and essential backgrounds of fuzzy set theory and linear programming, particularly a broad range of common Fuzzy Linear Programming (FLP) models and related, convenient solution techniques. These models and methods belong to three common classes of fuzzy linear programming, namely: (i) FLP problems in which all coefficients are fuzzy numbers, (ii) FLP problems in which the right-hand-side vectors and the decision variables are fuzzy numbers, and (iii) FLP problems in which the cost coefficients, the right-hand-side vectors and the decision variables are fuzzy numbers. The book essentially generalizes the well-known solution algorithms used in linear programming to the fuzzy environment. Accordingly, it can be used not only as a textbook, teaching material or reference book for undergraduate and graduate students in courses on applied mathematics, computer science, management science, industrial engineering, artificial intelligence, fuzzy information processes, and operations research, but can also serve as a reference book for researchers in these fields, especially those engaged in optimization and soft computing. For textbook purposes, it also includes simple and illustrative examples to help readers who are new to the field.
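One common, simplified way to handle fuzzy coefficients, shown purely for illustration, is to defuzzify triangular fuzzy numbers by their centroids and solve the resulting crisp linear program; the toy objective and constraints below are invented for the example, and this shortcut is not a substitute for the FLP algorithms the book develops:

```python
from scipy.optimize import linprog

# Triangular fuzzy number (a, b, c): support [a, c], peak at b.
def centroid(tri):
    return sum(tri) / 3.0

# Maximize c1*x1 + c2*x2 with fuzzy profit coefficients.
fuzzy_profits = [(3, 4, 5), (1, 2, 3)]
c = [-centroid(t) for t in fuzzy_profits]       # linprog minimizes, so negate

A_ub = [[1, 2],      # resource constraint 1
        [3, 1]]      # resource constraint 2
b_ub = [14, 18]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("x =", res.x, "objective =", -res.fun)
```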
This book offers a rigorous mathematical analysis of fuzzy geometrical ideas. It demonstrates the use of fuzzy points for interpreting an imprecise location and for representing an imprecise line by a fuzzy line. Further, it shows that a fuzzy circle can be used to represent a circle when its description is not known precisely, and that fuzzy conic sections can be used to describe imprecise conic sections. Moreover, it discusses fundamental notions on fuzzy geometry, including the concepts of fuzzy line segment and fuzzy distance, as well as key fuzzy operations, and includes several diagrams and numerical illustrations to make the topic more understandable. The book fills an important gap in the literature, providing the first comprehensive reference guide on the fuzzy mathematics of imprecise image subsets and imprecise geometrical objects. Mainly intended for researchers active in fuzzy optimization, it also includes chapters relevant for those working on fuzzy image processing and pattern recognition. Furthermore, it is a valuable resource for beginners interested in basic operations on fuzzy numbers, and can be used in university courses on fuzzy geometry, dealing with imprecise locations, imprecise lines, imprecise circles, and imprecise conic sections.
This introduction to spiking neurons can be used in advanced-level courses in computational neuroscience, theoretical biology, neural modeling, biophysics, or neural networks. It focuses on phenomenological approaches rather than detailed models in order to provide the reader with a conceptual framework. The authors formulate the theoretical concepts clearly without many mathematical details. While the book contains standard material for courses in computational neuroscience, neural modeling, or neural networks, it also provides an entry to current research. No prior knowledge beyond undergraduate mathematics is required.
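As a taste of the phenomenological spiking-neuron models the book treats, a leaky integrate-and-fire neuron can be simulated in a few lines; the parameter values below are arbitrary illustrations rather than values from the text:

```python
# Leaky integrate-and-fire neuron driven by a constant input current.
dt, T = 0.1, 100.0            # time step and duration (ms)
tau_m = 10.0                  # membrane time constant (ms)
V_rest, V_reset, V_thresh = -65.0, -65.0, -50.0   # mV
R, I = 10.0, 2.0              # membrane resistance (MOhm) and input current (nA)

V = V_rest
spike_times = []
for step in range(int(T / dt)):
    dV = (-(V - V_rest) + R * I) / tau_m
    V += dt * dV
    if V >= V_thresh:         # threshold crossing: emit a spike and reset
        spike_times.append(step * dt)
        V = V_reset

print("number of spikes:", len(spike_times))
```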