|
Artificial Neural Networks and Machine Learning -- ICANN 2014 - 24th International Conference on Artificial Neural Networks, Hamburg, Germany, September 15-19, 2014, Proceedings (Paperback, 2014 ed.)
Stefan Wermter, Cornelius Weber, Wlodzislaw Duch, Timo Honkela, Petia Koprinkova-Hristova, …
|
R1,739
|
The book constitutes the proceedings of the 24th International
Conference on Artificial Neural Networks, ICANN 2014, held in
Hamburg, Germany, in September 2014. The 107 papers included in the
proceedings were carefully reviewed and selected from 173
submissions. The papers focus on the following topics:
recurrent networks; competitive learning and self-organisation;
clustering and classification; trees and graphs; human-machine
interaction; deep networks; theory; reinforcement learning and
action; vision; supervised learning; dynamical models and time
series; neuroscience; and applications.
|
Artificial Neural Networks and Machine Learning -- ICANN 2013 - 23rd International Conference on Artificial Neural Networks, Sofia, Bulgaria, September 10-13, 2013, Proceedings (Paperback, 2013 ed.)
Valeri Mladenov, Petia Koprinkova-Hristova, Günther Palm, Alessandro Villa, Bruno Apolloni, …
|
R1,673
|
The book constitutes the proceedings of the 23rd International
Conference on Artificial Neural Networks, ICANN 2013, held in
Sofia, Bulgaria, in September 2013. The 78 papers included in the
proceedings were carefully reviewed and selected from 128
submissions. The papers focus on the following topics:
neurofinance, graphical network models, brain-machine interfaces,
evolutionary neural networks, neurodynamics, complex systems,
neuroinformatics, neuroengineering, hybrid systems, computational
biology, neural hardware, bioinspired embedded systems, and
collective intelligence.
The present collection of papers forms the Proceedings of the First
Meeting on Brain Theory, held October 1-4, 1984 at the
International Centre for Theoretical Physics in Trieste, Italy. The
Meeting was organized with the aim of bringing together brain
theorists who are willing to put their own research in the
perspective of the general development of neuroscience. Such a
meeting was considered necessary since the explosion of
experimental work in neuroscience during the last decades has not been
accompanied by an adequate development on the theoretical side. The
intensity of the discussions during the Meeting is probably
reflected best in the report of the organizers, reprinted here
following the Preface. During the Meeting it was decided that a
workshop of this kind should be repeated at regular intervals of
approximately 2 years. The International Centre for Theoretical
Physics in Trieste has kindly agreed to act as host for future
meetings. The present Meeting was supported by grants from the
International Centre for Theoretical Physics and the International
School for Advanced Studies in Trieste, IBM-Germany through the
"Stifterverband für die Deutsche Wissenschaft" and the Max
Planck Institute for Biological Cybernetics.
The book offers a new approach to information theory that is more
general than the classical approach by Shannon. The classical
definition of information is given for an alphabet of symbols or
for a set of mutually exclusive propositions (a partition of the
probability space Ω) with corresponding probabilities adding up to
1. The new definition is given for an arbitrary cover of Ω, i.e. for
a set of possibly overlapping propositions. The generalized
information concept is called novelty and it is accompanied by two
new concepts derived from it, designated as information and
surprise, which describe "opposite" versions of novelty,
information being related more to classical information theory and
surprise being related more to the classical concept of statistical
significance. In the discussion of these three concepts and their
interrelations several properties or classes of covers are defined,
which turn out to be lattices. The book also presents applications
of these new concepts, mostly in statistics and in neuroscience.
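The difference between a partition and an overlapping cover can be sketched numerically. The following is a minimal illustration, assuming (as one plausible reading of the generalized definition) that the novelty of an outcome under a cover is the largest classical surprisal over the sets in the cover containing that outcome; the function names and the toy die example are not from the book.

```python
import math

def surprisal(p):
    """Classical Shannon surprisal of an event with probability p, in bits."""
    return -math.log2(p)

def novelty(outcome, cover, prob):
    """Illustrative generalized novelty: the largest surprisal over all
    sets in the (possibly overlapping) cover that contain the outcome.
    For a partition this reduces to the classical surprisal of the
    unique cell containing the outcome."""
    return max(surprisal(prob(a)) for a in cover if outcome in a)

# Uniform distribution on a six-sided die.
def prob(a):
    return len(a) / 6

# A partition: mutually exclusive cells whose probabilities sum to 1.
partition = [frozenset({1, 2}), frozenset({3, 4}), frozenset({5, 6})]
# A cover: the first two sets overlap in the outcome 2.
cover = [frozenset({1, 2, 3}), frozenset({2, 4, 6}), frozenset({5})]

print(novelty(2, partition, prob))  # -log2(1/3) ≈ 1.585 bits
print(novelty(2, cover, prob))      # both containing sets have p = 1/2 → 1 bit
```

For the partition, novelty coincides with classical information; for the cover, the overlap means the value depends on which containing set is most improbable.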
|
Artificial Neural Networks and Machine Learning -- ICANN 2012 - 22nd International Conference on Artificial Neural Networks, Lausanne, Switzerland, September 11-14, 2012, Proceedings, Part I (Paperback, 2012 ed.)
Alessandro Villa, Wlodzislaw Duch, Peter Erdi, Francesco Masulli, Günther Palm
|
R1,706
|
The two-volume set LNCS 7552 + 7553 constitutes the proceedings of
the 22nd International Conference on Artificial Neural Networks,
ICANN 2012, held in Lausanne, Switzerland, in September 2012. The
162 papers included in the proceedings were carefully reviewed and
selected from 247 submissions. They are organized in topical
sections named: theoretical neural computation; information and
optimization; from neurons to neuromorphism; spiking dynamics; from
single neurons to networks; complex firing patterns; movement and
motion; from sensation to perception; object and face recognition;
reinforcement learning; Bayesian and echo state networks; recurrent
neural networks and reservoir computing; coding architectures;
interacting with the brain; swarm intelligence and decision-making;
multilayer perceptrons and kernel networks; training and learning;
inference and recognition; support vector machines; self-organizing
maps and clustering; clustering, mining and exploratory analysis;
bioinformatics; and time series and forecasting.
|
Artificial Neural Networks and Machine Learning -- ICANN 2012 - 22nd International Conference on Artificial Neural Networks, Lausanne, Switzerland, September 11-14, 2012, Proceedings, Part II (Paperback, 2012 ed.)
Alessandro Villa, Wlodzislaw Duch, Peter Erdi, Francesco Masulli, Günther Palm
|
R1,658
|
The two-volume set LNCS 7552 + 7553 constitutes the proceedings of
the 22nd International Conference on Artificial Neural Networks,
ICANN 2012, held in Lausanne, Switzerland, in September 2012. The
162 papers included in the proceedings were carefully reviewed and
selected from 247 submissions. They are organized in topical
sections named: theoretical neural computation; information and
optimization; from neurons to neuromorphism; spiking dynamics; from
single neurons to networks; complex firing patterns; movement and
motion; from sensation to perception; object and face recognition;
reinforcement learning; Bayesian and echo state networks; recurrent
neural networks and reservoir computing; coding architectures;
interacting with the brain; swarm intelligence and decision-making;
multilayer perceptrons and kernel networks; training and learning;
inference and recognition; support vector machines; self-organizing
maps and clustering; clustering, mining and exploratory analysis;
bioinformatics; and time series and forecasting.
This book presents research performed as part of the EU project on
biomimetic multimodal learning in a mirror neuron-based robot
(MirrorBot) and contributions presented at the International
AI-Workshop on NeuroBotics. The overall aim of the book is to
present a broad spectrum of current research into biomimetic neural
learning for intelligent autonomous robots. There is a need for a
new type of robot which is inspired by nature and so performs in a
more flexible, learned manner than current robots. This new type of
robot is driven by recent new theories and experiments in
neuroscience indicating that a biological and neuroscience-oriented
approach could lead to new life-like robotic systems. The book
focuses on some of the research progress made in the MirrorBot
project which uses concepts from mirror neurons as a basis for the
integration of vision, language and action. In this book we show
the development of new techniques using cell assemblies,
associative neural networks, and Hebbian-type learning in order to
associate vision, language and motor concepts. We have developed
biomimetic multimodal learning and language instruction in a robot
to investigate the task of searching for objects. As well as the
research performed in this area for the MirrorBot project, the
second part of this book incorporates significant contributions from
other research in the field of biomimetic robotics. This second part
of the book concentrates on the progress made in neuroscience
inspired robotic learning approaches (in short: NeuroBotics).
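Associative neural networks with Hebbian-type learning, as mentioned above, can be sketched in a few lines. The following is an illustrative, simplified binary associative memory with clipped Hebbian learning (in the spirit of Willshaw/Palm-style memories); it does not reproduce the architecture used in the MirrorBot project, and all names and parameters here are invented for the example.

```python
import numpy as np

def store(pairs, n_in, n_out):
    """Clipped Hebbian learning: the binary weight matrix is the OR of
    the outer products of all stored (input, output) pattern pairs."""
    W = np.zeros((n_out, n_in), dtype=bool)
    for x, y in pairs:
        W |= np.outer(y, x).astype(bool)
    return W

def recall(W, x):
    """Retrieve the associated pattern: compute dendritic sums and
    threshold at the number of active input units."""
    s = W @ x
    k = int(x.sum())
    return (s >= k).astype(int)

n = 8
# Two sparse binary pattern pairs, e.g. associating a "visual" code
# with a "motor" code (purely illustrative).
x1 = np.array([1, 1, 0, 0, 0, 0, 0, 0]); y1 = np.array([0, 0, 0, 0, 1, 1, 0, 0])
x2 = np.array([0, 0, 1, 1, 0, 0, 0, 0]); y2 = np.array([0, 0, 0, 0, 0, 0, 1, 1])

W = store([(x1, y1), (x2, y2)], n, n)
print(recall(W, x1))  # recovers y1
print(recall(W, x2))  # recovers y2
```

With sparse patterns, such memories can store many associations in one weight matrix, which is one reason cell-assembly models favour sparse codes.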
|
KI 2004: Advances in Artificial Intelligence - 27th Annual German Conference in AI, KI 2004, Ulm, Germany, September 20-24, 2004, Proceedings (Paperback, 2004 ed.)
Susanne Biundo, Thom Fruhwirth, Günther Palm
|
R1,777
|
KI 2004 was the 27th edition of the annual German Conference on
Artificial Intelligence, which traditionally brings together academic
and industrial researchers from all areas of AI and which enjoys
increasing international attendance. KI 2004 received 103
submissions from 26 countries. This volume contains the 30 papers
that were finally selected for presentation at the conference. The
papers cover quite a broad spectrum of "classical" subareas of AI,
like natural language processing, neural networks, knowledge
representation, reasoning, planning, and search. When looking at
this year's contributions, it was exciting to observe that there
was a strong trend towards actual real-world applications of AI
technology. A majority of contributions resulted from or were
motivated by applications in a variety of areas. Examples include
applications of planning, where the technology is being exploited
for taxiway traffic control and game playing; natural language
processing and knowledge representation are enabling advanced
Web-based information processing; and the integration of results
from automated reasoning, neural networks and machine perception
into robotics leads to significantly improved capabilities of
autonomous systems. The technical programme of KI 2004 was
highlighted by invited talks from outstanding researchers in the
areas of automated reasoning, robot planning, constraint reasoning,
machine learning, and Semantic Web: Jörg Siekmann (DFKI and
University of Saarland, Saarbrücken), Malik Ghallab (LAAS-CNRS,
Toulouse), François Fages (INRIA Rocquencourt), Martin Riedmiller
(University of Osnabrück), and Wolfgang Wahlster (DFKI and
University of Saarland, Saarbrücken). Their invited papers are also
presented in this volume.
The focus of the papers presented in these proceedings is on
employing various methodologies and approaches for solving
real-life problems. Although the mechanisms that the human brain
employs to solve problems are not yet completely known, we do have
good insight into the functional processing performed by the human
mind. On the basis of the understanding of these natural processes,
scientists in the field of applied intelligence have developed
multiple types of artificial processes, and have employed them
successfully in solving real-life problems. The types of approaches
used to solve problems are dependent on both the nature of the
problem and the expected outcome. While knowledge-based systems are
useful for solving problems in well-understood domains with
relatively stable environments, the approach may fail when the
domain knowledge is either not very well understood or changing
rapidly. The techniques of data discovery through data mining will
help to alleviate some problems faced by knowledge-based approaches
to solving problems in such domains. Research and development in
the area of artificial intelligence are influenced by opportunity,
needs, and the availability of resources. The rapid advancement of
Internet technology and the trend of increasing bandwidths provide
an opportunity and a need for intelligent information processing,
thus creating an excellent opportunity for agent-based computations
and learning. Over 40% of the papers appearing in the conference
proceedings focus on the area of machine learning and intelligent
agents - clear evidence of growing interest in this area.
This revised edition offers an approach to information theory that
is more general than the classical approach of Shannon.
Classically, information is defined for an alphabet of symbols or
for a set of mutually exclusive propositions (a partition of the
probability space Ω) with corresponding probabilities adding up to
1. The new definition is given for an arbitrary cover of Ω, i.e. for
a set of possibly overlapping propositions. The generalized
information concept is called novelty and it is accompanied by two
concepts derived from it, designated as information and surprise,
which describe "opposite" versions of novelty, information being
related more to classical information theory and surprise being
related more to the classical concept of statistical significance.
In the discussion of these three concepts and their interrelations
several properties or classes of covers are defined, which turn out
to be lattices. The book also presents applications of these
concepts, mostly in statistics and in neuroscience.
This book brings together a selection of papers by George Gerstein,
representing his long-term endeavor of making neuroscience into a
more rigorous science inspired by physics, where he had his roots.
Professor Gerstein was many years ahead of the field, consistently
striving for quantitative analyses, mechanistic models, and
conceptual clarity. In doing so, he pioneered Computational
Neuroscience, many years before the term itself was born. The
overarching goal of George Gerstein's research was to understand
the functional organization of neuronal networks in the brain. The
editors of this book have compiled a selection of George Gerstein's
many seminal contributions to neuroscience--be they experimental,
theoretical or computational--into a single, comprehensive volume.
The aim is to provide readers with a fresh introduction to these
various concepts in the original literature. The volume is
organized in a series of chapters by subject, ordered in time, each
one containing one or more of George Gerstein's papers.
In the new edition of Neural Assemblies, the author places his
original ideas and motivations within the framework of modern and
cognitive neuroscience and gives a short and focused overview of
the development of computational neuroscience and artificial neural
networks over the last 40 years. In this book the author develops a
theory of how the human brain might function. Starting with a
motivational introduction to the brain as an organ of information
processing, he presents a computational perspective on the basic
concepts and ideas of neuroscience research on the underlying
principles of brain function. In addition, the reader is introduced
to the most important methods from computer science and
mathematical modeling that are required for a computational
understanding of information processing in the brain. Written by an
expert in the field of neural information processing, this book
offers a personal historical view of the development of artificial
intelligence, artificial neural networks, and computational
cognitive neuroscience over the last 40 years, with a focus on the
realization of higher cognitive functions rather than more
peripheral sensory or motor organization. The book is therefore
aimed at students and researchers who want to understand how the
basic neuroscientific and computational concepts in the study of
brain function have changed over the last decades.
|