Reinforcement learning is the problem faced by an agent that must
learn behavior through trial-and-error interactions with a dynamic
environment. Usually, the problem to be solved contains subtasks
that repeat in different regions of the state space. Without any
guidance, an agent has to learn the solution of each subtask
instance independently, which in turn degrades the performance of
the learning process. In this work, we propose two novel approaches
for building the connections between different regions of the
search space. The first approach efficiently discovers abstractions
in the form of conditionally terminating sequences and represents
these abstractions compactly as a single tree structure; this
structure is then used to determine the actions to be executed by
the agent. In the second approach, a similarity function between
states is defined based on the number of common action sequences;
by using this similarity function, updates on the action-value
function of a state are reflected to all similar states, allowing
experience acquired during learning to be applied in a broader
context. The effectiveness of both approaches is demonstrated
empirically over various domains.
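The first approach can be illustrated with a minimal sketch: a tree (trie) whose edges are actions and whose nodes record the states under which the sequence may continue, a toy stand-in for the book's compact representation of conditionally terminating sequences. All class and method names here are hypothetical, and states are plain hashable labels rather than real environment observations.

```python
class SeqTreeNode:
    """One node of the sequence tree: an action edge per child, plus the
    continuation condition (the set of states in which the prefix may proceed)."""
    def __init__(self):
        self.children = {}      # action -> SeqTreeNode
        self.condition = set()  # states under which this action was taken

class SequenceTree:
    def __init__(self):
        self.root = SeqTreeNode()
        self.cursor = self.root  # current position while executing a sequence

    def insert(self, states, actions):
        """Store one conditionally terminating sequence:
        action actions[i] was executed in state states[i]."""
        node = self.root
        for s, a in zip(states, actions):
            node = node.children.setdefault(a, SeqTreeNode())
            node.condition.add(s)

    def step(self, state):
        """Advance along the tree if some child's condition holds in `state`
        and return the corresponding action; otherwise the sequence
        conditionally terminates and the cursor resets to the root."""
        for a, child in self.cursor.children.items():
            if state in child.condition:
                self.cursor = child
                return a
        self.cursor = self.root
        return None
```

Because all stored sequences share prefixes in one tree, the agent consults a single structure at each step instead of matching every discovered abstraction separately.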
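The second approach can likewise be sketched in a tabular setting: after a standard Q-learning update, the change is reflected, weighted by similarity, to every sufficiently similar state. As a stand-in for the book's similarity function over common action sequences, this sketch uses a Jaccard measure on the sets of action sequences recorded per state; the class, thresholds, and helper names are assumptions for illustration only.

```python
from collections import defaultdict

def jaccard(a, b):
    """Similarity of two states as the overlap of their action-sequence sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

class SimilarityQ:
    def __init__(self, alpha=0.1, gamma=0.9, threshold=0.5):
        self.q = defaultdict(float)   # (state, action) -> action value
        self.seqs = defaultdict(set)  # state -> action sequences observed from it
        self.states = set()
        self.alpha, self.gamma, self.threshold = alpha, gamma, threshold

    def record_sequence(self, state, seq):
        """Remember that action sequence `seq` was executed starting in `state`."""
        self.states.add(state)
        self.seqs[state].add(tuple(seq))

    def update(self, s, a, r, s_next, actions):
        """One Q-learning update on (s, a), then reflect the same (scaled)
        change onto every state whose similarity to s exceeds the threshold."""
        best_next = max((self.q[(s_next, b)] for b in actions), default=0.0)
        delta = self.alpha * (r + self.gamma * best_next - self.q[(s, a)])
        self.q[(s, a)] += delta
        for t in self.states:
            if t == s:
                continue
            sim = jaccard(self.seqs[s], self.seqs[t])
            if sim >= self.threshold:
                self.q[(t, a)] += sim * delta
```

In this way a single experience improves the value estimates of all states that share repeated subtask structure, rather than of the visited state alone.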
General
Imprint: VDM Verlag
Country of origin: Germany
Release date: March 2009
First published: March 2009
Authors: Sertan Girgin
Dimensions: 229 x 152 x 6mm (L x W x T)
Format: Paperback - Trade
Pages: 104
ISBN-13: 978-3-639-13652-4
Categories: Books > Computing & IT > General theory of computing > General
LSN: 3-639-13652-7
Barcode: 9783639136524