
ECML 2006, Springer

Task-Driven Discretization of the Joint Space of Visual Percepts and Continuous Actions

We target the problem of closed-loop learning of control policies that map visual percepts to continuous actions. Our algorithm, called Reinforcement Learning of Joint Classes (RLJC), adaptively discretizes the joint space of visual percepts and continuous actions. In a sequence of attempts to remove perceptual aliasing, it incrementally builds a decision tree that applies tests either in the input perceptual space or in the output action space. The leaves of such a decision tree induce a piecewise constant, optimal state-action value function, which is computed through a reinforcement learning algorithm that uses the tree as a function approximator. The optimal policy is then derived by selecting the action that, given a percept, leads to the leaf that maximizes the value function. Our approach is quite general and also applies to learning mappings from continuous percepts to continuous actions. A simulated visual navigation problem illustrates the applicability of RLJC.
Sébastien Jodogne, Justus H. Piater
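
The abstract describes a decision tree over the joint space of percepts and actions whose leaves hold piecewise-constant state-action values, with the greedy policy obtained by evaluating candidate actions against that tree. The Python sketch below illustrates this idea under simplifying assumptions; the names (Node, q_value, greedy_action) and the hand-built tree are illustrative only, not the authors' RLJC implementation, which grows the tree adaptively to remove perceptual aliasing and fits the leaf values through reinforcement learning.

```python
# Illustrative sketch only: a tiny decision tree over the joint
# (percept, action) space used as a piecewise-constant Q-function
# approximator, in the spirit of RLJC. All names are hypothetical.
import numpy as np

class Node:
    """Internal nodes test one coordinate of the joint vector
    [percept, action] against a threshold; leaves store a Q estimate."""
    def __init__(self, dim=None, threshold=None, left=None, right=None, q=0.0):
        self.dim, self.threshold = dim, threshold   # dim indexes a percept or action coordinate
        self.left, self.right = left, right
        self.q = q                                  # Q value if this node is a leaf

    def is_leaf(self):
        return self.dim is None

    def lookup(self, joint):
        """Return the leaf reached by a joint percept-action vector."""
        node = self
        while not node.is_leaf():
            node = node.left if joint[node.dim] <= node.threshold else node.right
        return node

def q_value(tree, percept, action):
    """Piecewise-constant Q(percept, action) read off the leaf."""
    joint = np.concatenate([percept, action])
    return tree.lookup(joint).q

def greedy_action(tree, percept, candidate_actions):
    """Greedy policy: among sampled candidate actions, pick the one
    whose leaf maximizes the approximated state-action value."""
    return max(candidate_actions, key=lambda a: q_value(tree, percept, a))

# Example usage with a 2-D percept and a 1-D continuous action:
# the root splits on the action coordinate (index 2); both children are leaves.
tree = Node(dim=2, threshold=0.0,
            left=Node(q=0.3), right=Node(q=0.8))
percept = np.array([0.5, -1.2])
candidates = [np.array([a]) for a in np.linspace(-1.0, 1.0, 21)]
print(greedy_action(tree, percept, candidates))   # selects an action falling in the q=0.8 leaf
```

In the full algorithm the splits would be chosen adaptively during learning rather than fixed by hand, and the leaf values would be updated by the reinforcement learning procedure that uses the tree as its function approximator.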