
ICRA 2009, IEEE

Least absolute policy iteration for robust value function approximation

Abstract: Least-squares policy iteration is a useful reinforcement learning method in robotics due to its computational efficiency. However, it tends to be sensitive to outliers in observed rewards. In this paper, we propose an alternative method that employs the absolute loss for enhanced robustness and reliability. The proposed method is formulated as a linear programming problem that can be solved efficiently by standard optimization software, so robustness and reliability are gained without sacrificing the computational advantage. We demonstrate the usefulness of the proposed approach through simulated robot-control tasks.
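The sketch below illustrates the core idea stated in the abstract: replacing the squared loss in linear value function fitting with the absolute loss, which can be linearized with slack variables and solved as a linear program. It is a minimal illustration only, not the authors' implementation; the feature matrix, bootstrapped target construction, function name, and parameters are all assumptions for demonstration.

```python
# Minimal sketch (not the authors' code): one absolute-loss policy-evaluation
# step for a linear value model Q(s, a) ~ phi(s, a)^T w, solved as an LP.
# Feature construction, target definition, and all names are illustrative.
import numpy as np
from scipy.optimize import linprog

def lap_evaluation_step(Phi, Phi_next, rewards, w_old, gamma=0.95):
    """Fit w to minimize sum_i |phi_i^T w - y_i|, with assumed targets
    y_i = r_i + gamma * phi'_i^T w_old.

    The absolute loss is linearized via slacks t_i >= |phi_i^T w - y_i|,
    giving a standard LP over the stacked variable x = [w, t].
    """
    n, d = Phi.shape
    y = rewards + gamma * Phi_next @ w_old      # bootstrapped targets (assumed form)

    # Objective: minimize the sum of slacks t (w itself is unpenalized).
    c = np.concatenate([np.zeros(d), np.ones(n)])

    # Constraints encoding -t_i <= phi_i^T w - y_i <= t_i:
    #   [ Phi  -I ] [w; t] <=  y
    #   [-Phi  -I ] [w; t] <= -y
    I = np.eye(n)
    A_ub = np.block([[Phi, -I], [-Phi, -I]])
    b_ub = np.concatenate([y, -y])

    bounds = [(None, None)] * d + [(0, None)] * n   # w free, slacks nonnegative
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:d]                                # fitted weight vector w

# Purely illustrative usage with random data and a few outlier rewards:
rng = np.random.default_rng(0)
Phi = rng.normal(size=(200, 10))
Phi_next = rng.normal(size=(200, 10))
rewards = rng.normal(size=200)
rewards[::25] += 50.0                 # outliers that would skew a squared-loss fit
w = lap_evaluation_step(Phi, Phi_next, rewards, np.zeros(10))
print(w)
```

Because the objective is a sum of absolute deviations rather than squared errors, the outlier rewards pull on the solution far less strongly, which is the robustness property the abstract highlights.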
Added 23 May 2010
Updated 23 May 2010
Type Conference
Year 2009
Where ICRA
Authors Masashi Sugiyama, Hirotaka Hachiya, Hisashi Kashima, Tetsuro Morimura