NIPS
1993

Robust Reinforcement Learning in Motion Planning

While exploring to find better solutions, an agent performing online reinforcement learning (RL) can perform worse than is acceptable. In some cases, exploration might have unsafe, or even catastrophic, results, often modeled in terms of reaching 'failure' states of the agent's environment. This paper presents a method that uses domain knowledge to reduce the number of failures during exploration. This method formulates the set of actions from which the RL agent composes a control policy so as to ensure that exploration is conducted in a policy space that excludes most of the unacceptable policies. The resulting action set has a more abstract relationship to the task being solved than is common in many applications of RL. Although the cost of this added safety is that learning may result in a suboptimal solution, we argue that this is an appropriate tradeoff in many problems. We illustrate this method in the domain of motion planning. This work was done while the first author was finishing his Ph...
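The core idea, formulating the action set with domain knowledge so that exploration stays inside a safe policy space, can be sketched in a toy gridworld. Everything below (grid size, failure-state layout, Q-learning parameters, function names) is an illustrative assumption, not the paper's actual controller-based action set:

```python
import random

SIZE = 5
GOAL = (4, 4)
FAILURES = {(2, 2), (3, 1), (1, 3)}          # hypothetical 'catastrophic' states
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # primitive motions

def safe_actions(state):
    """Domain-knowledge filter: admit only moves that stay on the grid and
    avoid failure states, so exploration can never reach them."""
    x, y = state
    acts = []
    for dx, dy in MOVES:
        nx, ny = x + dx, y + dy
        if 0 <= nx < SIZE and 0 <= ny < SIZE and (nx, ny) not in FAILURES:
            acts.append((dx, dy))
    return acts

def q_learning(episodes=2000, alpha=0.5, gamma=0.95, eps=0.2, seed=0):
    """Epsilon-greedy Q-learning restricted to the safe action set."""
    rng = random.Random(seed)
    Q = {}
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(50):
            acts = safe_actions(s)
            a = rng.choice(acts) if rng.random() < eps else \
                max(acts, key=lambda b: Q.get((s, b), 0.0))
            nxt = (s[0] + a[0], s[1] + a[1])
            r = 1.0 if nxt == GOAL else -0.01
            best = 0.0 if nxt == GOAL else \
                max(Q.get((nxt, b), 0.0) for b in safe_actions(nxt))
            q = Q.get((s, a), 0.0)
            Q[(s, a)] = q + alpha * (r + gamma * best - q)
            if nxt == GOAL:
                break
            s = nxt
    return Q

def greedy_path(Q, max_steps=30):
    """Roll out the learned greedy policy from the start state."""
    s, path = (0, 0), [(0, 0)]
    for _ in range(max_steps):
        a = max(safe_actions(s), key=lambda b: Q.get((s, b), 0.0))
        s = (s[0] + a[0], s[1] + a[1])
        path.append(s)
        if s == GOAL:
            break
    return path
```

Because the agent only ever selects from `safe_actions`, no exploratory step can enter a failure state; the cost, as the abstract notes, is that the best policy expressible in this restricted space may be suboptimal relative to the unrestricted one.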
Added 02 Nov 2010
Updated 02 Nov 2010
Type Conference
Year 1993
Where NIPS
Authors Satinder P. Singh, Andrew G. Barto, Roderic A. Grupen, Christopher I. Connolly