Efficient Uncertainty Propagation for Reinforcement Learning with Limited Data

In a typical reinforcement learning (RL) setting, details of the environment are not given explicitly but have to be estimated from observations. Most RL approaches optimize only the expected value. However, when the number of observations is limited, considering expected values alone can lead to false conclusions; it is crucial to also account for the estimators' uncertainties. In this paper, we present a method to incorporate those uncertainties and propagate them to the conclusions. The method is only approximate, which keeps it computationally feasible. Furthermore, we describe a Bayesian approach to designing the estimators. Our experiments show that the method considerably increases the robustness of the derived policies compared to the standard approach. Key words: reinforcement learning, model-based, uncertainty, Bayesian modeling
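
To make the idea concrete, here is a minimal sketch of uncertainty-aware value iteration on a small discrete MDP, in the spirit of the abstract but not the paper's exact algorithm: transition probabilities get a Dirichlet posterior, the posterior variances are propagated through the Bellman iteration via first-order propagation (covariances ignored), and the policy acts pessimistically on Q minus a multiple of its standard deviation. The parameter xi, the assumed-known rewards, and all function names are illustrative assumptions.

```python
import numpy as np

def dirichlet_posterior(counts, alpha0=1.0):
    """Posterior mean and (diagonal) variance of transition probabilities
    under a symmetric Dirichlet prior with concentration alpha0."""
    alpha = counts + alpha0                      # posterior concentrations
    total = alpha.sum(axis=-1, keepdims=True)    # per-(s, a) normalizer
    mean = alpha / total
    var = alpha * (total - alpha) / (total**2 * (total + 1.0))
    return mean, var

def uncertainty_aware_vi(P, P_var, R, gamma=0.95, xi=1.0, iters=200):
    """Value iteration that propagates model uncertainty to the Q-function
    (first-order, covariances ignored) and acts greedily on the
    pessimistic estimate Q - xi * sigma(Q). Illustrative sketch only."""
    n_s, n_a, _ = P.shape
    Q = np.zeros((n_s, n_a))
    Q_var = np.zeros((n_s, n_a))
    for _ in range(iters):
        # Pessimistic greedy policy w.r.t. the current uncertainty estimate.
        pi = np.argmax(Q - xi * np.sqrt(Q_var), axis=1)
        V = Q[np.arange(n_s), pi]
        V_var = Q_var[np.arange(n_s), pi]
        target = R + gamma * V[None, None, :]    # r(s,a,s') + gamma * V(s')
        Q = (P * target).sum(axis=-1)
        # Propagate variance from both the transition estimates and the
        # (uncertain) successor values.
        Q_var = (target**2 * P_var
                 + P**2 * gamma**2 * V_var[None, None, :]).sum(axis=-1)
    sigma = np.sqrt(Q_var)
    return Q, sigma, np.argmax(Q - xi * sigma, axis=1)

# Toy usage: 3 states, 2 actions, sparse observation counts.
rng = np.random.default_rng(0)
counts = rng.integers(0, 5, size=(3, 2, 3)).astype(float)
R = rng.normal(size=(3, 2, 3))                   # assumed-known rewards
P, P_var = dirichlet_posterior(counts)
Q, Q_std, policy = uncertainty_aware_vi(P, P_var, R)
print("pessimistic policy:", policy)
```

With few observations per state-action pair, the Dirichlet variance term dominates and the pessimistic policy avoids transitions whose estimates rest on little data; as counts grow, the variance shrinks and the policy approaches the standard expected-value optimum.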
Type Conference
Year 2009
Where ICANN
Publisher Springer
Authors Alexander Hans, Steffen Udluft