IROS 2009, IEEE

Bayesian reinforcement learning in continuous POMDPs with Gaussian processes

Partially Observable Markov Decision Processes (POMDPs) provide a rich mathematical framework for real-world sequential decision problems, but most solution approaches require the model to be known. Moreover, mainstream POMDP research focuses on the discrete case, which complicates application to realistic problems that are naturally modeled with continuous state spaces. In this paper, we consider the problem of optimal control in continuous, partially observable environments when the parameters of the model are unknown. We advocate the use of Gaussian Process Dynamical Models (GPDMs), which allow the model to be learned through experience with the environment. Our results on the blimp problem show that the approach can learn good models of the sensors and actuators in order to maximize long-term rewards.
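The abstract's central idea, learning a dynamics model from experience with Gaussian process regression rather than assuming a known model, can be sketched in miniature. The snippet below is a minimal, fully observable toy version: it fits a GP to observed transitions (x_t, u_t) → x_{t+1} of a 1-D linear system and predicts the next state. The class name, kernel hyperparameters, and toy system are illustrative assumptions, not the paper's actual GPDM (which additionally handles latent state in a POMDP).

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel between the row vectors of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

class GPDynamicsModel:
    """GP regression on observed transitions (x_t, u_t) -> x_{t+1}.

    Hypothetical illustration of model learning from experience; not the
    GPDM formulation used in the paper.
    """
    def __init__(self, noise=1e-2):
        self.noise = noise

    def fit(self, X, U, X_next):
        self.Z = np.hstack([X, U])  # inputs: state-action pairs
        K = rbf_kernel(self.Z, self.Z) + self.noise * np.eye(len(self.Z))
        self.alpha = np.linalg.solve(K, X_next)  # K^{-1} y
        return self

    def predict(self, x, u):
        # Posterior mean of the next state given a state-action pair.
        z = np.hstack([x, u])[None, :]
        return rbf_kernel(z, self.Z) @ self.alpha

# Toy 1-D system: x' = 0.9 x + 0.5 u, observed with small noise.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (50, 1))
U = rng.uniform(-1, 1, (50, 1))
X_next = 0.9 * X + 0.5 * U + 0.01 * rng.standard_normal((50, 1))

model = GPDynamicsModel().fit(X, U, X_next)
pred = model.predict(np.array([0.2]), np.array([-0.3]))
```

After fitting on 50 random transitions, the predicted next state for (x=0.2, u=-0.3) lands close to the true value 0.9·0.2 + 0.5·(-0.3) = 0.03, showing how experience alone recovers the dynamics.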
Added: 24 May 2010
Updated: 24 May 2010
Type: Conference
Year: 2009
Where: IROS
Authors: Patrick Dallaire, Camille Besse, Stéphane Ross, Brahim Chaib-draa