ICRA 2008, IEEE

Bayesian reinforcement learning in continuous POMDPs with application to robot navigation

Abstract — We consider the problem of optimal control in continuous and partially observable environments when the parameters of the model are not known exactly. Partially Observable Markov Decision Processes (POMDPs) provide a rich mathematical model for such environments, but most solution approaches require the model to be known. This is a limitation in practice, as exact model parameters are often difficult to specify. We adopt a Bayesian approach in which a posterior distribution over the model parameters is maintained and updated through experience with the environment. We propose a particle filter algorithm to maintain the posterior distribution, and an online planning algorithm, based on trajectory sampling, to plan the best action to perform under the current posterior. The resulting approach selects control actions that optimally trade off between 1) exploring the environment to learn the model, 2) identifying the system’s state, and 3) exploiting its knowledge in or...
Stéphane Ross, Brahim Chaib-draa, Joelle Pineau
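
The two computational pieces named in the abstract, a particle filter over the joint posterior of hidden state and unknown model parameters, and trajectory-sampling planning under that posterior, can be illustrated with a minimal sketch. The sketch below is not the authors' algorithm (which handles continuous action spaces and explicitly balances exploration, state identification, and exploitation); it assumes a small discrete set of candidate actions, and the model functions (transition_model, observation_likelihood, reward_model) are hypothetical placeholders supplied by the caller.

```python
# Illustrative sketch only: particle filtering over (state, model parameters)
# plus Monte-Carlo trajectory sampling for action selection. All model
# functions are hypothetical placeholders, not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_update(particles, weights, action, observation,
                           transition_model, observation_likelihood):
    """One Bayes-filter step over (state, model-parameter) particles."""
    new_particles = []
    new_weights = np.empty(len(particles))
    for i, (state, params) in enumerate(particles):
        # Propagate the hidden state using this particle's own model parameters.
        next_state = transition_model(state, action, params, rng)
        # Re-weight by how well the particle explains the new observation.
        new_weights[i] = weights[i] * observation_likelihood(observation, next_state, params)
        new_particles.append((next_state, params))
    new_weights /= new_weights.sum()
    # Resample to avoid weight degeneracy; weights become uniform afterwards.
    idx = rng.choice(len(new_particles), size=len(new_particles), p=new_weights)
    resampled = [new_particles[i] for i in idx]
    return resampled, np.full(len(resampled), 1.0 / len(resampled))

def plan_by_trajectory_sampling(particles, weights, actions, depth, n_rollouts,
                                transition_model, reward_model, gamma=0.95):
    """Score each candidate action by rollouts sampled from the posterior."""
    best_action, best_value = None, -np.inf
    for first_action in actions:
        returns = []
        for _ in range(n_rollouts):
            # Sampling a particle fixes both a state hypothesis and a model.
            i = rng.choice(len(particles), p=weights)
            state, params = particles[i]
            total, discount, action = 0.0, 1.0, first_action
            for _ in range(depth):
                total += discount * reward_model(state, action, params)
                state = transition_model(state, action, params, rng)
                discount *= gamma
                action = rng.choice(actions)  # random rollout policy, for simplicity
            returns.append(total)
        value = float(np.mean(returns))
        if value > best_value:
            best_action, best_value = first_action, value
    return best_action
```

In use, the agent would alternate the two routines: call plan_by_trajectory_sampling to pick an action under the current posterior, execute it, observe the result, and call particle_filter_update to fold the new evidence back into the particle set.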
Type: Conference
Year: 2008
Where: ICRA
Authors: Stéphane Ross, Brahim Chaib-draa, Joelle Pineau