Generalizing Plans to New Environments in Relational MDPs

A longstanding goal in planning research is the ability to generalize plans developed for some set of environments to a new but similar environment, with minimal or no replanning. Such generalization can both reduce planning time and allow us to tackle larger domains than those tractable for direct planning. In this paper, we present an approach to the generalization problem based on a new framework of relational Markov Decision Processes (RMDPs). An RMDP can model a set of similar environments by representing objects as instances of different classes. In order to generalize plans to multiple environments, we define an approximate value function specified in terms of classes of objects and, in a multiagent setting, by classes of agents. This class-based approximate value function is optimized relative to a sampled subset of environments, and computed using an efficient linear programming method. We prove that a polynomial number of sampled environments suffices to achieve performance ...
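
The class-based value function described in the abstract lends itself to a short illustration. The Python sketch below is a hedged reading of the idea, not the authors' implementation: every object in an environment is an instance of a class, each class carries its own local value function, and the value of an environment state is the sum of the class value functions applied to each object's local state. The class names ("footman", "enemy"), the feature maps, and the weights are illustrative assumptions; in the paper the weights would come from the linear program solved over a sampled subset of environments, not be hand-set.

    # Sketch of a class-based approximate value function for a relational MDP.
    # Assumption: each object's contribution depends only on its class and its
    # object-local state; weights are placeholders for LP-optimized values.
    from dataclasses import dataclass
    from typing import Callable, Dict, List, Tuple

    # A class-level value function: (weight vector, feature map over local state).
    ClassValueFn = Tuple[List[float], Callable[[dict], List[float]]]

    @dataclass
    class RelationalObject:
        name: str          # object identifier
        cls: str           # class of the object, e.g. "footman" or "enemy"
        local_state: dict  # object-local state variables

    def value_of_environment(objects: List[RelationalObject],
                             class_value_fns: Dict[str, ClassValueFn]) -> float:
        """Class-based approximate value: sum per-object class value functions."""
        total = 0.0
        for obj in objects:
            weights, features = class_value_fns[obj.cls]
            phi = features(obj.local_state)
            total += sum(w * f for w, f in zip(weights, phi))
        return total

    if __name__ == "__main__":
        # Hypothetical weights and features; the LP over sampled environments
        # would supply the weights in the actual method.
        class_value_fns = {
            "footman": ([2.0, -1.0], lambda s: [s["health"], s["distance_to_enemy"]]),
            "enemy":   ([-3.0],      lambda s: [s["health"]]),
        }
        env = [
            RelationalObject("f1", "footman", {"health": 0.8, "distance_to_enemy": 2.0}),
            RelationalObject("f2", "footman", {"health": 0.5, "distance_to_enemy": 1.0}),
            RelationalObject("e1", "enemy",   {"health": 0.9}),
        ]
        print(value_of_environment(env, class_value_fns))

Because the value function is indexed by class rather than by individual object, the same weights apply to environments with any number of objects, which is what allows a solution computed on small sampled environments to transfer to new, larger ones.
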
Added: 31 Oct 2010
Updated: 31 Oct 2010
Type: Conference
Year: 2003
Where: IJCAI
Authors: Carlos Guestrin, Daphne Koller, Chris Gearhart, Neal Kanodia