Representations of Decision-Theoretic Planning Tasks

Goal-directed Markov Decision Process models (GDMDPs) are good models for many decision-theoretic planning tasks. They have been used in conjunction with two different reward structures, namely the goal-reward representation and the action-penalty representation. We apply GDMDPs to planning tasks in the presence of traps such as steep slopes for outdoor robots or staircases for indoor robots, and study the differences between the two reward structures. In these situations, achieving the goal is often the primary objective, while minimizing the travel time is only of secondary importance. We show that the action-penalty representation without discounting guarantees that the optimal plan achieves the goal for sure (if this is possible), but neither the action-penalty representation with discounting nor the goal-reward representation with discounting has this property. We then show exactly when this trapping phenomenon occurs, using a novel interpretation for discounting, namely that it models...
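
The trapping phenomenon described in the abstract can be made concrete with a small worked example. The sketch below is my own illustration, not code from the paper: it compares the two start-state actions of a toy GDMDP in which a risky shortcut reaches the goal in one step but falls into an absorbing trap with probability p, while a safe route reaches the goal in two steps for sure. The probabilities, discount factor, and function names are assumptions made for the example.

```python
import math

# Probability that the risky shortcut ends in the absorbing trap, and the
# discount factor used in the discounted variants (both values assumed).
p = 0.1
gamma = 0.8

def action_penalty_q(discount):
    """Q-values of the two start-state actions under the action-penalty
    representation: every action costs -1, the goal state is absorbing and
    free, and the trap state yields -1 forever."""
    v_trap = -math.inf if discount == 1.0 else -1.0 / (1.0 - discount)
    q_risky = -1.0 + discount * p * v_trap   # reaching the goal contributes 0
    q_safe = -1.0 + discount * (-1.0)        # two guaranteed steps to the goal
    return q_risky, q_safe

def goal_reward_q(discount):
    """Q-values under the goal-reward representation: +1 on reaching the
    goal, 0 everywhere else (including the trap)."""
    q_risky = (1.0 - p) * 1.0                # goal reached in one step
    q_safe = discount * 1.0                  # goal reached in two steps
    return q_risky, q_safe

cases = [
    ("action-penalty, undiscounted", action_penalty_q(1.0)),
    (f"action-penalty, gamma={gamma}", action_penalty_q(gamma)),
    (f"goal-reward, gamma={gamma}", goal_reward_q(gamma)),
]
for name, (q_risky, q_safe) in cases:
    choice = "safe route" if q_safe >= q_risky else "risky shortcut"
    print(f"{name:30s} Q(risky)={q_risky:7.2f}  Q(safe)={q_safe:7.2f}  -> {choice}")
```

With these illustrative numbers, only the undiscounted action-penalty variant prefers the safe route (the trap's value diverges to minus infinity); under discounting, both reward structures can prefer the shortcut that risks never reaching the goal, which is the behavior the abstract attributes to them.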
Added: 01 Nov 2010
Updated: 01 Nov 2010
Type: Conference
Year: 2000
Where: AIPS
Authors: Sven Koenig, Yaxin Liu