In this paper, we present a new algorithm that integrates recent advances in solving continuous bandit problems with sample-based rollout methods for planning in Markov Decision Processes...
Christopher R. Mansley, Ari Weinstein, Michael L. ...
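The abstract stops short of the algorithmic details, but the combination it names can be illustrated with a minimal sketch: treat each candidate continuous action at the current state as a bandit arm and score it with sample-based rollouts through a generative model. Everything below (the toy_model, the uniform arm sampling, the parameter values) is a hypothetical stand-in rather than the authors' method; in particular, uniform sampling of arms replaces whatever continuous-bandit strategy the paper actually uses.

```python
# Minimal sketch: continuous-action selection by sampling "arms" and
# scoring each with a sample-based rollout through a generative model.
# Names and parameters here are illustrative assumptions only.
import random


def rollout_return(model, state, action, depth, gamma, rollout_policy):
    """Estimate the return of taking `action` in `state` by simulating
    one trajectory of length `depth` with the generative model."""
    total, discount = 0.0, 1.0
    s, a = state, action
    for _ in range(depth):
        s, r = model(s, a)          # generative model: (state, action) -> (next_state, reward)
        total += discount * r
        discount *= gamma
        a = rollout_policy(s)       # default policy for the rest of the rollout
    return total


def plan_action(model, state, action_low, action_high,
                budget=200, depth=20, gamma=0.95):
    """Pick a continuous action in [action_low, action_high] by sampling
    candidate arms uniformly and evaluating each with one rollout.
    (Uniform sampling keeps the sketch short; a real continuous-bandit
    planner would allocate rollouts more cleverly.)"""
    rollout_policy = lambda s: random.uniform(action_low, action_high)
    best_a, best_v = None, float("-inf")
    for _ in range(budget):
        a = random.uniform(action_low, action_high)
        v = rollout_return(model, state, a, depth, gamma, rollout_policy)
        if v > best_v:
            best_a, best_v = a, v
    return best_a


# Toy usage: a 1-D "move toward the origin" MDP with continuous actions.
def toy_model(s, a):
    s_next = s + a
    return s_next, -abs(s_next)     # reward is higher the closer we are to 0


if __name__ == "__main__":
    print(plan_action(toy_model, state=3.0, action_low=-1.0, action_high=1.0))
```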
We present an extension of state-based planning from traditional STRIPS to function application, which allows operator effects to be expressed as updates. As proposed in PDDL, fluent variables...
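As a rough illustration of operator effects expressed as updates to numeric fluents, the sketch below grounds a hypothetical drive action whose effect decreases a fuel fluent and increases an odometer fluent, instead of only adding and deleting propositions as in classical STRIPS. The domain, action, and fluent names are invented for the example and are not taken from the paper.

```python
# A small sketch, under assumed semantics, of a grounded PDDL-style
# numeric effect: the operator updates fluent values rather than just
# toggling propositions. The 'drive' domain below is illustrative only.
def apply_drive(state, truck, origin, dest):
    """Ground 'drive' action: requires enough fuel, then updates the
    numeric fluents `fuel` and `odometer`."""
    dist = state["distance"][(origin, dest)]
    assert state["at"][truck] == origin
    assert state["fuel"][truck] >= dist          # numeric precondition
    new_state = {k: dict(v) for k, v in state.items()}
    new_state["at"][truck] = dest
    new_state["fuel"][truck] -= dist             # (decrease (fuel ?t) (distance ?from ?to))
    new_state["odometer"][truck] += dist         # (increase (odometer ?t) (distance ?from ?to))
    return new_state


state = {
    "at": {"truck1": "depot"},
    "fuel": {"truck1": 100.0},
    "odometer": {"truck1": 0.0},
    "distance": {("depot", "market"): 40.0},
}
print(apply_drive(state, "truck1", "depot", "market")["fuel"]["truck1"])  # 60.0
```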
We consider symbolic dynamic programming (SDP) for solving Markov Decision Processes (MDPs) with factored state and action spaces, where both states and actions are described by se...
Aswin Raghavan, Saket Joshi, Alan Fern, Prasad Tad...
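To make the setting concrete, the sketch below builds a toy MDP whose states and actions are both assignments to small sets of boolean variables and solves it with ordinary enumerative value iteration. This is deliberately not symbolic dynamic programming: the point of SDP is to perform the Bellman backups on symbolic (e.g., decision-diagram) representations so that states and actions never have to be enumerated, which is exactly the blowup this naive sketch exhibits. The dynamics and reward are invented for the example.

```python
# Sketch of the factored setting only: states and actions are boolean
# variable assignments, solved here by plain enumerative value iteration
# (the enumeration SDP is designed to avoid).
from itertools import product

STATE_VARS, ACTION_VARS = 3, 2            # 2^3 states, 2^2 factored actions
states = list(product([0, 1], repeat=STATE_VARS))
actions = list(product([0, 1], repeat=ACTION_VARS))
gamma = 0.9


def step(s, a):
    """Deterministic toy dynamics: action bit i flips state bit i."""
    s = list(s)
    for i, bit in enumerate(a):
        if bit:
            s[i] = 1 - s[i]
    return tuple(s)


def reward(s, a):
    """Reward for having all state bits set, minus a cost per action bit used."""
    return float(all(s)) - 0.1 * sum(a)


V = {s: 0.0 for s in states}
for _ in range(100):                       # value iteration to (near) convergence
    V = {s: max(reward(s, a) + gamma * V[step(s, a)] for a in actions)
         for s in states}

print(V[(0, 0, 0)], V[(1, 1, 1)])
```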
It has been shown recently that the complexity of belief tracking in deterministic conformant and contingent planning is exponential in a width parameter that is often bounded and...
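For contrast with the width-based results the abstract refers to, here is the naive "flat" formulation of belief tracking in deterministic conformant and contingent planning: the belief is the set of states consistent with the history, deterministic actions map that set forward, and observations filter it. Its cost scales with the size of the belief, exponential in the number of state variables in the worst case, which is the blowup that bounded-width tracking avoids. The line-world domain below is purely illustrative.

```python
# Flat belief tracking sketch: a belief is an explicit set of possible
# states; deterministic actions progress it, observations filter it.
def progress(belief, action):
    """Apply a deterministic action (a function on states) to every state
    the agent currently considers possible."""
    return {action(s) for s in belief}


def filter_obs(belief, observation, sensor):
    """For contingent planning: keep only states consistent with what was
    observed (sensor maps a state to the observation it would produce)."""
    return {s for s in belief if sensor(s) == observation}


# Toy domain: the agent is at an unknown position 0..4 on a line.
initial_belief = {0, 1, 2, 3, 4}
move_right = lambda s: min(s + 1, 4)       # deterministic, position saturates at 4

belief = initial_belief
for _ in range(4):
    belief = progress(belief, move_right)
print(belief)                              # {4}: repeated moves make the position certain

# With sensing, an observation can shrink the belief directly.
at_goal = lambda s: s == 4
print(filter_obs(initial_belief, False, at_goal))   # {0, 1, 2, 3}
```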
The rapid proliferation of mobile devices has made it challenging for users to maintain a consistent digital history across all their personal devices. Even with a variety of cloud computing...