Sciweavers

5 search results (page 1 of 1) for "Speeding Up Planning in Markov Decision Processes via Automa..."
ICML 2006
Automatic basis function construction for approximate dynamic programming and reinforcement learning
We address the problem of automatically constructing basis functions for linear approximation of the value function of a Markov Decision Process (MDP). Our work builds on results ...
Philipp W. Keller, Shie Mannor, Doina Precup
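As background for the entry above: a minimal sketch of linear value-function approximation over a fixed basis, fitted by least squares on Monte Carlo returns. The chain MDP, the polynomial features, and every name here are illustrative assumptions; Keller, Mannor, and Precup's contribution is constructing the basis automatically, which this sketch does not attempt.

```python
import numpy as np

n_states, gamma = 5, 0.9
rng = np.random.default_rng(0)

def phi(s):
    # Hand-chosen polynomial features of the (normalized) state index;
    # the paper's goal is to build such features automatically.
    x = s / (n_states - 1)
    return np.array([1.0, x, x * x])

def rollout(s, horizon=50):
    # Discounted return of a random-walk policy on a chain MDP
    # that pays reward 1 whenever the agent occupies the last state.
    g, discount = 0.0, 1.0
    for _ in range(horizon):
        s = min(max(s + rng.choice([-1, 1]), 0), n_states - 1)
        g += discount * (1.0 if s == n_states - 1 else 0.0)
        discount *= gamma
    return g

# Fit the weights w of V(s) ~= phi(s) . w by least squares
# against Monte Carlo returns sampled from each start state.
X = np.array([phi(s) for s in range(n_states) for _ in range(200)])
y = np.array([rollout(s) for s in range(n_states) for _ in range(200)])
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print("approximate state values:", [round(float(phi(s) @ w), 3) for s in range(n_states)])
```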
AAAI 1996
Computing Optimal Policies for Partially Observable Decision Processes Using Compact Representations
Partially-observable Markov decision processes provide a very general model for decision-theoretic planning problems, allowing the trade-offs between various courses of actions t...
Craig Boutilier, David Poole
ICML 2007
Automatic shaping and decomposition of reward functions
This paper investigates the problem of automatically learning how to restructure the reward function of a Markov decision process so as to speed up reinforcement learning. We begi...
Bhaskara Marthi
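As background for Marthi's entry: reward shaping is commonly formalized as potential-based shaping (Ng, Harada, and Russell, 1999), which adds gamma * Phi(s') - Phi(s) to the reward and provably preserves optimal policies. The sketch below shows only this fixed form with a hand-picked potential; the corridor task and the potential function are illustrative assumptions, and the paper's automatic learning of the shaping is not reproduced here.

```python
def shaped_reward(r, s, s_next, potential, gamma=0.99):
    """Original reward plus the shaping term F(s, s') = gamma * Phi(s') - Phi(s)."""
    return r + gamma * potential(s_next) - potential(s)

# Example: negative distance-to-goal potential on a 1-D corridor (illustrative only).
goal = 10
potential = lambda s: -abs(goal - s)

print(shaped_reward(0.0, 3, 4, potential))  # > 0: stepping toward the goal is encouraged
print(shaped_reward(0.0, 4, 3, potential))  # < 0: stepping away is discouraged
```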
IJRR 2011
Motion planning under uncertainty for robotic tasks with long time horizons
Partially observable Markov decision processes (POMDPs) are a principled mathematical framework for planning under uncertainty, a crucial capability for reliable operation...
Hanna Kurniawati, Yanzhu Du, David Hsu, Wee Sun Lee