Sciweavers

48 search results - page 1 / 10 for "Approximate Planning in POMDPs with Macro-Actions"
NIPS 2003
Approximate Planning in POMDPs with Macro-Actions
Recent research has demonstrated that useful POMDP solutions do not require consideration of the entire belief space. We extend this idea with the notion of temporal abstraction. ...
Georgios Theocharous, Leslie Pack Kaelbling
AAAI 2010
PUMA: Planning Under Uncertainty with Macro-Actions
Planning in large, partially observable domains is challenging, especially when a long-horizon lookahead is necessary to obtain a good policy. Traditional POMDP planners that plan...
Ruijie He, Emma Brunskill, Nicholas Roy
PKDD 2010 (Springer)
Efficient Planning in Large POMDPs through Policy Graph Based Factorized Approximations
Partially observable Markov decision processes (POMDPs) are widely used for planning under uncertainty. In many applications, the huge size of the POMDP state space makes straightf...
Joni Pajarinen, Jaakko Peltonen, Ari Hottinen, Mik...
IJRR 2010
Planning under Uncertainty for Robotic Tasks with Mixed Observability
Partially observable Markov decision processes (POMDPs) provide a principled, general framework for robot motion planning in uncertain and dynamic environments. They have been app...
Sylvie C. W. Ong, Shao Wei Png, David Hsu, Wee Sun...
FLAIRS 2007
Dynamic DDN Construction for Lightweight Planning Architectures
POMDPs are a popular framework for representing decision making problems that contain uncertainty. The high computational complexity of finding exact solutions to POMDPs has spaw...
William H. Turkett