Sciweavers

83 search results, page 12 of 17
» Planning and Acting in Partially Observable Stochastic Domains
AAAI 2011
Linear Dynamic Programs for Resource Management
Sustainable resource management in many domains presents large continuous stochastic optimization problems, which can often be modeled as Markov decision processes (MDPs). To solv...
Marek Petrik, Shlomo Zilberstein
FLAIRS 2009
Maintaining Focus: Overcoming Attention Deficit Disorder in Contingent Planning
In our experiments with four well-known systems for solving partially observable planning problems (Contingent-FF, MBP, PKS, and POND), we were greatly surprised to find that they...
Ronald Alford, Ugur Kuter, Dana S. Nau, Elnatan Re...
ATAL 2010 (Springer)
Closing the learning-planning loop with predictive state representations
A central problem in artificial intelligence is to choose actions to maximize reward in a partially observable, uncertain environment. To do so, we must learn an accurate model of ...
Byron Boots, Sajid M. Siddiqi, Geoffrey J. Gordon
ICML 2008 (ACM)
Reinforcement learning with limited reinforcement: using Bayes risk for active learning in POMDPs
Partially Observable Markov Decision Processes (POMDPs) have succeeded in planning domains that require balancing actions that increase an agent's knowledge and actions that ...
Finale Doshi, Joelle Pineau, Nicholas Roy
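
The POMDP entries on this page all rest on the same primitive: the agent maintains a belief state, a probability distribution over hidden states, and updates it by Bayes' rule after every action and observation. As a rough illustration of that standard update (a minimal Python sketch, not code from any of the listed papers; the transition and observation arrays are hypothetical placeholders):

import numpy as np

def belief_update(b, a, o, T, Z):
    # b: current belief over S states, shape (S,)
    # T: transition model, T[a, s, s2] = P(s2 | s, a), shape (A, S, S)
    # Z: observation model, Z[a, s2, o] = P(o | s2, a), shape (A, S, O)
    predicted = b @ T[a]                    # predict step: P(s2 | b, a)
    unnormalized = Z[a][:, o] * predicted   # correct step: weight by P(o | s2, a)
    norm = unnormalized.sum()
    if norm == 0.0:
        raise ValueError("observation has zero probability under this belief")
    return unnormalized / norm

# Hypothetical 2-state, 2-action, 2-observation example.
T = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.5, 0.5]]])
Z = np.array([[[0.8, 0.2], [0.3, 0.7]],
              [[0.5, 0.5], [0.5, 0.5]]])
b0 = np.array([0.5, 0.5])
print(belief_update(b0, a=0, o=1, T=T, Z=Z))   # roughly [0.26, 0.74]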
AIPS 2008
HiPPo: Hierarchical POMDPs for Planning Information Processing and Sensing Actions on a Robot
Flexible general-purpose robots need to tailor their visual processing to their task, on the fly. We propose a new approach to this within a planning framework, where the goal is ...
Mohan Sridharan, Jeremy L. Wyatt, Richard Dearden