Sciweavers
Search results for "Value-Directed Belief State Approximation for POMDPs" (29 results, page 1 of 6 shown)

UAI 2000
Value-Directed Belief State Approximation for POMDPs
We consider the problem of belief-state monitoring for the purposes of implementing a policy for a partially observable Markov decision process (POMDP), specifically how one might ap...
Pascal Poupart, Craig Boutilier
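
The belief-state monitoring these results revolve around is the standard Bayes-filter update of a distribution over hidden states after each action and observation. Below is a minimal Python sketch of the exact (uncompressed) update; the function name and the array layouts T[a, s, s'] and Z[a, s', o] are illustrative assumptions, not taken from any of the papers.

```python
import numpy as np

def belief_update(b, a, o, T, Z):
    """Exact belief-state update for a discrete POMDP.

    b : (S,)      current belief over hidden states
    T : (A, S, S) transition model, T[a, s, s'] = Pr(s' | s, a)
    Z : (A, S, O) observation model, Z[a, s', o] = Pr(o | s', a)
    Returns the posterior belief after doing action a and observing o.
    """
    pred = T[a].T @ b          # predict: sum_s Pr(s' | s, a) * b(s)
    post = Z[a][:, o] * pred   # correct: weight by observation likelihood
    return post / post.sum()   # normalize (assumes Pr(o | b, a) > 0)
```

Exact monitoring costs O(|S|^2) per step, which is the cost that value-directed approximation schemes such as Poupart and Boutilier's aim to reduce.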

NIPS 2004
VDCBPI: an Approximate Scalable Algorithm for Large POMDPs
Existing algorithms for discrete partially observable Markov decision processes can at best solve problems of a few thousand states due to two important sources of intractability:...
Pascal Poupart, Craig Boutilier

IAT 2005 (IEEE)
Decomposing Large-Scale POMDP Via Belief State Analysis
The partially observable Markov decision process (POMDP) is commonly used to model a stochastic environment with unobservable states to support optimal decision making. Computing ...
Xin Li, William K. Cheung, Jiming Liu

NIPS 2007
What makes some POMDP problems easy to approximate?
Point-based algorithms have been surprisingly successful in computing approximately optimal solutions for partially observable Markov decision processes (POMDPs) in high dimension...
David Hsu, Wee Sun Lee, Nan Rong
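
For context on the point-based solvers this paper analyzes: they keep a finite set of alpha-vectors and back the value function up only at sampled belief points. The following is a hedged sketch of a generic PBVI-style backup at one belief point, not this paper's contribution; it reuses the hypothetical T and Z layouts from the sketch above, plus an assumed reward table R[a, s].

```python
import numpy as np

def point_based_backup(b, Gamma, T, Z, R, gamma):
    """One point-based backup of the value function at a single belief b.

    Gamma : list of alpha-vectors, each of shape (S,)
    R     : (A, S) immediate reward R[a, s]; gamma is the discount factor
    Returns the alpha-vector that is maximal at b after the backup.
    """
    n_actions = T.shape[0]
    n_obs = Z.shape[2]
    best, best_val = None, -np.inf
    for a in range(n_actions):
        alpha_a = R[a].astype(float).copy()
        for o in range(n_obs):
            # M[s, s'] = Pr(s', o | s, a)
            M = T[a] * Z[a][:, o][None, :]
            # pick the alpha-vector with the best expected value at b for (a, o)
            g_ao = max((M @ g for g in Gamma), key=lambda p: b @ p)
            alpha_a += gamma * g_ao
        if b @ alpha_a > best_val:
            best, best_val = alpha_a, b @ alpha_a
    return best
```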

UAI 2001
Vector-space Analysis of Belief-state Approximation for POMDPs
We propose a new approach to value-directed belief state approximation for POMDPs. The value-directed model allows one to choose approximation methods for belief state monitoring tha...
Pascal Poupart, Craig Boutilier
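
The value-directed idea running through both Poupart and Boutilier entries is to monitor the belief in a compressed space while bounding the error in value terms rather than in belief terms. As a loose illustration only, using a generic least-squares projection rather than the papers' actual construction, one might track coordinates c in a fixed basis B and re-project after each exact update:

```python
import numpy as np

def compressed_belief_update(c, a, o, T, Z, B):
    """Belief monitoring in a compressed subspace (illustrative sketch only).

    B : (S, k) fixed basis; the full belief is approximated as B @ c
    c : (k,)   current compressed coordinates
    Reconstructs an approximate belief, applies the exact Bayes update,
    then projects back onto the subspace by least squares.
    """
    b = B @ c                  # reconstruct approximate full belief
    pred = T[a].T @ b          # exact prediction step
    post = Z[a][:, o] * pred   # exact correction step
    post = post / post.sum()
    c_new, *_ = np.linalg.lstsq(B, post, rcond=None)
    return c_new
```

What distinguishes the value-directed papers is how the compression is chosen: the quantity being bounded is the induced loss in decision quality, not the distance between the true and approximate beliefs.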