Sciweavers

93 search results - page 3 / 19
» Computing Optimal Policies for Partially Observable Decision...
AAAI
2010
Relational Partially Observable MDPs
Relational Markov Decision Processes (MDPs) are a useful abstraction for stochastic planning problems since one can develop abstract solutions for them that are independent of domain size ...
Chenggang Wang, Roni Khardon
AAAI
2006
Compact, Convex Upper Bound Iteration for Approximate POMDP Planning
Partially observable Markov decision processes (POMDPs) are an intuitive and general way to model sequential decision making problems under uncertainty. Unfortunately, even approx...
Tao Wang, Pascal Poupart, Michael H. Bowling, Dale...
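For context on the belief-state view that the POMDP abstracts in this listing refer to, here is a minimal sketch of the standard exact POMDP belief update. It is a generic illustration, not the compact upper-bound iteration proposed in the entry above; the array layout and the names belief_update, T, Z are assumptions made for this sketch.

```python
import numpy as np

def belief_update(b, a, o, T, Z):
    """Standard exact POMDP belief update (illustrative sketch).

    b : (S,)     current belief over hidden states
    T : (A,S,S)  transition model, T[a, s, s2] = P(s2 | s, a)
    Z : (A,S,O)  observation model, Z[a, s2, o] = P(o | s2, a)
    """
    predicted = b @ T[a]             # P(s2 | b, a) = sum_s b(s) * T[a, s, s2]
    unnorm = Z[a, :, o] * predicted  # weight by likelihood of the observation
    total = unnorm.sum()             # P(o | b, a)
    if total == 0.0:
        raise ValueError("observation has zero probability under this belief/action")
    return unnorm / total

# Tiny demo with made-up numbers: 2 states, 1 action, 2 observations.
T = np.array([[[0.9, 0.1], [0.2, 0.8]]])
Z = np.array([[[0.7, 0.3], [0.4, 0.6]]])
b = np.array([0.5, 0.5])
print(belief_update(b, a=0, o=1, T=T, Z=Z))  # ~[0.379, 0.621]
```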
WIOPT
2011
IEEE
Network utility maximization over partially observable Markovian channels
This paper considers maximizing throughput utility in a multi-user network with partially observable Markov ON/OFF channels. Instantaneous channel states are never known...
Chih-Ping Li, Michael J. Neely
FLAIRS
2009
Dynamic Programming Approximations for Partially Observable Stochastic Games
Partially observable stochastic games (POSGs) provide a rich mathematical framework for planning under uncertainty by a group of agents. However, this modeling advantage comes wit...
Akshat Kumar, Shlomo Zilberstein
AAAI
2006
Incremental Least Squares Policy Iteration for POMDPs
We present a new algorithm, called incremental least squares policy iteration (ILSPI), for finding the infinite-horizon stationary policy for partially observable Markov decision ...
Hui Li, Xuejun Liao, Lawrence Carin