
Winning back the CUP for distributed POMDPs: planning over continuous belief spaces

Distributed Partially Observable Markov Decision Problems (Distributed POMDPs) are emerging as a popular approach for modeling multiagent systems, and many different algorithms have been proposed to obtain locally or globally optimal policies. Unfortunately, most of these algorithms have either been explicitly designed or experimentally evaluated assuming knowledge of a starting belief point, an assumption that often does not hold in complex, uncertain domains. Instead, in such domains, it is important for agents to explicitly plan over continuous belief spaces. This paper provides a novel algorithm to explicitly compute finite-horizon policies over continuous belief spaces, without restricting the space of policies. By marrying an efficient single-agent POMDP solver with a heuristic distributed POMDP policy-generation algorithm, locally optimal joint policies are obtained, each of which dominates within a different part of the belief region. We provide heuristics that significantly i...
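The abstract's claim that each locally optimal joint policy "dominates within a different part of the belief region" follows from a standard POMDP fact: a policy's expected value is linear in the belief, so comparing policies partitions the continuous belief simplex into dominance regions. Below is a minimal Python sketch of that idea; it is not the paper's algorithm, and the two-state domain, policy values, and function names are all hypothetical.

```python
# Minimal sketch (assumed example, not the paper's algorithm): each
# candidate joint policy's value is a linear function of the belief
# (an "alpha-vector"), so the belief simplex splits into regions,
# each dominated by a different policy. All numbers are hypothetical.
import numpy as np

# Hypothetical alpha-vectors: rows = candidate joint policies,
# columns = world states. alphas[p, s] is the expected value of
# following policy p when the true underlying state is s.
alphas = np.array([
    [10.0,  1.0],   # policy 0: best when state 0 is likely
    [ 4.0,  6.0],   # policy 1: best for intermediate beliefs
    [ 1.0, 11.0],   # policy 2: best when state 1 is likely
])

def dominant_policy(belief: np.ndarray) -> int:
    """Return the policy with the highest expected value at this
    belief point; the value of policy p is the dot product
    alphas[p] . belief, linear in the belief."""
    return int(np.argmax(alphas @ belief))

# Sweep the two-state belief simplex to see where each policy
# dominates: the simplex splits into contiguous dominance regions.
for p0 in np.linspace(0.0, 1.0, 11):
    b = np.array([p0, 1.0 - p0])
    print(f"P(s0)={p0:.1f} -> policy {dominant_policy(b)}")
```

Planning over the continuous belief space, as the paper advocates, amounts to producing a set of such policies whose dominance regions jointly cover the simplex, rather than optimizing for a single assumed starting belief point.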
Type Conference
Year 2006
Where ATAL
Publisher Springer
Authors Pradeep Varakantham, Ranjit Nair, Milind Tambe, Makoto Yokoo