Sciweavers

IJCAI
2003

Taming Decentralized POMDPs: Towards Efficient Policy Computation for Multiagent Settings

The problem of deriving joint policies for a group of agents that maximize some joint reward function can be modeled as a decentralized partially observable Markov decision process (POMDP). Yet, despite the growing importance and applications of decentralized POMDP models in the multiagent arena, few algorithms have been developed for efficiently deriving joint policies for these models. This paper presents a new class of locally optimal algorithms called "Joint Equilibrium-based search for policies (JESP)". We first describe an exhaustive version of JESP and subsequently a novel dynamic programming approach to JESP. Our complexity analysis reveals the potential for exponential speedups due to the dynamic programming approach. These theoretical results are verified via empirical comparisons of the two JESP versions with each other and with a globally optimal brute-force search algorithm. Finally, we prove piecewise linearity and convexity (PWLC) properties, thus taking steps t...
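The core idea behind JESP — alternate over agents, computing each agent's best response while the others' policies are held fixed, until no agent can improve — can be illustrated on a toy problem. The sketch below uses a one-shot two-agent setting with an illustrative joint reward table (the names, actions, and rewards are assumptions for demonstration, not from the paper, and the real algorithm searches over full Dec-POMDP policies rather than single actions):

```python
# Minimal sketch of JESP-style alternating best response on a toy
# two-agent problem. Illustrative only; a real JESP implementation
# searches over multi-step policies conditioned on observation histories.

# Hypothetical joint reward table: maps a joint action to a shared reward.
REWARD = {
    ("a", "a"): 2.0,
    ("a", "b"): 0.0,
    ("b", "a"): 0.0,
    ("b", "b"): 3.0,
}
ACTIONS = ["a", "b"]


def best_response(agent, joint):
    """Exhaustively search one agent's choice, holding the other fixed."""
    best = None
    for act in ACTIONS:
        cand = list(joint)
        cand[agent] = act
        r = REWARD[tuple(cand)]
        if best is None or r > best[1]:
            best = (tuple(cand), r)
    return best


def jesp(joint=("a", "b")):
    """Alternate best responses until no single agent can improve.

    The fixed point is a joint equilibrium: locally optimal, but not
    necessarily the global optimum (it depends on the starting point).
    """
    value = REWARD[joint]
    improved = True
    while improved:
        improved = False
        for agent in range(2):
            cand, r = best_response(agent, joint)
            if r > value:
                joint, value = cand, r
                improved = True
    return joint, value
```

Starting from `("a", "b")` the loop reaches `("b", "b")` with value 3.0, while starting from `("a", "a")` it stays at the locally optimal `("a", "a")` with value 2.0 — which is exactly why JESP is a locally, not globally, optimal method.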
Added: 31 Oct 2010
Updated: 31 Oct 2010
Type: Conference
Year: 2003
Where: IJCAI
Authors: Ranjit Nair, Milind Tambe, Makoto Yokoo, David V. Pynadath, Stacy Marsella