Value-based observation compression for DEC-POMDPs

Representing agent policies compactly is essential for improving the scalability of multi-agent planning algorithms. In this paper, we focus on developing a pruning technique that allows us to merge certain observations within agent policies while minimizing the loss of value. This is particularly important for solving finite-horizon decentralized POMDPs, where agent policies are represented as trees and the size of a policy tree grows exponentially with the number of observations. We introduce a value-based observation compression technique that prunes the least valuable observations while maintaining an error bound on the value lost as a result of pruning. We analyze the characteristics of this pruning strategy and show empirically that it is effective. As a result, compact policies allow us to obtain significantly higher values than the best existing DEC-POMDP algorithm.
Alan Carlin, Shlomo Zilberstein
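
To make the pruning idea concrete, here is a minimal sketch in Python, assuming a policy-tree node that maps each observation to a subtree; note that a horizon-h tree over |Ω| observations has (|Ω|^h - 1)/(|Ω| - 1) nodes, so merging observations shrinks the branching factor directly. The names (PolicyNode, merge_observations, value_fn, epsilon) and the greedy accept/reject loop are illustrative assumptions, not the paper's algorithm.

from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class PolicyNode:
    action: int
    # One subtree per observation; merged observations share a subtree.
    children: Dict[int, "PolicyNode"] = field(default_factory=dict)

def merge_observations(node: PolicyNode,
                       value_fn: Callable[[PolicyNode], float],
                       epsilon: float) -> float:
    """Greedily redirect observation branches onto one another, accepting
    a merge only while the cumulative value loss stays within epsilon.
    Returns the total loss actually incurred."""
    loss = 0.0
    obs = sorted(node.children)
    for i, o1 in enumerate(obs):
        for o2 in obs[i + 1:]:
            if node.children[o1] is node.children[o2]:
                continue  # these observations already share a subtree
            baseline = value_fn(node)               # value before merging
            saved = node.children[o2]
            node.children[o2] = node.children[o1]   # tentative merge: o2 -> o1
            step_loss = baseline - value_fn(node)
            if loss + step_loss <= epsilon:
                loss += step_loss                   # accept: bound still holds
            else:
                node.children[o2] = saved           # reject: would exceed bound
    return loss

# Toy run: this value_fn counts distinct subtrees, so each merge costs
# exactly 1; with epsilon = 1.0 only the first merge is accepted.
root = PolicyNode(0, {0: PolicyNode(1), 1: PolicyNode(1), 2: PolicyNode(2)})
distinct = lambda n: float(len({id(c) for c in n.children.values()}))
print(merge_observations(root, distinct, epsilon=1.0))  # -> 1.0

In a real solver, value_fn would evaluate the joint policy from the initial belief of the DEC-POMDP, which is where the cost of testing candidate merges comes from; the hypothetical toy value function above only serves to make the epsilon bound easy to trace.
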
Type: Conference
Year: 2008
Where: ATAL
Publisher: Springer
Authors: Alan Carlin, Shlomo Zilberstein