
AAMAS 2005, Springer

Learning to Coordinate Using Commitment Sequences in Cooperative Multi-agent Systems

We report on an investigation of the learning of coordination in cooperative multi-agent systems. Specifically, we study solutions that are applicable to independent agents, i.e., agents that do not observe one another's actions. In previous research (Kapetanakis and Kudenko, 2002) we presented a reinforcement learning approach that converges to the optimal joint action even in scenarios with high miscoordination costs; however, that approach failed in fully stochastic environments. In this paper, we present a novel approach based on reward estimation with a shared action-selection protocol. The new technique is applicable in fully stochastic environments where mutual observation of actions is not possible. We demonstrate empirically that with our approach the agents almost always converge to the optimal joint action, even in difficult stochastic scenarios with high miscoordination penalties.
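The abstract does not spell out the protocol, so the sketch below is only a rough illustration of the setting it describes, not the paper's commitment-sequence method. It shows two independent epsilon-greedy learners estimating per-action mean rewards in a noisy climbing game; the payoff matrix, noise model, epsilon, and all names are assumptions made for the example.

```python
# Hypothetical sketch of the problem setting (NOT the paper's commitment-sequence
# protocol): two independent learners see only their own action and the shared
# reward, and keep running mean-reward estimates per action.
import random

# Climbing-game payoffs (assumed values): joint action (0, 0) is optimal but is
# surrounded by large miscoordination penalties.
PAYOFF = [[11, -30, 0],
          [-30, 7, 6],
          [0, 0, 5]]

def noisy_reward(a, b):
    # Assumed stochastic variant: nominal payoff plus symmetric noise.
    return PAYOFF[a][b] + random.choice([-4, 4])

class IndependentLearner:
    """Agent that observes only its own action and the common reward."""
    def __init__(self, n_actions=3, epsilon=0.1):
        self.counts = [0] * n_actions
        self.means = [0.0] * n_actions
        self.epsilon = epsilon

    def select(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.means))
        return max(range(len(self.means)), key=lambda a: self.means[a])

    def update(self, action, reward):
        # Incremental running-mean estimate of this action's reward.
        self.counts[action] += 1
        self.means[action] += (reward - self.means[action]) / self.counts[action]

if __name__ == "__main__":
    agents = [IndependentLearner(), IndependentLearner()]
    for _ in range(5000):
        a0, a1 = agents[0].select(), agents[1].select()
        r = noisy_reward(a0, a1)  # common payoff: both agents receive r
        agents[0].update(a0, r)
        agents[1].update(a1, r)
    print("agent 0 estimates:", [round(m, 1) for m in agents[0].means])
```

Note that this naive baseline tends to fail here: without a shared action-selection schedule, each agent's reward estimates are confounded by the other agent's exploration and by the reward noise, which is precisely the difficulty the paper's shared protocol is designed to address.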
Type Conference
Year 2005
Where AAMAS
Authors Spiros Kapetanakis, Daniel Kudenko, Malcolm J. A. Strens