Asymmetric Multiagent Reinforcement Learning

A gradient-based method for both symmetric and asymmetric multiagent reinforcement learning is introduced in this paper. Symmetric multiagent reinforcement learning addresses the case where all agents involved in the learning task have equal information states. In asymmetric multiagent reinforcement learning, by contrast, the information states are not equal: some agents (leaders) try to encourage agents with less information (followers) to select actions that lead to an improved overall utility value for the leaders. In both cases the number of parameters to learn can be very large, so parametric function approximation methods are needed to represent the value functions of the agents. The method proposed in this paper is based on the VAPS framework, extended to utilize the theory of Markov games, which is a natural basis for multiagent reinforcement learning.
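The leader–follower interaction described in the abstract corresponds to a Stackelberg solution of a stage game: the leader commits to an action, the follower observes it and plays a best response, and the leader chooses the commitment that maximizes its own payoff given that response. The sketch below illustrates this idea on a single hypothetical bimatrix game; it is not the paper's VAPS-based gradient method, and the payoff matrices are invented for illustration only.

```python
import numpy as np

# Hypothetical payoff matrices (not from the paper): rows index the
# leader's actions, columns index the follower's actions.
leader_payoff = np.array([[3.0, 1.0, 0.0],
                          [2.0, 4.0, 1.0],
                          [0.0, 2.0, 5.0]])
follower_payoff = np.array([[2.0, 0.0, 1.0],
                            [1.0, 3.0, 0.0],
                            [0.0, 1.0, 4.0]])

def stackelberg(leader_payoff, follower_payoff):
    """Enumerate leader actions; for each, the follower (who observes
    the leader's choice) plays its best response. The leader commits
    to the action whose induced outcome maximizes its own payoff."""
    best = None
    for a in range(leader_payoff.shape[0]):
        b = int(np.argmax(follower_payoff[a]))  # follower's best response
        if best is None or leader_payoff[a, b] > best[2]:
            best = (a, b, float(leader_payoff[a, b]))
    return best

print(stackelberg(leader_payoff, follower_payoff))  # → (2, 2, 5.0)
```

In the full asymmetric learning setting of the paper, this kind of Stackelberg computation would replace the max or Nash operator in the agents' value-function updates at each stage of the Markov game, with the payoff matrices given by learned Q-values rather than fixed numbers.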
Ville Könönen
Added 04 Jul 2010
Updated 04 Jul 2010
Type Conference
Year 2003
Where IAT
Authors Ville Könönen