Reducing the complexity of multiagent reinforcement learning

It is known that the complexity of reinforcement learning algorithms, such as Q-learning, may be exponential in the number of states of the environment. It was shown, however, that the learning complexity for goal-directed problems can be substantially reduced by initializing the Q-values with a “good” approximative function. In the multiagent case, such a good approximation exists for a large class of problems, namely goal-directed stochastic games. These games can reflect, for example, the coordination and common-interest problems of cooperative robotics. The approximative function for these games is simply the solution of the relaxed, single-agent problem, which each agent can easily find individually. In this article, we show that (1) an optimal single-agent solution is a “good” approximation for goal-directed stochastic games with the action-penalty representation, and (2) the learning complexity is reduced when the learning is initialized with this approximative function...
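The idea in the abstract can be illustrated with a minimal sketch: Q-learning on a toy goal-directed task with action-penalty rewards (-1 per step), where the Q-table is seeded with a relaxed single-agent solution rather than zeros. All names here (`chain_mdp`-style helpers, `single_agent_init`, the chain environment itself) are illustrative assumptions, not the paper's actual setup or code.

```python
import numpy as np

N = 8               # chain states 0..7, goal at state N-1
ACTIONS = [-1, +1]  # move left, move right

def step(s, a):
    """Deterministic chain dynamics with action-penalty reward (-1 per step)."""
    s2 = min(max(s + ACTIONS[a], 0), N - 1)
    return s2, -1.0, s2 == N - 1

def single_agent_init():
    # Relaxed single-agent solution: Q(s, a) = -(1 + remaining distance
    # to the goal after taking action a). Each agent can compute this
    # individually, and it serves as the "good" initial approximation.
    q = np.zeros((N, 2))
    for s in range(N):
        for a in range(2):
            s2 = min(max(s + ACTIONS[a], 0), N - 1)
            q[s, a] = -1.0 - (N - 1 - s2)
    return q

def q_learning(q, episodes=200, alpha=0.5, gamma=1.0, eps=0.1, seed=0):
    """Standard tabular Q-learning, starting from the given Q-table."""
    rng = np.random.default_rng(seed)
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(q[s]))
            s2, r, done = step(s, a)
            target = r if done else r + gamma * np.max(q[s2])
            q[s, a] += alpha * (target - q[s, a])
            s = s2
    return q
```

With the informed initialization, the greedy policy is already optimal before any learning (it walks straight to the goal in N-1 steps), so Q-learning has far less to correct than when starting from zeros.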
Andriy Burkov, Brahim Chaib-draa
Added 07 Jun 2010
Updated 07 Jun 2010
Type Conference
Year 2007
Where ATAL