Implicit Imitation in Multiagent Reinforcement Learning

Imitation is actively being studied as an effective means of learning in multi-agent environments. It allows an agent to learn how to act well (perhaps optimally) by passively observing the actions of cooperative teachers or other more experienced agents in its environment. We propose a straightforward imitation mechanism called model extraction that can be integrated easily into standard model-based reinforcement learning algorithms. Roughly, by observing a mentor with similar capabilities, an agent can extract information about its own capabilities in unvisited parts of the state space. The extracted information can accelerate learning dramatically. We illustrate the benefits of model extraction by integrating it with prioritized sweeping, and demonstrating improved performance and convergence through observation of single and multiple mentors. Though we make some stringent assumptions regarding observability, possible interactions and common abilities, we briefly comment on extensions of ...
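The core idea described above, folding observed mentor transitions into the learner's own model and propagating value changes with prioritized sweeping, can be sketched in a few lines. The following is an illustrative simplification, not the paper's actual algorithm: the `ImitationAgent` class, the choice to credit unobserved mentor actions to a single hypothetical action slot, and the tabular maximum-likelihood model are all assumptions made for this sketch.

```python
import heapq
from collections import defaultdict

class ImitationAgent:
    """Sketch of model extraction plus prioritized sweeping.

    The agent keeps transition counts from its own experience and, under
    the homogeneity (common-abilities) assumption, also folds observed
    mentor transitions into the same model, giving it information about
    parts of the state space it has never visited itself.
    """

    def __init__(self, n_states, n_actions, gamma=0.95, theta=1e-4):
        self.nS, self.nA = n_states, n_actions
        self.gamma, self.theta = gamma, theta
        self.counts = defaultdict(lambda: defaultdict(int))  # (s, a) -> {s': count}
        self.rewards = defaultdict(float)                    # s' -> observed reward
        self.preds = defaultdict(set)                        # s' -> {(s, a)} predecessors
        self.V = [0.0] * n_states

    def _record(self, s, a, r, s2):
        self.counts[(s, a)][s2] += 1
        self.rewards[s2] = r
        self.preds[s2].add((s, a))

    def observe_self(self, s, a, r, s2):
        """Update the model from the agent's own experience."""
        self._record(s, a, r, s2)
        self._sweep(s)

    def observe_mentor(self, s, r, s2):
        """Model extraction: the mentor's action is not observed, so the
        transition is credited to a hypothetical action slot (here: 0),
        a simplification of the paper's augmented-model idea."""
        self._record(s, 0, r, s2)
        self._sweep(s)

    def _q(self, s, a):
        # Maximum-likelihood one-step backup from the learned model.
        c = self.counts[(s, a)]
        total = sum(c.values())
        if total == 0:
            return 0.0
        return sum(n / total * (self.rewards[s2] + self.gamma * self.V[s2])
                   for s2, n in c.items())

    def _sweep(self, s, max_updates=100):
        # Prioritized sweeping: back up the changed state first, then push
        # its predecessors with priority equal to the size of the change.
        pq = [(-1.0, s)]
        updates = 0
        while pq and updates < max_updates:
            _, s = heapq.heappop(pq)
            old = self.V[s]
            self.V[s] = max(self._q(s, a) for a in range(self.nA))
            delta = abs(self.V[s] - old)
            updates += 1
            if delta > self.theta:
                for (sp, _) in self.preds[s]:
                    heapq.heappush(pq, (-delta, sp))
```

For instance, an agent watching a mentor walk a five-state chain toward a rewarding goal state obtains nonzero value estimates along the whole chain before ever acting itself, which is where the dramatic acceleration reported in the paper comes from.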
Bob Price, Craig Boutilier
Added 17 Nov 2009
Updated 17 Nov 2009
Type Conference
Year 1999
Where ICML
Authors Bob Price, Craig Boutilier