How to Explore your Opponent's Strategy (almost) Optimally

This work presents a lookahead-based exploration strategy for a model-based learning agent that enables exploration of the opponent's behavior during interaction in a multi-agent system. Instead of holding a single model, the model-based agent maintains a mixed opponent model: a distribution over a set of models that reflects its uncertainty about the opponent's strategy. Every action is evaluated according to its long-run contribution both to the expected utility and to the agent's knowledge of the opponent's strategy. We present an efficient algorithm that returns an almost-optimal exploration strategy against a given mixed model, and a learning method for acquiring a mixed model consistent with the opponent's past behavior. We report experimental results in the Iterated Prisoner's Dilemma game that demonstrate the superiority of the lookahead-based exploration strategy over other exploration methods.
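To make the "mixed opponent model" idea concrete, here is a minimal illustrative sketch (not the paper's actual algorithm): a distribution over a few candidate Iterated Prisoner's Dilemma strategies is maintained as a Bayesian posterior and updated from the opponent's observed moves. The candidate strategies and the noise parameter `EPS` are assumptions chosen for this example.

```python
# Illustrative sketch of a mixed opponent model for the Iterated
# Prisoner's Dilemma: a posterior over candidate opponent strategies,
# updated after each observed opponent move. This is an assumption-laden
# toy, not the authors' algorithm.

EPS = 0.05  # assumed chance a model mispredicts a move; keeps posteriors nonzero

# Each candidate model predicts the opponent's next move ('C' or 'D')
# from our previous move.
def tit_for_tat(my_prev):
    return my_prev if my_prev else 'C'  # copies our last move

def always_defect(my_prev):
    return 'D'

def always_cooperate(my_prev):
    return 'C'

MODELS = {
    'TitForTat': tit_for_tat,
    'AllD': always_defect,
    'AllC': always_cooperate,
}

def update_mixed_model(posterior, my_prev, observed):
    """Bayesian update of the distribution over opponent models."""
    unnormalized = {}
    for name, model in MODELS.items():
        likelihood = (1 - EPS) if model(my_prev) == observed else EPS
        unnormalized[name] = posterior[name] * likelihood
    total = sum(unnormalized.values())
    return {name: p / total for name, p in unnormalized.items()}

# Start from a uniform mixed model; observe a defection after we cooperated.
prior = {name: 1 / len(MODELS) for name in MODELS}
posterior = update_mixed_model(prior, 'C', 'D')
```

After one observation inconsistent with the cooperative models, the posterior mass shifts toward `AllD`; a lookahead-based explorer would then weigh actions by how much further they are expected to sharpen this distribution, in addition to their direct payoff.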
David Carmel, Shaul Markovitch
Added 01 Nov 2010
Updated 01 Nov 2010
Type Conference
Year 1998
Authors David Carmel, Shaul Markovitch