Fast Planning in Stochastic Games

Stochastic games generalize Markov decision processes (MDPs) to a multiagent setting by allowing the state transitions to depend jointly on all player actions, and having rewards determined by multiplayer matrix games at each state. We consider the problem of computing Nash equilibria in stochastic games, the analogue of planning in MDPs. We begin by providing a generalization of finite-horizon value iteration that computes a Nash strategy for each player in general-sum stochastic games. The algorithm takes an arbitrary Nash selection function as input, which allows the translation of local choices between multiple Nash equilibria into the selection of a single global Nash equilibrium. Our main technical result is an algorithm for computing near-Nash equilibria in large or infinite state spaces. This algorithm builds on our finite-horizon value iteration algorithm, and adapts the sparse sampling methods of Kearns, Mansour and Ng (1999) to stochastic games. We conclude by describing a counterexample s...
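The abstract's core procedure is backward induction over stage games: at each state and horizon step, form a bimatrix game from immediate rewards plus expected continuation values, then let a Nash selection function pick one of its equilibria. The sketch below illustrates that loop for a two-player general-sum game; the data layout (P, R1, R2), the discount factor, and the pure-strategy placeholder selection function are assumptions made for illustration, not the authors' implementation (which admits arbitrary, generally mixed, Nash selection functions).

# Hedged sketch: finite-horizon value iteration for a two-player general-sum
# stochastic game, parameterized by a Nash selection function. Data layout:
#   R1[s][a1][a2], R2[s][a1][a2]  -- stage rewards for players 1 and 2
#   P[s][a1][a2]                  -- dict mapping next state -> probability
import itertools

def backup_matrices(state, V_next, R1, R2, P, gamma):
    # Build the stage game at `state`: immediate reward plus discounted
    # expected continuation value under the next-step value functions.
    n1, n2 = len(R1[state]), len(R1[state][0])
    M1 = [[0.0] * n2 for _ in range(n1)]
    M2 = [[0.0] * n2 for _ in range(n1)]
    for a1 in range(n1):
        for a2 in range(n2):
            cont1 = sum(p * V_next[0][s2] for s2, p in P[state][a1][a2].items())
            cont2 = sum(p * V_next[1][s2] for s2, p in P[state][a1][a2].items())
            M1[a1][a2] = R1[state][a1][a2] + gamma * cont1
            M2[a1][a2] = R2[state][a1][a2] + gamma * cont2
    return M1, M2

def pure_nash_selection(M1, M2):
    # Toy Nash selection function: return the first pure-strategy equilibrium
    # of the bimatrix game (M1, M2). The paper allows arbitrary (mixed)
    # selection functions; this placeholder is only for illustration.
    n1, n2 = len(M1), len(M1[0])
    for a1, a2 in itertools.product(range(n1), range(n2)):
        best1 = all(M1[a1][a2] >= M1[b1][a2] for b1 in range(n1))
        best2 = all(M2[a1][a2] >= M2[a1][b2] for b2 in range(n2))
        if best1 and best2:
            return (a1, a2), (M1[a1][a2], M2[a1][a2])
    raise ValueError("no pure equilibrium; a mixed-equilibrium solver is needed")

def finite_horizon_value_iteration(states, R1, R2, P, horizon, gamma=0.95,
                                   nash_select=pure_nash_selection):
    # Backward induction: at each horizon step, solve the stage game at every
    # state with the Nash selection function and record the chosen equilibrium.
    V = ({s: 0.0 for s in states}, {s: 0.0 for s in states})  # terminal values
    policy = {}
    for step in range(1, horizon + 1):
        V_new = ({}, {})
        for s in states:
            M1, M2 = backup_matrices(s, V, R1, R2, P, gamma)
            actions, (v1, v2) = nash_select(M1, M2)
            policy[(step, s)] = actions  # equilibrium play with `step` stages to go
            V_new[0][s], V_new[1][s] = v1, v2
        V = V_new
    return V, policy

With a small toy game this runs directly; the paper's sparse-sampling extension for large or infinite state spaces would, roughly, replace the exact expectation in backup_matrices with an average over sampled next states, in the spirit of Kearns, Mansour and Ng (1999).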
Type: Conference
Year: 2000
Where: UAI
Authors: Michael J. Kearns, Yishay Mansour, Satinder P. Singh