CORR
2010

On the Combinatorial Multi-Armed Bandit Problem with Markovian Rewards

We consider a combinatorial generalization of the classical multi-armed bandit problem that is defined as follows. There is a given bipartite graph of M users and N ≥ M resources. For each user-resource pair (i, j), there is an associated state that evolves as an aperiodic irreducible finite-state Markov chain with unknown parameters, with transitions occurring each time the particular user i is allocated resource j. The user i receives a reward that depends on the corresponding state each time it is allocated the resource j. The system objective is to learn the best matching of users to resources so that the long-term sum of the rewards received by all users is maximized. This corresponds to minimizing regret, defined here as the gap between the expected total reward that can be obtained by the best-possible static matching and the expected total reward that can be achieved by a given algorithm. We present a polynomial-storage and polynomial-complexity-per-step matching-learning algorithm...
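To make the problem setup concrete, here is a minimal toy sketch (not the authors' algorithm): each (user, resource) pair carries a two-state Markov chain whose state advances only when that pair is played, a UCB-style index is maintained per pair, and each round the matching maximizing the sum of indices is found by brute force over permutations. All parameter values, the index rule, and the brute-force matching are illustrative assumptions chosen for small M and N.

```python
import itertools
import math
import random

# Toy combinatorial bandit with Markovian rewards (illustrative sketch).
# Each (user, resource) pair has a 2-state Markov chain; its state
# advances only when user i is allocated resource j, and the reward
# observed is a function of the state.

M, N = 2, 3          # M users, N >= M resources (small, for brute force)
random.seed(0)

# Hypothetical transition probabilities: p01[i][j] = P(0 -> 1), p10 = P(1 -> 0)
p01 = [[random.uniform(0.2, 0.8) for _ in range(N)] for _ in range(M)]
p10 = [[random.uniform(0.2, 0.8) for _ in range(N)] for _ in range(M)]
state = [[0] * N for _ in range(M)]     # current state of each chain
reward_of_state = (0.0, 1.0)            # reward in state 0 / state 1

counts = [[0] * N for _ in range(M)]    # times pair (i, j) was played
means = [[0.0] * N for _ in range(M)]   # empirical mean reward per pair

def step_chain(i, j):
    """Advance chain (i, j) one step and return the reward observed."""
    if state[i][j] == 0 and random.random() < p01[i][j]:
        state[i][j] = 1
    elif state[i][j] == 1 and random.random() < p10[i][j]:
        state[i][j] = 0
    return reward_of_state[state[i][j]]

def best_matching(index):
    """Brute-force max-weight matching of M users to N resources."""
    best, best_val = None, -math.inf
    for perm in itertools.permutations(range(N), M):
        val = sum(index[i][perm[i]] for i in range(M))
        if val > best_val:
            best, best_val = perm, val
    return best

total = 0.0
T = 2000
for t in range(1, T + 1):
    # UCB-style index: empirical mean plus an exploration bonus;
    # unplayed pairs get an infinite index so they are tried first.
    idx = [[means[i][j] + math.sqrt(2 * math.log(t) / counts[i][j])
            if counts[i][j] > 0 else math.inf
            for j in range(N)] for i in range(M)]
    match = best_matching(idx)
    for i, j in enumerate(match):
        r = step_chain(i, j)
        counts[i][j] += 1
        means[i][j] += (r - means[i][j]) / counts[i][j]
        total += r

print(round(total / T, 3))  # average per-round sum of rewards
```

The brute-force matching is exponential in M; the point of the paper's algorithm is precisely to avoid this, achieving polynomial storage and polynomial complexity per step.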
Yi Gai, Bhaskar Krishnamachari, Mingyan Liu
Added 29 May 2011
Updated 29 May 2011
Type Journal
Year 2010
Where CORR
Authors Yi Gai, Bhaskar Krishnamachari, Mingyan Liu