The Non-Bayesian Restless Multi-Armed Bandit: a Case of Near-Logarithmic Regret

In the classic Bayesian restless multi-armed bandit (RMAB) problem, there are N arms, with the rewards on all arms evolving at each time as Markov chains with known parameters. A player seeks to activate K ≥ 1 arms at each time in order to maximize the expected total reward obtained over multiple plays. RMAB is a challenging problem that is known to be PSPACE-hard in general. We consider in this work the even harder non-Bayesian RMAB, in which the parameters of the Markov chain are assumed to be unknown a priori. We develop an original approach to this problem that is applicable when the corresponding Bayesian problem has the structure that, depending on the known parameter values, the optimal solution is one of a prescribed finite set of policies. In such settings, we propose to learn the optimal policy for the non-Bayesian RMAB by employing a suitable meta-policy which treats each policy from this finite set as an arm in a different non-Bayesian multi-armed bandit problem for which a sin...
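The meta-policy idea lends itself to a short illustration. Below is a minimal Python sketch, assuming a UCB1-style index is run over the finite set of candidate policies, with each meta-play executing the chosen policy for a fixed epoch and recording its average reward. The run_policy interface and the epoch structure are hypothetical stand-ins, not the paper's actual construction, and the paper's meta-policy (and its regret analysis for Markovian rather than i.i.d. rewards) may differ.

import math

def ucb1_meta_policy(policies, run_policy, horizon, epoch_len=100):
    """Select among a finite set of candidate RMAB policies with a
    UCB1-style index (a sketch; the paper's meta-policy may differ).

    policies   -- finite list of candidate policies (the meta-arms)
    run_policy -- callable(policy, epoch_len) -> average reward obtained
                  by executing `policy` on the RMAB for `epoch_len` steps
                  (assumed interface to an underlying simulator or system)
    horizon    -- total number of epochs to play
    """
    n = len(policies)
    counts = [0] * n    # number of epochs each meta-arm has been played
    means = [0.0] * n   # empirical mean epoch reward of each meta-arm

    # Initialization: play each meta-arm (candidate policy) once.
    for i in range(n):
        means[i] = run_policy(policies[i], epoch_len)
        counts[i] = 1

    for t in range(n, horizon):
        # UCB1 index: empirical mean plus exploration bonus. Epoch rewards
        # from a Markov chain are not i.i.d., which is the core difficulty
        # the paper addresses; this sketch glosses over that point.
        ucb = [means[i] + math.sqrt(2.0 * math.log(t) / counts[i])
               for i in range(n)]
        i = max(range(n), key=lambda j: ucb[j])
        r = run_policy(policies[i], epoch_len)
        counts[i] += 1
        means[i] += (r - means[i]) / counts[i]  # incremental mean update

    return max(range(n), key=lambda j: means[j])  # index of best policy

Playing each candidate policy for a whole epoch, rather than a single step, amortizes the cost of switching policies and gives the underlying Markov chains time to mix before the epoch's average reward is recorded.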
Type: Journal
Year: 2010
Where: CORR
Authors: Wenhan Dai, Yi Gai, Bhaskar Krishnamachari, Qing Zhao