
NIPS 2008

Algorithms for Infinitely Many-Armed Bandits

We consider multi-armed bandit problems where the number of arms is larger than the possible number of experiments. We make a stochastic assumption on the mean reward of a newly selected arm, which characterizes its probability of being a near-optimal arm. Our assumption is weaker than in previous works. We describe algorithms based on upper confidence bounds applied to a restricted set of randomly selected arms and provide upper bounds on the resulting expected regret. We also derive a lower bound that matches (up to a logarithmic factor) the upper bound in some cases.
Yizao Wang, Jean-Yves Audibert, Rémi Munos
Type: Conference
Year: 2008
Where: NIPS
Authors: Yizao Wang, Jean-Yves Audibert, Rémi Munos
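
The abstract describes the core algorithmic idea: draw a restricted set of arms at random from the (effectively infinite) pool and run an upper-confidence-bound policy on that subset. Below is a minimal Python sketch of that idea, not the authors' exact algorithm: the subset size `num_arms`, the exploration constant `c`, and the Bernoulli arm model with uniformly distributed means are illustrative assumptions.

```python
import math
import random


def ucb_on_random_subset(draw_new_arm, horizon, num_arms, c=2.0):
    """Run a UCB index policy on `num_arms` arms drawn at random from the pool.

    `draw_new_arm` returns a callable that samples a reward in [0, 1]
    from a freshly drawn arm.  (Sketch only; parameters are assumptions.)
    """
    arms = [draw_new_arm() for _ in range(num_arms)]  # restricted random subset
    counts = [0] * num_arms        # number of pulls per arm
    sums = [0.0] * num_arms        # cumulative reward per arm
    total_reward = 0.0

    for t in range(1, horizon + 1):
        if t <= num_arms:
            i = t - 1              # pull each arm once to initialize
        else:
            # UCB index: empirical mean plus an exploration bonus
            i = max(range(num_arms),
                    key=lambda j: sums[j] / counts[j]
                                  + math.sqrt(c * math.log(t) / counts[j]))
        r = arms[i]()
        counts[i] += 1
        sums[i] += r
        total_reward += r
    return total_reward


# Illustrative arm model: each new arm is Bernoulli with mean ~ Uniform(0, 1),
# so a fresh arm is epsilon-optimal with probability roughly epsilon.
def draw_bernoulli_arm():
    mu = random.random()
    return lambda: 1.0 if random.random() < mu else 0.0


if __name__ == "__main__":
    n = 10_000
    # One natural choice in this setting is a subset of size on the order of sqrt(n).
    reward = ucb_on_random_subset(draw_bernoulli_arm, horizon=n,
                                  num_arms=int(math.sqrt(n)))
    print(f"total reward over {n} rounds: {reward:.0f}")
```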