Multi-armed bandits with episode context

A multi-armed bandit episode consists of n trials, each allowing selection of one of K arms, resulting in a payoff drawn from a distribution over [0, 1] associated with that arm. We assume contextual side information is available at the start of the episode. This context enables an arm predictor to identify possibly favorable arms, but predictions may be imperfect, so they need to be combined with further exploration during the episode. Our setting is an alternative to classical multi-armed bandits, which provide no contextual side information, and to contextual bandits, which provide new context at each individual trial. Multi-armed bandits with episode context can arise naturally, for example in computer Go, where context is used to bias move decisions made by a multi-armed bandit algorithm. The UCB1 algorithm for multi-armed bandits achieves worst-case O(√(Kn log(n))) regret. We seek to improve on this using episode context, particularly in the case where K is large. Using ...
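
As a point of reference, the following is a minimal Python sketch of the UCB1 baseline mentioned in the abstract: the plain mean-plus-exploration-bonus rule, without the episode context the paper builds on. The `pull_arm` callback and the Bernoulli arm probabilities in the usage example are hypothetical, chosen only to make the sketch runnable.

```python
import math
import random

def ucb1(pull_arm, K, n):
    """Run UCB1 for one n-trial episode over K arms.

    pull_arm(k) must return a payoff in [0, 1] for arm k (hypothetical
    callback, standing in for the bandit environment).
    Returns the total payoff collected over the episode.
    """
    counts = [0] * K   # number of times each arm was pulled
    sums = [0.0] * K   # cumulative payoff of each arm
    total = 0.0
    for t in range(n):
        if t < K:
            k = t  # initialization: pull each arm once
        else:
            # Pick the arm maximizing empirical mean plus exploration bonus.
            k = max(range(K), key=lambda a: sums[a] / counts[a]
                    + math.sqrt(2.0 * math.log(t) / counts[a]))
        payoff = pull_arm(k)
        counts[k] += 1
        sums[k] += payoff
        total += payoff
    return total

# Usage: three Bernoulli arms with hypothetical success probabilities.
probs = [0.2, 0.5, 0.8]
print(ucb1(lambda k: float(random.random() < probs[k]), K=3, n=1000))
```

The √(2 log t / counts[a]) term is the standard UCB1 confidence bonus, whose worst-case episode regret is the O(√(Kn log(n))) bound quoted above.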
Type Journal
Year 2011
Where AMAI
Publisher Springer
Authors Christopher D. Rosin