Nonparametric Bandits with Covariates

We consider a bandit problem which involves sequential sampling from two populations (arms). Each arm produces a noisy reward realization which depends on an observable random covariate. The goal is to maximize cumulative expected reward. We derive general lower bounds on the performance of any admissible policy, and develop an algorithm whose performance achieves the order of said lower bound up to logarithmic terms. This is done by decomposing the global problem into suitably "localized" bandit problems. Proofs blend ideas from nonparametric statistics and traditional methods used in the bandit literature.

Mathematics Subject Classification: Primary 62G08; Secondary 62L12, 62L05, 62C20.

Key Words: Bandit, regression, regret, inferior sampling rate, minimax rate.
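The "localization" idea in the abstract can be illustrated by a simple sketch: partition the covariate space into bins and run a standard UCB rule independently in each bin. This is a hypothetical illustration of the general approach, not the authors' exact policy; the bin count, noise level, and reward functions below are assumptions for the example.

```python
import numpy as np

def binned_ucb(covariates, reward_fns, n_bins=8, noise=0.1, seed=0):
    """Localized two-armed bandit sketch: partition [0, 1] into bins and
    run a UCB1-style rule separately within each bin.
    (Illustrative only; not the paper's exact policy.)"""
    rng = np.random.default_rng(seed)
    counts = np.zeros((n_bins, 2))  # pulls per (bin, arm)
    sums = np.zeros((n_bins, 2))    # cumulative reward per (bin, arm)
    total = 0.0
    for t, x in enumerate(covariates, start=1):
        b = min(int(x * n_bins), n_bins - 1)  # bin index of covariate
        if counts[b].min() == 0:
            # play each arm once in a new bin before applying UCB
            arm = int(np.argmin(counts[b]))
        else:
            means = sums[b] / counts[b]
            bonus = np.sqrt(2.0 * np.log(t) / counts[b])
            arm = int(np.argmax(means + bonus))
        # noisy reward whose mean depends on the covariate x
        r = reward_fns[arm](x) + noise * rng.standard_normal()
        counts[b, arm] += 1
        sums[b, arm] += r
        total += r
    return total
```

With crossing mean-reward functions such as f0(x) = x and f1(x) = 1 - x, the best arm changes with the covariate, so any single global UCB rule must be suboptimal on part of the space; the per-bin policies adapt to the locally better arm.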
Philippe Rigollet, Assaf Zeevi
Added 01 Mar 2011
Updated 01 Mar 2011
Type Journal
Year 2010
Where COLT
Authors Philippe Rigollet, Assaf Zeevi