Sciweavers

ICML
2006
IEEE
Experience-efficient learning in associative bandit problems
We formalize the associative bandit problem framework introduced by Kaelbling as a learning-theory problem. The learning environment is modeled as a k-armed bandit where arm payof...
Alexander L. Strehl, Chris Mesterharm, Michael L. ...
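To ground the k-armed bandit setting this abstract refers to, here is a minimal, generic UCB1 baseline in Python. It is not the algorithm analyzed in the paper; the pull_arm callback, the arm probabilities, and the horizon are all hypothetical, and rewards are assumed to lie in [0, 1].

```python
import math
import random

def ucb1(pull_arm, k, horizon):
    """Generic UCB1 loop for a k-armed bandit (illustrative baseline only).

    pull_arm(i) is assumed to return a reward in [0, 1] for arm i.
    """
    counts = [0] * k      # number of times each arm was pulled
    means = [0.0] * k     # empirical mean reward of each arm

    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1   # pull every arm once to initialize
        else:
            # index = empirical mean + optimistic exploration bonus
            arm = max(range(k),
                      key=lambda i: means[i] + math.sqrt(2 * math.log(t) / counts[i]))
        r = pull_arm(arm)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]
    return means, counts

# Example: three Bernoulli arms with unknown success probabilities.
if __name__ == "__main__":
    probs = [0.2, 0.5, 0.8]
    means, counts = ucb1(lambda i: float(random.random() < probs[i]), k=3, horizon=5000)
    print(counts)  # pulls should concentrate on the best arm
```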
CORR
2010
Springer
On the Combinatorial Multi-Armed Bandit Problem with Markovian Rewards
We consider a combinatorial generalization of the classical multi-armed bandit problem that is defined as follows. There is a given bipartite graph of M users and N ≥ M resources. F...
Yi Gai, Bhaskar Krishnamachari, Mingyan Liu
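As a rough illustration of the combinatorial (matching) structure described above, the sketch below runs a UCB-style index policy over user–resource pairs and plays a max-weight matching each round. It is not the policy from the paper, which treats Markovian rewards; this assumes i.i.d. rewards, and sample_reward, M, N, and horizon are hypothetical names.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ucb_matching(sample_reward, M, N, horizon):
    """Illustrative index-based matching loop for M users and N >= M resources.

    sample_reward(i, j) is assumed to return an i.i.d. reward in [0, 1]
    for the user-resource pair (i, j).
    """
    counts = np.zeros((M, N))
    sums = np.zeros((M, N))

    for t in range(1, horizon + 1):
        with np.errstate(divide="ignore", invalid="ignore"):
            means = np.where(counts > 0, sums / counts, 0.0)
            bonus = np.where(counts > 0, np.sqrt(2 * np.log(t) / counts), np.inf)
        index = means + bonus                       # optimistic index per pair
        finite = np.where(np.isinf(index), 1e6, index)
        # Max-weight matching = min-cost assignment on the negated index.
        rows, cols = linear_sum_assignment(-finite)
        for i, j in zip(rows, cols):                # play the chosen matching
            r = sample_reward(i, j)
            counts[i, j] += 1
            sums[i, j] += r
    return counts
```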
ALT
2008
Springer
Active Learning in Multi-armed Bandits
In this paper we consider the problem of actively learning the mean values of distributions associated with a finite number of options (arms). The algorithms can select which opti...
András Antos, Varun Grover, Csaba Szepesvá...
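The active-learning objective here is to estimate all arm means well, not to maximize reward. The sketch below is a simple heuristic in that spirit, not the paper's algorithm: it repeatedly samples the arm whose mean estimate has the largest standard error. The sample callback, budget, and warmup are hypothetical.

```python
import random
import statistics

def active_mean_estimation(sample, k, budget, warmup=2):
    """Illustrative allocation rule for estimating all k arm means.

    sample(i) is assumed to draw one noisy observation from arm i.
    """
    obs = [[sample(i) for _ in range(warmup)] for i in range(k)]   # warm-up pulls
    for _ in range(budget - k * warmup):
        stderr = [statistics.stdev(o) / len(o) ** 0.5 for o in obs]
        i = max(range(k), key=lambda a: stderr[a])                 # most uncertain arm
        obs[i].append(sample(i))
    estimates = [statistics.mean(o) for o in obs]
    allocation = [len(o) for o in obs]
    return estimates, allocation

# Example: the high-variance arm should receive most of the budget.
if __name__ == "__main__":
    sigmas = [0.1, 2.0]
    est, alloc = active_mean_estimation(
        lambda i: random.gauss(0.0, sigmas[i]), k=2, budget=1000)
    print(est, alloc)
```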
COLT
2008
Springer
Adapting to a Changing Environment: the Brownian Restless Bandits
In the multi-armed bandit (MAB) problem there are k distributions associated with the rewards of playing each of k strategies (slot machine arms). The reward distributions are ini...
Aleksandrs Slivkins, Eli Upfal
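Since the reward distributions in this restless setting drift over time, stale observations become misleading. A common heuristic response (again, not the algorithm analyzed in the paper) is a sliding-window UCB that only trusts recent samples; pull_arm, horizon, and window below are hypothetical names.

```python
import math
from collections import deque

def sliding_window_ucb(pull_arm, k, horizon, window=200):
    """Illustrative sliding-window UCB for slowly drifting reward means.

    pull_arm(t, i) is assumed to return a reward in [0, 1] for arm i at time t;
    observations older than `window` pulls of an arm are discarded.
    """
    recent = [deque(maxlen=window) for _ in range(k)]   # last observations per arm
    for t in range(1, horizon + 1):
        if any(len(r) == 0 for r in recent):
            arm = next(i for i in range(k) if len(recent[i]) == 0)  # initialize
        else:
            arm = max(
                range(k),
                key=lambda i: sum(recent[i]) / len(recent[i])
                + math.sqrt(2 * math.log(min(t, window)) / len(recent[i])),
            )
        recent[arm].append(pull_arm(t, arm))
    return [sum(r) / len(r) for r in recent]
```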

Publication
Multi-Armed Bandit Mechanisms for Multi-Slot Sponsored Search Auctions
In pay-per-click sponsored search auctions, which are currently extensively used by search engines, the auction for a keyword involves a certain number of advertisers (say k) c...
Akash Das Sarma, Sujit Gujar, Y. Narahari
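For orientation only, the sketch below shows a naive explore-then-exploit slot allocation for k advertisers and m ≤ k slots. It is not the mechanism designed in the paper; in particular, the payment rule that makes such mechanisms truthful is omitted, the click-through-rate estimate pools clicks across slots, and get_click, bids, and the round counts are hypothetical.

```python
from itertools import cycle

def explore_then_exploit_slots(get_click, bids, m, rounds, explore_rounds=1000):
    """Illustrative slot allocation for k advertisers and m <= k slots.

    get_click(i, slot) is assumed to return 1 if advertiser i, shown in
    `slot`, is clicked in that round, else 0. Payments are not modeled.
    """
    k = len(bids)
    clicks = [0] * k
    impressions = [0] * k
    rotation = cycle(range(k))

    for t in range(rounds):
        if t < explore_rounds:
            # Exploration: rotate advertisers through the slots.
            shown = [next(rotation) for _ in range(m)]
        else:
            # Exploitation: rank by estimated click-through rate times bid.
            ctr = [clicks[i] / impressions[i] if impressions[i] else 0.0
                   for i in range(k)]
            shown = sorted(range(k), key=lambda i: ctr[i] * bids[i], reverse=True)[:m]
        for slot, i in enumerate(shown):
            impressions[i] += 1
            clicks[i] += get_click(i, slot)
    return clicks, impressions
```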