Sciweavers

TMC 2011
Cognitive Medium Access: Exploration, Exploitation, and Competition
— This paper establishes the equivalence between cognitive medium access and the competitive multi-armed bandit problem. First, the scenario in which a single cognitive user wish...
Lifeng Lai, Hesham El Gamal, Hai Jiang, H. Vincent Poor
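A rough sketch of the channels-as-arms view this abstract describes, under assumed channel idle probabilities and a standard UCB1 index rather than the paper's own access policy:

```python
import math
import random

# Hypothetical setup: each licensed channel is a bandit arm whose reward is 1
# when the cognitive user transmits successfully (channel idle) and 0 otherwise.
CHANNEL_IDLE_PROB = [0.2, 0.5, 0.8]  # assumed values, unknown to the user

def sense_and_transmit(channel):
    """Simulated environment: success iff the chosen channel happens to be idle."""
    return 1 if random.random() < CHANNEL_IDLE_PROB[channel] else 0

def cognitive_access(horizon=10_000):
    """Single cognitive user picking channels with a standard UCB1 index."""
    k = len(CHANNEL_IDLE_PROB)
    counts, sums = [0] * k, [0.0] * k
    for t in range(1, horizon + 1):
        if t <= k:                       # try each channel once to initialize
            channel = t - 1
        else:                            # otherwise pick the highest UCB index
            channel = max(range(k), key=lambda a: sums[a] / counts[a]
                          + math.sqrt(2 * math.log(t) / counts[a]))
        reward = sense_and_transmit(channel)
        counts[channel] += 1
        sums[channel] += reward
    return sums, counts
```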
COLT 2010, Springer
An Asymptotically Optimal Bandit Algorithm for Bounded Support Models
The multiarmed bandit problem is a typical example of the dilemma between exploration and exploitation in reinforcement learning. This problem is expressed as a model of a gambler playi...
Junya Honda, Akimichi Takemura
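For context, a toy version of the gambler model the abstract sets up; the epsilon-greedy player and the Bernoulli arm means are illustrative assumptions, not the asymptotically optimal algorithm the paper proposes:

```python
import random

# K slot machines with unknown bounded (here Bernoulli) reward distributions.
ARM_MEANS = [0.3, 0.45, 0.6]   # made-up values for illustration only

def pull(arm):
    return 1.0 if random.random() < ARM_MEANS[arm] else 0.0

def epsilon_greedy(horizon=5000, epsilon=0.1):
    k = len(ARM_MEANS)
    counts, means = [0] * k, [0.0] * k
    total = 0.0
    for _ in range(horizon):
        if random.random() < epsilon or 0 in counts:
            arm = random.randrange(k)                    # explore
        else:
            arm = max(range(k), key=lambda a: means[a])  # exploit
        r = pull(arm)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]     # running average
        total += r
    regret = horizon * max(ARM_MEANS) - total            # expected-best minus earned
    return total, regret
```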
SIAMCOMP 2002
The Nonstochastic Multiarmed Bandit Problem
Abstract. In the multiarmed bandit problem, a gambler must decide which arm of K nonidentical slot machines to play in a sequence of trials so as to maximize his reward. This class...
Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, Robert E. Schapire
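An Exp3-style exponential-weighting sketch of the nonstochastic setting the abstract describes; the reward sequence and the mixing parameter gamma are placeholders, and the tuning differs from the paper's analysis:

```python
import math
import random

def exp3(reward_fn, k, horizon, gamma=0.1):
    """Minimal Exp3-style player for the nonstochastic (adversarial) bandit.

    reward_fn(t, arm) must return a reward in [0, 1]; gamma mixes in uniform
    exploration.  Parameter values here are illustrative, not tuned.
    """
    weights = [1.0] * k
    total = 0.0
    for t in range(horizon):
        w_sum = sum(weights)
        probs = [(1 - gamma) * w / w_sum + gamma / k for w in weights]
        arm = random.choices(range(k), weights=probs)[0]
        reward = reward_fn(t, arm)
        total += reward
        # importance-weighted estimate so unplayed arms are not penalized
        est = reward / probs[arm]
        weights[arm] *= math.exp(gamma * est / k)
    return total

# Example run on a made-up, non-stochastic reward sequence:
payoff = exp3(lambda t, a: 1.0 if (t // 100 + a) % 3 == 0 else 0.0, k=3, horizon=3000)
```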
CORR 2008, Springer
Multi-Armed Bandits in Metric Spaces
In a multi-armed bandit problem, an online algorithm chooses from a set of strategies in a sequence of n trials so as to maximize the total payoff of the chosen strategies. While ...
Robert Kleinberg, Aleksandrs Slivkins, Eli Upfal
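A naive fixed-discretization baseline for bandits on a metric space, not the paper's adaptive strategy; the 1-Lipschitz payoff function and the noise model are assumptions made only to keep the sketch self-contained:

```python
import math
import random

def mean_payoff(x):
    return 0.9 - abs(x - 0.7)          # assumed 1-Lipschitz payoff, peaks at x = 0.7

def lipschitz_bandit(horizon=20_000, grid_size=32):
    """Discretize [0, 1] into a uniform grid and run UCB1 over the grid points."""
    arms = [i / (grid_size - 1) for i in range(grid_size)]
    counts, sums = [0] * grid_size, [0.0] * grid_size
    for t in range(1, horizon + 1):
        if t <= grid_size:
            i = t - 1                  # initialize: try every grid point once
        else:
            i = max(range(grid_size), key=lambda a: sums[a] / counts[a]
                    + math.sqrt(2 * math.log(t) / counts[a]))
        noisy = mean_payoff(arms[i]) + random.uniform(-0.1, 0.1)
        reward = min(1.0, max(0.0, noisy))   # keep rewards in [0, 1]
        counts[i] += 1
        sums[i] += reward
    best = max(range(grid_size), key=lambda a: sums[a] / counts[a])
    return arms[best]                  # empirically best grid point
```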
LION 2010, Springer
Algorithm Selection as a Bandit Problem with Unbounded Losses
Abstract. Algorithm selection is typically based on models of algorithm performance learned during a separate offline training sequence, which can be prohibitively expensive. In r...
Matteo Gagliolo, Jürgen Schmidhuber