AMAI
2011
Springer
Multi-armed bandits with episode context
A multi-armed bandit episode consists of n trials, each allowing selection of one of K arms, resulting in a payoff drawn from a distribution over [0, 1] associated with that arm. We assum...
Christopher D. Rosin
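For illustration, a minimal sketch of the episode setting described in the abstract, not the paper's algorithm: it assumes Bernoulli payoffs in {0, 1} (the abstract only requires a distribution over [0, 1] per arm) and a uniformly random selection policy; the names `bandit_episode`, `payoff_probs`, and `select_arm` are hypothetical.

```python
import random

def bandit_episode(n, payoff_probs, select_arm):
    """Run one episode: n trials, each selecting one of K arms.

    payoff_probs: per-arm success probabilities; payoffs here are
    Bernoulli in {0, 1}, a simplifying assumption.
    select_arm: policy mapping (trial index, history) to an arm index.
    """
    history = []  # (arm, payoff) pairs observed so far in the episode
    total_payoff = 0.0
    for t in range(n):
        arm = select_arm(t, history)
        payoff = 1.0 if random.random() < payoff_probs[arm] else 0.0
        history.append((arm, payoff))
        total_payoff += payoff
    return total_payoff

# Usage: a uniformly random policy over K = 3 hypothetical arms, n = 100 trials.
K = 3
uniform_policy = lambda t, history: random.randrange(K)
print(bandit_episode(100, [0.2, 0.5, 0.8], uniform_policy))
```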
LADC
2011
Springer
Timing Analysis of Leader-Based and Decentralized Byzantine Consensus Algorithms
We analytically compare two algorithms for Byzantine consensus with strong validity, one leader-based and one decentralized (that is, not using a leader). We sho...
Fatemeh Borran, Martin Hutle, André Schiper