Abstract— We consider adaptive sequential prediction of arbitrary binary sequences when the performance is evaluated using a general loss function. The goal is to predict on each individual sequence nearly as well as the best prediction strategy in a given comparison class of (possibly adaptive) prediction strategies, called experts. By using a general loss function, we generalize previous work on universal prediction, forecasting, and data compression. However, here we restrict ourselves to the case when the comparison class is finite. For a given sequence, we define the regret as the total loss on the entire sequence suffered by the adaptive sequential predictor, minus the total loss suffered by the predictor in the comparison class that performs best on that particular sequence. We show that for a large class of loss functions, the minimax regret is either Θ(log N) or Θ(√(ℓ log N)), depending on the loss function, where N is the number of predictors in the comparison class and ℓ ...
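The regime with Θ(√(ℓ log N)) regret can be illustrated with the exponentially weighted average forecaster, a standard strategy for combining N experts in this setting. This is a generic sketch, not necessarily the specific construction analyzed in the paper; the function name and the choice of absolute loss are ours for illustration.

```python
import math

def exponential_weights(expert_preds, outcomes, eta):
    """Exponentially weighted average forecaster.

    expert_preds: per-round lists of N expert predictions in [0, 1]
    outcomes:     per-round binary outcomes (0 or 1)
    eta:          learning rate

    Uses absolute loss |p - y| (a generic convex, bounded loss;
    the paper treats general loss functions).
    Returns (forecaster's total loss, best expert's total loss).
    """
    n = len(expert_preds[0])            # number of experts N
    weights = [1.0] * n
    total_loss = 0.0
    expert_losses = [0.0] * n
    for preds, y in zip(expert_preds, outcomes):
        w_sum = sum(weights)
        # predict the weighted average of the experts' predictions
        p = sum(w * q for w, q in zip(weights, preds)) / w_sum
        total_loss += abs(p - y)
        # multiplicative update: downweight each expert by its loss
        for i, q in enumerate(preds):
            loss = abs(q - y)
            expert_losses[i] += loss
            weights[i] *= math.exp(-eta * loss)
    return total_loss, min(expert_losses)
```

With the tuned rate η = √(8 ln N / ℓ), this forecaster's regret against the best of the N experts is at most √((ℓ/2) ln N) for convex losses bounded in [0, 1], matching the √(ℓ log N) order of the minimax bound stated above.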

Added | 23 Dec 2010
Updated | 23 Dec 2010
Type | Journal
Year | 1998
Where | TIT
Authors | David Haussler, Jyrki Kivinen, Manfred K. Warmuth
