ECML 2001, Springer

Comparing the Bayes and Typicalness Frameworks

When correct priors are known, Bayesian algorithms give optimal decisions and accurate confidence values for their predictions. If the prior is incorrect, however, these confidence values have no theoretical basis, even though the algorithms' predictive performance may still be good. There also exist many successful learning algorithms that depend only on the iid assumption; often, however, they produce no confidence values for their predictions. Bayesian frameworks are frequently applied to these algorithms in order to obtain such values, but they can rely on unjustified priors. In this paper we outline the typicalness framework, which can be used in conjunction with many other machine learning algorithms. The framework provides confidence information based only on the standard iid assumption and is therefore much more robust to different underlying data distributions. We show how the framework can be applied to existing algorithms. We also present experimental results which show...
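The typicalness framework outlined in the abstract attaches a p-value to each candidate label using only the iid (exchangeability) assumption: provisionally label the new example, score every example with a nonconformity measure, and report the fraction of examples that look at least as strange as the new one. The Python sketch below is only an illustration of that general idea under those assumptions; the nearest-neighbour nonconformity measure and all function names are choices made for this example, not the paper's implementation.

import numpy as np

def knn_nonconformity(xs, ys, i):
    # Toy nonconformity measure (an assumption for illustration): distance from
    # example i to its nearest neighbour of the same class, divided by the
    # distance to its nearest neighbour of a different class.
    # Larger values mean example i looks "stranger".
    d = np.linalg.norm(xs - xs[i], axis=1)
    d[i] = np.inf                      # ignore the example itself
    same = d[ys == ys[i]].min()
    other = d[ys != ys[i]].min()
    return same / (other + 1e-12)

def typicalness_p_value(train_x, train_y, new_x, candidate_y, nonconformity):
    # p-value for candidate_y at new_x, based only on exchangeability: the
    # fraction of examples whose nonconformity score is at least as large as
    # the score of the provisionally labelled new example.
    xs = np.vstack([train_x, new_x[None, :]])
    ys = np.append(train_y, candidate_y)
    scores = np.array([nonconformity(xs, ys, i) for i in range(len(ys))])
    return np.mean(scores >= scores[-1])

# Usage: a confidence region for x_new is the set of labels whose p-value
# exceeds a chosen significance level (e.g. keep every y with p(y) > 0.05).
x_train = np.array([[0.0], [0.1], [1.0], [1.1]])
y_train = np.array([0, 0, 1, 1])
x_new = np.array([0.05])
for y in (0, 1):
    print(y, typicalness_p_value(x_train, y_train, x_new, y, knn_nonconformity))

Any underlying learning algorithm can be plugged in by defining a suitable nonconformity measure from it; the validity of the resulting confidence values rests on the iid assumption alone rather than on a prior.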
Type Conference
Year 2001
Where ECML
Authors Thomas Melluish, Craig Saunders, Ilia Nouretdinov, Volodya Vovk