
AUSAI 2004, Springer

How to assess the performance of machine learning algorithms is a problem of increasing interest and urgency as the data mining application of myriad algorithms grows. The standard approach of employing predictive accuracy has, we argue rightly, been losing favor in the AI community. The alternative of cost-sensitive metrics provides a far better approach, given the availability of useful cost functions. For situations where no useful cost function can be found, we need other alternatives to predictive accuracy. We propose that information-theoretic reward functions be applied. The first such proposal for assessing specifically machine learning algorithms was made by Kononenko and Bratko [1]. Here we improve upon our alternative Bayesian metric [2], which provides a fair betting assessment of any machine learner. We include an empirical analysis of various Bayesian classification learners, ranging from Naive Bayes learners to causal discovery algorithms.
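As a rough illustration of the general idea (not the exact metric of [1] or [2]), an information-theoretic reward scores a probabilistic classifier by how many bits its prediction gains over a prior baseline on the true class, rather than by the 0/1 count that predictive accuracy uses. The sketch below assumes a simple log-ratio reward; the function names are illustrative only.

```python
import math

def info_reward(pred_prob_true: float, prior_prob_true: float) -> float:
    """Reward in bits for the probability assigned to the true class.

    Positive when the learner beats the prior on the true class,
    zero when it merely matches it, negative when it does worse.
    (Illustrative log-ratio form, not the paper's exact metric.)
    """
    return math.log2(pred_prob_true) - math.log2(prior_prob_true)

def accuracy(preds, labels) -> float:
    """Standard predictive accuracy, for comparison."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Against a uniform two-class prior (0.5), a confident correct
# prediction earns nearly a full bit, while an overconfident wrong
# one (true class given only 0.1) is penalised:
good = info_reward(0.99, 0.5)   # close to +1 bit
bad = info_reward(0.10, 0.5)    # negative reward
```

Unlike accuracy, this kind of reward distinguishes a calibrated, confident prediction from a lucky coin flip, which is what makes it usable as a "fair betting" assessment when no cost function is available.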


Added: 01 Jul 2010
Updated: 01 Jul 2010
Type: Conference
Year: 2004
Where: AUSAI
Authors: Lucas R. Hope, Kevin B. Korb
