
SAC 2015 (ACM)

Evaluating defect prediction approaches using a massive set of metrics: an empirical study

To evaluate the performance of a within-project defect prediction approach, researchers typically use precision, recall, and F-measure scores. However, the machine learning literature offers many other evaluation metrics (e.g., Matthews Correlation Coefficient and G-mean), each of which assesses an approach from a different aspect. In this paper, we investigate the performance of within-project defect prediction approaches on a large number of evaluation metrics. We choose 6 state-of-the-art approaches that are widely used in the defect prediction literature: naive Bayes, decision tree, logistic regression, kNN, random forest, and Bayesian network. We evaluate these 6 approaches on 14 evaluation metrics (e.g., G-mean, F-measure, balance, MCC, J-coefficient, and AUC). Our goal is to explore a practical and sophisticated way of evaluating prediction approaches comprehensively. We evaluate the performance of defect pre...
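As a rough illustration of the metrics named in the abstract, the sketch below computes F-measure, MCC, G-mean, balance, J-coefficient, and AUC for a single defect predictor. The metric formulas follow their common definitions in the defect prediction literature; the synthetic dataset, the single naive Bayes learner, and the hold-out split are illustrative assumptions, not the paper's actual setup (which covers 6 learners and 14 metrics).

```python
# Minimal sketch (not from the paper): computing several evaluation
# metrics for a within-project-style defect predictor. Dataset, learner,
# and split are assumptions made for illustration only.
from math import sqrt
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import (confusion_matrix, f1_score,
                             matthews_corrcoef, roc_auc_score)

# Synthetic stand-in for a defect dataset (label 1 = defective module).
X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.85, 0.15], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)

clf = GaussianNB().fit(X_tr, y_tr)        # naive Bayes, one of the 6 learners
y_pred = clf.predict(X_te)
y_score = clf.predict_proba(X_te)[:, 1]   # defect probability for AUC

tn, fp, fn, tp = confusion_matrix(y_te, y_pred).ravel()
recall = tp / (tp + fn)                   # probability of detection (pd)
pf = fp / (fp + tn)                       # probability of false alarm
specificity = 1 - pf

print("F-measure:", f1_score(y_te, y_pred))
print("MCC      :", matthews_corrcoef(y_te, y_pred))
print("G-mean   :", sqrt(recall * specificity))
print("Balance  :", 1 - sqrt(pf ** 2 + (1 - recall) ** 2) / sqrt(2))
print("J-coeff. :", recall + specificity - 1)   # Youden's J
print("AUC      :", roc_auc_score(y_te, y_score))
```

Because each metric weighs false alarms and missed defects differently, a learner that looks best on F-measure may rank lower on MCC or balance, which is the kind of discrepancy the paper's multi-metric evaluation is designed to expose.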
Added: 17 Apr 2016
Updated: 17 Apr 2016
Type: Journal
Year: 2015
Where: SAC
Authors: Xiao Xuan, David Lo, Xin Xia, Yuan Tian