Abstract. Classifiers based on Gaussian mixture models are good performers in many pattern recognition tasks. Unlike decision trees, they can be described as stable classifiers: a s...
Jonas Richiardi, Andrzej Drygajlo, Laetitia Todesc...
It is well-known that diversity among base classifiers is crucial for constructing a strong ensemble. Most existing ensemble methods obtain diverse individual learners through res...
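The resampling idea this abstract refers to can be sketched with plain bootstrap sampling: each base learner is trained on an independent draw-with-replacement from the training set, so the learners see slightly different data and tend to disagree. This is a minimal illustration, not any specific method from the paper; the function name `bootstrap_sample` is mine.

```python
import random

def bootstrap_sample(data, rng):
    """Draw a bootstrap sample: n draws with replacement from n items.

    Distinct samples give distinct training sets, which is the usual
    cheap source of diversity among base classifiers (as in bagging).
    """
    n = len(data)
    return [data[rng.randrange(n)] for _ in range(n)]

rng = random.Random(0)
data = list(range(10))
# Three resamples -> three (likely different) training sets for three learners.
samples = [bootstrap_sample(data, rng) for _ in range(3)]
for s in samples:
    print(sorted(s))
```

Each resample keeps the original size but typically omits some items and repeats others, which is exactly what pushes the trained base learners apart.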
When performing predictive data mining, the use of ensembles is known to increase prediction accuracy, compared to single models. To obtain this higher accuracy, ensembles shoul...
Atypical observations, called outliers, are one of the difficulties in applying standard Gaussian-density-based pattern classification methods. A large number of outliers makes di...
In this paper we present a novel strategy, DragPushing, for improving the performance of text classifiers. The strategy is generic and takes advantage of training errors to succes...
Songbo Tan, Xueqi Cheng, Moustafa Ghanem, Bin Wang...
Boosting methods are known not to usually overfit training data even as the size of the generated classifiers becomes large. Schapire et al. attempted to explain this phenomenon i...
Many of today's best classification results are obtained by combining the responses of a set of base classifiers to produce an answer for the query. This paper explores a nov...
When more than a single classifier has been trained for the same recognition problem, the question arises of how this set of classifiers may be combined into a final decision rule. Se...
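The simplest of the combination rules this abstract alludes to is plurality (majority) voting over the base classifiers' predicted labels. The sketch below is a generic illustration, not the paper's proposal; `majority_vote` and its first-seen tie-break are my own choices.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine one label per base classifier into a single decision.

    Returns the most frequent label; ties are broken in favour of the
    label that appears earliest in the prediction list.
    """
    counts = Counter(predictions)
    best = max(counts.values())
    for label in predictions:  # first-seen tie-break
        if counts[label] == best:
            return label

# Three base classifiers vote on one query sample.
print(majority_vote(["cat", "dog", "cat"]))  # prints "cat"
```

More refined rules (weighted voting, averaging posterior probabilities, trained combiners) replace the raw count with classifier-specific weights or scores, but the plumbing is the same.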