Sciweavers

ICML
2002
IEEE
Is Combining Classifiers Better than Selecting the Best One
We empirically evaluate several state-of-the-art methods for constructing ensembles of heterogeneous classifiers with stacking and show that they perform (at best) comparably to se...
Saso Dzeroski, Bernard Zenko
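The comparison this entry describes can be sketched in a few lines: train several heterogeneous base classifiers, keep the single best one by cross-validated accuracy, and compare it against a stacked ensemble whose meta-learner combines the base predictions. A minimal illustration using scikit-learn and a built-in dataset, not the paper's actual experimental setup:

```python
# Sketch: stacked heterogeneous classifiers vs. selecting the best single one.
# Illustrative only -- dataset and learners are stand-ins, not the paper's.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

base = [
    ("tree", DecisionTreeClassifier(random_state=0)),
    ("knn", KNeighborsClassifier()),
    ("nb", GaussianNB()),
]

# "Select the best": keep the base learner with the highest CV accuracy.
best_name, best_score = max(
    ((name, cross_val_score(est, X, y, cv=5).mean()) for name, est in base),
    key=lambda t: t[1],
)

# Stacking: a logistic-regression meta-learner combines base predictions.
stack = StackingClassifier(
    estimators=base, final_estimator=LogisticRegression(max_iter=1000)
)
stack_score = cross_val_score(stack, X, y, cv=5).mean()

print(f"best single ({best_name}): {best_score:.3f}")
print(f"stacked ensemble: {stack_score:.3f}")
```

On a small, easy dataset like this the two scores are typically close, which is consistent with the abstract's "(at best) comparably" finding.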
COLING
2008
Normalizing SMS: are Two Metaphors Better than One?
Electronic written texts used in computer-mediated interactions (e-mails, blogs, chats, etc.) present major deviations from the norm of the language. This paper presents a comparat...
Catherine Kobus, François Yvon, Géra...
SSPR
2000
Springer
Some Notes on Twenty One (21) Nearest Prototype Classifiers
Comparisons made in two studies of 21 methods for finding prototypes upon which to base the nearest prototype classifier are discussed. The criteria used to compare the methods are...
James C. Bezdek, Ludmila Kuncheva
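A nearest prototype classifier assigns each sample the label of its closest prototype; the methods the entry compares differ in how the prototypes are found. The simplest choice, one centroid per class, is available directly in scikit-learn and makes a compact baseline sketch (the paper's 21 methods replace this centroid step with more elaborate selection and replacement schemes):

```python
# Sketch: the simplest nearest prototype classifier -- one prototype per
# class, taken as the class centroid (scikit-learn's NearestCentroid).
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import NearestCentroid

X, y = load_iris(return_X_y=True)

# Each sample is labeled by the nearest of the three class centroids.
score = cross_val_score(NearestCentroid(), X, y, cv=5).mean()
print(f"nearest-centroid accuracy: {score:.3f}")
```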
FLAIRS
2000
Overriding the Experts: A Stacking Method for Combining Marginal Classifiers
The design of an optimal Bayesian classifier for multiple features is dependent on the estimation of multidimensional joint probability density functions and therefore requires a ...
Mark D. Happel, Peter Bock
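The motivation here is that a full Bayes-optimal classifier needs the multidimensional joint density, which is hard to estimate; combining per-feature ("marginal") classifiers via stacking sidesteps that. A hypothetical illustration of the idea, assuming scikit-learn, with one naive Bayes model per feature and a logistic-regression meta-learner (not the authors' exact method):

```python
# Sketch: stacking marginal classifiers instead of estimating a joint
# density. Each base learner models P(y | x_i) from a single feature;
# a meta-learner combines their probability outputs. Illustrative only.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# One marginal classifier per feature.
marginals = [GaussianNB().fit(X_tr[:, [i]], y_tr) for i in range(X_tr.shape[1])]

def marginal_probs(models, X):
    # Stack each model's P(y=1 | x_i) into a meta-feature matrix.
    return np.column_stack(
        [m.predict_proba(X[:, [i]])[:, 1] for i, m in enumerate(models)]
    )

# The meta-learner can override the marginal "experts" where they disagree.
meta = LogisticRegression(max_iter=1000).fit(marginal_probs(marginals, X_tr), y_tr)
acc = accuracy_score(y_te, meta.predict(marginal_probs(marginals, X_te)))
print(f"stacked marginal classifiers accuracy: {acc:.3f}")
```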
ICMLA
2008
Decision Tree Ensemble: Small Heterogeneous Is Better Than Large Homogeneous
Using decision trees that split on randomly selected attributes is one way to increase the diversity within an ensemble of decision trees. Another approach increases diversity by ...
Michael Gashler, Christophe G. Giraud-Carrier, Ton...