Sciweavers

3250 search results (page 329 of 650) for "Parameterized Learning Complexity"
ICML 2007 (IEEE)
Gradient boosting for kernelized output spaces
A general framework is proposed for gradient boosting in supervised learning problems where the loss function is defined using a kernel over the output space. It extends boosting ...
Florence d'Alché-Buc, Louis Wehenkel, Pierr...
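As a reminder of the generic loop the abstract builds on: each boosting round fits a base learner to the negative gradient of the loss on the current predictions. A minimal scalar sketch with squared loss and regression stumps (not the paper's kernelized-output extension; the data and helper functions below are illustrative):

```python
# Minimal gradient-boosting sketch: squared loss, depth-1 regression stumps,
# 1-D inputs. Each round fits a stump to the residuals, which are the
# negative gradient of the squared loss at the current predictions.

def fit_stump(xs, residuals):
    """Find the threshold split minimizing squared error of the residuals."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        lmean = sum(left) / len(left) if left else 0.0
        rmean = sum(right) / len(right) if right else 0.0
        err = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, t, lmean, rmean)
    _, t, lmean, rmean = best
    return lambda x: lmean if x <= t else rmean

def gradient_boost(xs, ys, n_rounds=20, lr=0.5):
    pred = [0.0] * len(xs)
    stumps = []
    for _ in range(n_rounds):
        residuals = [y - p for y, p in zip(ys, pred)]  # -gradient of squared loss
        h = fit_stump(xs, residuals)
        stumps.append(h)
        pred = [p + lr * h(x) for p, x in zip(pred, xs)]
    return lambda x: sum(lr * h(x) for h in stumps)

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]  # a step function to approximate
model = gradient_boost(xs, ys)
```

The kernelized-output setting replaces the scalar residuals with gradients in the output feature space; the loop structure stays the same.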
ICALT 2008 (IEEE)
A Framework for Semantic Group Formation
Collaboration has long been considered an effective approach to learning. However, forming optimal groups can be a time-consuming and complex task. Different approaches have been ...
Asma Ounnas, Hugh C. Davis, David E. Millard
IJCNN 2007 (IEEE)
Optimizing 0/1 Loss for Perceptrons by Random Coordinate Descent
The 0/1 loss is an important cost function for perceptrons. Nevertheless, it cannot be easily minimized by most existing perceptron learning algorithms. In this paper, we propose...
Ling Li, Hsuan-Tien Lin
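The difficulty the abstract points at is that the 0/1 loss is piecewise constant, so gradient methods do not apply; coordinate-wise random search over the error count does. A toy sketch of that idea, assuming a linear classifier on hand-made separable data (the paper's actual update and line-search rules are not reproduced here):

```python
import random

# Hedged sketch of minimizing 0/1 loss by random coordinate descent:
# pick one coordinate (a weight or the bias) at random, try a random step
# along it, and keep the move only if the training error count does not
# increase. The data and step sizes below are illustrative.

def zero_one_loss(w, b, data):
    return sum(1 for x, y in data
               if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0)

def random_coordinate_descent(data, dim, iters=500, seed=0):
    rng = random.Random(seed)
    w, b = [0.0] * dim, 0.0
    best = zero_one_loss(w, b, data)
    for _ in range(iters):
        i = rng.randrange(dim + 1)            # indices 0..dim-1: weights; dim: bias
        step = rng.choice([-1.0, -0.1, 0.1, 1.0])
        if i < dim:
            w[i] += step
        else:
            b += step
        err = zero_one_loss(w, b, data)
        if err <= best:
            best = err                        # keep moves that do not hurt
        elif i < dim:
            w[i] -= step                      # revert a worsening move
        else:
            b -= step
    return w, b, best

# Linearly separable 1-D toy data: label = sign(x)
data = [([1.0], 1), ([2.0], 1), ([-1.0], -1), ([-2.0], -1)]
w, b, err = random_coordinate_descent(data, dim=1)
```

Accepting equal-error moves lets the search drift across the flat plateaus of the 0/1 loss instead of stalling on them.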
ISNN 2007 (Springer)
Recurrent Fuzzy CMAC for Nonlinear System Modeling
A normal fuzzy CMAC neural network performs well because of its fast learning speed and local generalization capability for approximating nonlinear functions. However, it requires hu...
Floriberto Ortiz Rodriguez, Wen Yu, Marco A. Moren...
GECCO 2005 (Springer)
Post-processing clustering to reduce XCS variability
XCS is a stochastic algorithm, so it is not guaranteed to produce the same results when run on the same input. When interpretability matters, obtaining a single, stable result ...
Flavio Baronti, Alessandro Passaro, Antonina Stari...
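The abstract suggests post-processing the outputs of several stochastic runs by clustering them into a single stable result. A toy sketch of that idea, assuming each run yields numeric 1-D rule thresholds (the stand-in learner and gap parameter below are illustrative, not XCS itself):

```python
import random

def noisy_learner(seed):
    # Stand-in for a stochastic learner such as XCS: each run returns
    # slightly different numeric rule thresholds near 2.0 and 5.0.
    rng = random.Random(seed)
    return [2.0 + rng.uniform(-0.2, 0.2), 5.0 + rng.uniform(-0.2, 0.2)]

def cluster_1d(values, gap=1.0):
    """Group sorted values into clusters wherever consecutive values are
    more than `gap` apart, then return each cluster's mean as the single
    stable representative."""
    values = sorted(values)
    clusters, current = [], [values[0]]
    for v in values[1:]:
        if v - current[-1] > gap:
            clusters.append(current)
            current = [v]
        else:
            current.append(v)
    clusters.append(current)
    return [sum(c) / len(c) for c in clusters]

# Pool the rules from ten independent runs, then cluster them.
all_rules = [r for seed in range(10) for r in noisy_learner(seed)]
stable = cluster_1d(all_rules)
```

Averaging within clusters smooths out run-to-run variability while keeping one representative per underlying rule.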