Sciweavers

641 search results - page 2 / 129
» Training Methods for Adaptive Boosting of Neural Networks
ICANN
2003
Springer
A Comparison of Model Aggregation Methods for Regression
Combining machine learning models is a means of improving overall accuracy. Various algorithms have been proposed to create aggregate models from other models, and two popular examp...
Zafer Barutçuoglu
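The abstract above alludes to combining regression models by aggregation. A minimal sketch of the simplest such scheme, averaging the predictions of several models; the toy linear "models" below are illustrative and not from the paper:

```python
# Aggregate regression models by averaging their predictions.
# The three hand-made linear models are hypothetical stand-ins
# for independently trained regressors.

def aggregate_predict(models, x):
    """Average the predictions of several regression models."""
    preds = [m(x) for m in models]
    return sum(preds) / len(preds)

# Three slightly different fits to the same underlying y = 2x trend.
models = [
    lambda x: 1.9 * x + 0.2,
    lambda x: 2.1 * x - 0.1,
    lambda x: 2.0 * x + 0.05,
]

print(aggregate_predict(models, 3.0))  # ≈ 6.05
```

Averaging reduces the variance of the individual models' errors; bagging and boosting are more elaborate versions of the same idea.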
AAAI
2004
Online Parallel Boosting
This paper presents a new boosting (arcing) algorithm called POCA (Parallel Online Continuous Arcing). Unlike traditional boosting algorithms (such as Arc-x4 and AdaBoost) that co...
Jesse A. Reichler, Harlan D. Harris, Michael A. Sa...
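The details of POCA are cut off above, but the general flavor of online boosting can be sketched: each incoming example passes through every ensemble member, and members later in the chain see the example with a larger weight when earlier members misclassify it. This is a generic illustration, not the POCA algorithm; the learners, re-weighting factors, and data are all invented for the sketch:

```python
import random

# A perceptron whose (mistake-driven) update step is scaled by a
# per-example weight supplied by the boosting loop.
class OnlinePerceptron:
    def __init__(self, dim):
        self.w = [0.0] * dim
        self.b = 0.0

    def predict(self, x):
        s = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1 if s >= 0 else -1

    def update(self, x, y, weight):
        if self.predict(x) != y:
            self.w = [wi + weight * y * xi for wi, xi in zip(self.w, x)]
            self.b += weight * y

def online_boost_update(members, x, y):
    """Pass one example through every member; members later in the
    chain see it with a larger weight when earlier members err."""
    lam = 1.0
    for m in members:
        m.update(x, y, lam)
        lam *= 1.5 if m.predict(x) != y else 0.8  # hypothetical factors

def ensemble_predict(members, x):
    return 1 if sum(m.predict(x) for m in members) >= 0 else -1

# Toy stream: 2-D points labeled by the sign of x0 + x1,
# with a margin kept around the decision boundary.
rng = random.Random(1)
data = []
while len(data) < 200:
    x = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    if abs(x[0] + x[1]) > 0.2:
        data.append((x, 1 if x[0] + x[1] > 0 else -1))

members = [OnlinePerceptron(2) for _ in range(3)]
for _ in range(3):                 # three passes over the stream
    for x, y in data:
        online_boost_update(members, x, y)

acc = sum(ensemble_predict(members, x) == y for x, y in data) / len(data)
print(acc)
```

Unlike batch boosting, no pass over a fixed training set is needed: the example weight is computed on the fly as the example flows through the ensemble.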
IJCNN
2007
IEEE
Two-stage Multi-class AdaBoost for Facial Expression Recognition
Although AdaBoost has achieved great success, it still suffers from the following problems: (1) the training process can become unmanageable when the number of features is extremely ...
Hongbo Deng, Jianke Zhu, Michael R. Lyu, Irwin Kin...
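For reference, the training cost that the abstract alludes to comes from re-fitting a weak learner against re-weighted examples in every round. A minimal from-scratch sketch of discrete (binary) AdaBoost with 1-D decision stumps; the data and round count are illustrative, and the paper's two-stage multi-class variant is not reproduced here:

```python
import math

def stump_predict(threshold, polarity, x):
    """A decision stump: +polarity above the threshold, -polarity below."""
    return polarity if x >= threshold else -polarity

def adaboost_train(xs, ys, rounds=3):
    n = len(xs)
    w = [1.0 / n] * n                     # uniform example weights
    ensemble = []
    for _ in range(rounds):
        # Exhaustively pick the stump with lowest weighted error --
        # this per-round search is what grows with the feature count.
        best = None
        for thr in sorted(set(xs)):
            for pol in (1, -1):
                err = sum(wi for wi, x, y in zip(w, xs, ys)
                          if stump_predict(thr, pol, x) != y)
                if best is None or err < best[0]:
                    best = (err, thr, pol)
        err, thr, pol = best
        err = max(err, 1e-10)             # guard against log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, thr, pol))
        # Re-weight: misclassified examples get more weight.
        w = [wi * math.exp(-alpha * y * stump_predict(thr, pol, x))
             for wi, x, y in zip(w, xs, ys)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def adaboost_predict(ensemble, x):
    s = sum(a * stump_predict(thr, pol, x) for a, thr, pol in ensemble)
    return 1 if s >= 0 else -1

xs = [1, 2, 3, 4, 5, 6]
ys = [1, 1, -1, -1, 1, 1]   # no single threshold separates this
ensemble = adaboost_train(xs, ys, rounds=3)
print([adaboost_predict(ensemble, x) for x in xs])  # matches ys
```

Three rounds suffice here because the weighted-error re-weighting forces each new stump to focus on the examples the previous stumps got wrong.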
ASC
2004
Neural network-based colonoscopic diagnosis using on-line learning and differential evolution
In this paper, on-line training of neural networks is investigated in the context of computer-assisted colonoscopic diagnosis. A memory-based adaptation of the learning rate for t...
George D. Magoulas, Vassilis P. Plagianakos, Micha...
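The abstract mentions differential evolution alongside on-line neural-network training. As background, a generic DE/rand/1/bin sketch minimizing a toy sphere objective; this is the textbook algorithm, not the paper's colonoscopic-diagnosis setup, and every name and constant below is illustrative:

```python
import random

def differential_evolution(f, dim, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=0):
    """Classic DE/rand/1/bin: mutate with a scaled difference of two
    random members, binomially cross over, keep the trial if no worse."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    costs = [f(ind) for ind in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)   # force at least one mutated gene
            trial = [pop[a][j] + F * (pop[b][j] - pop[c][j])
                     if (rng.random() < CR or j == j_rand) else pop[i][j]
                     for j in range(dim)]
            cost = f(trial)
            if cost <= costs[i]:          # greedy selection
                pop[i], costs[i] = trial, cost
    best = min(range(pop_size), key=lambda i: costs[i])
    return pop[best], costs[best]

sphere = lambda v: sum(x * x for x in v)
sol, cost = differential_evolution(sphere, dim=3)
print(cost)  # near zero
```

In the neural-network setting the candidate vectors would encode the network weights and `f` would be the training error, but that mapping is an assumption here, not a detail recoverable from the truncated abstract.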
BMCBI
2006
Optimized Particle Swarm Optimization (OPSO) and its application to artificial neural network training
Background: Particle Swarm Optimization (PSO) is an established method for parameter optimization. It represents a population-based adaptive optimization technique that is influen...
Michael Meissner, Michael Schmuker, Gisbert Schnei...
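For context on the base method the abstract builds on: a generic textbook PSO sketch on a toy objective. This is plain PSO, not the paper's OPSO (which additionally optimizes PSO's own parameters); the inertia and acceleration constants below are common illustrative defaults:

```python
import random

def pso(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f: each particle's velocity blends its momentum, a pull
    toward its personal best, and a pull toward the swarm's global best."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pbest_cost = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            cost = f(xs[i])
            if cost < pbest_cost[i]:
                pbest[i], pbest_cost[i] = xs[i][:], cost
                if cost < gbest_cost:
                    gbest, gbest_cost = xs[i][:], cost
    return gbest, gbest_cost

sphere = lambda v: sum(x * x for x in v)
best, best_cost = pso(sphere, dim=3)
print(best_cost)  # near zero
```

As with the differential-evolution entry above, applying this to neural-network training means letting each particle encode a weight vector and using the training error as `f`.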