Sciweavers

641 search results - page 30 / 129
Training Methods for Adaptive Boosting of Neural Networks
ICONIP 2008
On Node-Fault-Injection Training of an RBF Network
Abstract. While injecting faults during training has long been demonstrated as an effective method for improving the fault tolerance of a neural network, not much theoretical work has been...
John Sum, Chi-Sing Leung, Kevin Ho
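The fault-injection idea in the abstract above can be sketched as a dropout-style training step: hidden nodes are randomly knocked out during each update so the trained weights become robust to node failure. This is an illustrative reconstruction, not the paper's exact RBF formulation — the tanh hidden layer, network shape, and fault model are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_step_with_node_faults(W1, W2, x, y, lr=0.01, fault_prob=0.1):
    """One gradient step on a one-hidden-layer network in which each hidden
    node is independently knocked out (stuck at zero) with probability
    fault_prob, simulating node faults during training."""
    h = np.tanh(W1 @ x)                        # hidden activations
    mask = rng.random(h.shape) >= fault_prob   # surviving nodes
    h_f = h * mask                             # faults injected here
    out = W2 @ h_f
    err = out - y
    # Backpropagate through the faulty network only (dead nodes get no update)
    grad_W2 = np.outer(err, h_f)
    grad_h = (W2.T @ err) * mask * (1 - h**2)
    grad_W1 = np.outer(grad_h, x)
    return W1 - lr * grad_W1, W2 - lr * grad_W2, float((err**2).mean())
```

Because a different random subset of nodes survives each step, no single node can become indispensable, which is the intuition behind fault-injection training.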
NPL 2000
Towards the Optimal Learning Rate for Backpropagation
A backpropagation learning algorithm for feedforward neural networks with an adaptive learning rate is derived. The algorithm is based upon minimising the instantaneous output error...
Danilo P. Mandic, Jonathon A. Chambers
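A minimal sketch of an instantaneous-error-driven adaptive step size, in the spirit of normalized nonlinear gradient descent for a single sigmoidal neuron — the step size is recomputed every update from the current gradient magnitude rather than fixed in advance. The concrete normalization rule below is an assumption for illustration; the paper's exact derivation may differ.

```python
import numpy as np

def adaptive_lr_neuron_step(w, x, d, eps=1e-8):
    """One update of a single tanh neuron where the learning rate is chosen
    each step from the instantaneous output error and input energy
    (normalized-LMS-style rule)."""
    y = np.tanh(w @ x)
    e = d - y                                  # instantaneous output error
    phi_p = 1 - y**2                           # tanh derivative
    eta = 1.0 / (phi_p**2 * (x @ x) + eps)     # adaptive step-size normalization
    return w + eta * e * phi_p * x, float(e)
```

With this normalization the update on the neuron's net input reduces to roughly a Newton step toward the target, so convergence on a fixed pattern is very fast without any hand-tuned learning rate.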
JMLR 2006
A Very Fast Learning Method for Neural Networks Based on Sensitivity Analysis
This paper introduces a learning method for two-layer feedforward neural networks based on sensitivity analysis, which uses a linear training algorithm for each of the two layers....
Enrique Castillo, Bertha Guijarro-Berdiñas,...
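The "linear training algorithm for each of the two layers" idea can be illustrated by treating the output layer as a linear least-squares problem over fixed nonlinear hidden activations. In this sketch the hidden weights are simply fixed at random for brevity — an assumption for illustration, unlike the paper's alternating per-layer scheme.

```python
import numpy as np

def fit_two_layer_linear(X, Y, n_hidden=20, seed=0):
    """Fit a two-layer feedforward network by solving a linear least-squares
    problem for the output weights, given fixed nonlinear hidden activations."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(size=(X.shape[1], n_hidden))  # hidden weights (fixed here)
    H = np.tanh(X @ W1)                           # hidden-layer activations
    W2, *_ = np.linalg.lstsq(H, Y, rcond=None)    # linear subproblem, solved exactly
    return W1, W2

def predict(W1, W2, X):
    return np.tanh(X @ W1) @ W2
```

Because each layer's subproblem is linear, no iterative gradient descent is needed for that layer — one least-squares solve replaces many backpropagation epochs, which is the source of the speed-up the title refers to.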
ML 2008 (ACM)
Boosted Bayesian network classifiers
The use of Bayesian networks for classification problems has received significant recent attention. Although computationally efficient, the standard maximum likelihood learning method...
Yushi Jing, Vladimir Pavlovic, James M. Rehg
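For context, the boosting loop underlying this entry (and the page's query topic) is the standard AdaBoost reweighting scheme. The sketch below uses threshold stumps as stand-in weak learners purely for self-containment — the paper boosts Bayesian network classifiers, not stumps.

```python
import numpy as np

def best_stump(X, y, w):
    """Exhaustively pick the (feature, threshold, sign) stump with the
    lowest weighted error; y and predictions are in {-1, +1}."""
    best = (np.inf, None, None)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):
                pred = np.where(X[:, j] <= t, s, -s)
                err = w[pred != y].sum()
                if err < best[0]:
                    best = (err, (j, t, s), pred)
    return best[1], best[2]

def stump_predict(stump, X):
    j, t, s = stump
    return np.where(X[:, j] <= t, s, -s)

def adaboost(X, y, n_rounds=10):
    """Discrete AdaBoost: reweight examples so each round's weak learner
    concentrates on the previous rounds' mistakes."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(n_rounds):
        stump, pred = best_stump(X, y, w)
        err = float(w[pred != y].sum())
        err = min(max(err, 1e-12), 1 - 1e-12)     # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)     # weak-learner weight
        w *= np.exp(-alpha * y * pred)            # upweight misclassified points
        w /= w.sum()
        ensemble.append((alpha, stump))
    return ensemble

def predict_ensemble(ensemble, X):
    score = sum(a * stump_predict(s, X) for a, s in ensemble)
    return np.sign(score)
```

Swapping `best_stump` for any other weighted-error learner (e.g. a Bayesian network classifier trained on the weighted sample) recovers the boosted-classifier setting the abstract describes.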
GECCO 2003 (Springer)
SEPA: Structure Evolution and Parameter Adaptation in Feed-Forward Neural Networks
Abstract. In developing algorithms that dynamically change the structure and weights of ANNs (Artificial Neural Networks), there must be a proper balance between network complexity...
Paulito P. Palmes, Taichi Hayasaka, Shiro Usui