

On Weight-Noise-Injection Training

Abstract. Although injecting weight noise during training has been proposed for more than a decade as a way to improve the convergence, generalization, and fault tolerance of a neural network, little theoretical work has addressed its convergence or the objective function it actually minimizes. By applying the Gladyshev theorem, it is shown that weight-noise-injection training of a radial basis function (RBF) network converges almost surely, and that the corresponding objective function is essentially the mean squared error (MSE). This objective function indicates that injecting weight noise while training an RBF network cannot improve fault tolerance. Although the technique has been applied effectively to the multilayer perceptron (MLP), a further analysis of the expected update equation for training an MLP with weight noise injection is presented. The performance difference between the two models under weight noise injection is then discussed.
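A minimal sketch of why the effective objective is essentially the MSE (our illustration under stated assumptions, not text from the paper): an RBF network is linear in its output weights, f(x) = w^T phi(x), so zero-mean injected noise delta with E[delta] = 0 drops out of the expected stochastic update:

\mathbb{E}_{\delta}\big[\,(y - (w+\delta)^{\top}\phi(x))\,\phi(x)\,\big] = (y - w^{\top}\phi(x))\,\phi(x)

Below is a hedged Python sketch of the training scheme itself. The function names, the Gaussian noise model, and the hyperparameters (width, lr, noise_std, epochs) are illustrative assumptions, not the paper's setup.

import numpy as np

def rbf_features(X, centers, width):
    # Gaussian basis functions: phi_j(x) = exp(-||x - c_j||^2 / (2 * width^2))
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def train_with_weight_noise(X, y, centers, width=1.0, lr=0.05,
                            noise_std=0.1, epochs=100, seed=0):
    # Online LMS training of the RBF output weights; at every step a fresh
    # zero-mean Gaussian perturbation is injected into the weights before
    # the forward pass (weight-noise-injection training).
    rng = np.random.default_rng(seed)
    Phi = rbf_features(X, centers, width)   # shape (n_samples, n_centers)
    w = np.zeros(Phi.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            noisy_w = w + rng.normal(0.0, noise_std, size=w.shape)
            err = y[i] - Phi[i] @ noisy_w   # error computed with noisy weights
            w += lr * err * Phi[i]          # update applied to the stored weights
    return w

For example, w = train_with_weight_noise(X, y, centers) fits the output weights of a one-output RBF regressor. Because the injected noise is zero-mean and the model is linear in w, each update equals the plain LMS step in expectation, consistent with the abstract's conclusion that the effective objective is essentially the MSE and carries no fault-tolerance-inducing penalty term.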
Added 29 Oct 2010
Updated 29 Oct 2010
Type Conference
Year 2008
Where ICONIP
Authors Kevin Ho, Chi-Sing Leung, John Sum