
A Comparison of First and Second Order Training Algorithms for Artificial Neural Networks

Minimization methods for training feed-forward networks with backpropagation are compared. Feed-forward network training is a special case of functional minimization, where no explicit model of the data is assumed. Because of the high dimensionality of the data, linearizing the training problem through orthogonal basis functions is not desirable; the focus is therefore functional minimization on an arbitrary basis. A number of methods based on local gradient and Hessian matrices are discussed, along with modifications of several first- and second-order training methods. Experiments on share-rate data show that conjugate gradient and quasi-Newton methods outperform gradient descent methods, and that the Levenberg-Marquardt algorithm is of special interest for financial forecasting.

Keywords: backpropagation algorithm, conjugacy condition, line search, matrix perturbation
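To illustrate the kind of comparison the abstract describes, here is a minimal sketch, not the paper's experimental setup: a tiny one-hidden-layer network is trained on synthetic data (a stand-in for the share-rate data), comparing plain fixed-step gradient descent against conjugate gradient and BFGS quasi-Newton as implemented in scipy.optimize. The network sizes, data, and step size are all hypothetical; Levenberg-Marquardt is omitted here, though it could be run on the residual vector via scipy.optimize.least_squares.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                  # synthetic inputs (hypothetical data)
y = np.sin(X @ np.array([1.0, -0.5, 0.25]))    # synthetic targets

n_in, n_hid = 3, 5                             # tiny network, chosen for illustration

def unpack(w):
    # Split the flat parameter vector into hidden and output weights.
    W1 = w[:n_in * n_hid].reshape(n_in, n_hid)
    W2 = w[n_in * n_hid:]
    return W1, W2

def loss(w):
    # Mean squared error of a one-hidden-layer tanh network.
    W1, W2 = unpack(w)
    h = np.tanh(X @ W1)
    err = h @ W2 - y
    return 0.5 * np.mean(err ** 2)

def grad(w):
    # Gradient of the loss by backpropagation.
    W1, W2 = unpack(w)
    h = np.tanh(X @ W1)
    e = (h @ W2 - y) / len(y)
    gW2 = h.T @ e
    gW1 = X.T @ (np.outer(e, W2) * (1.0 - h ** 2))
    return np.concatenate([gW1.ravel(), gW2])

w0 = rng.normal(scale=0.1, size=n_in * n_hid + n_hid)

# First-order baseline: plain gradient descent with a fixed step size.
w = w0.copy()
for _ in range(1000):
    w -= 0.2 * grad(w)
print("gradient descent loss:", loss(w))

# Second-order-flavoured methods: conjugate gradient and BFGS quasi-Newton,
# both of which use line searches and (for BFGS) a Hessian approximation.
for method in ("CG", "BFGS"):
    res = minimize(loss, w0, jac=grad, method=method)
    print(method, "loss:", res.fun, "iterations:", res.nit)

On toy problems like this, the second-order-style optimizers typically reach a lower loss in far fewer iterations than fixed-step gradient descent, which is consistent with the abstract's conclusion, though the numbers here say nothing about the paper's actual share-rate experiments.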
Type: Conference
Year: 2004
Where: IJIT
Authors: Syed Muhammad Aqil Burney, Tahseen Ahmed Jilani, Cemal Ardil