Sciweavers

NIPS
1996

Solving the Ill-Conditioning in Neural Network Learning

Abstract. In this paper we investigate the feed-forward learning problem. The well-known ill-conditioning which is present in most feed-forward learning problems is shown to be the result of the structure of the network. Also, the well-known problem that weights between `higher' layers in the network have to settle before `lower' weights can converge is addressed. We present a solution to these problems by modifying the structure of the network through the addition of linear connections which carry shared weights. We call the new network structure the linearly augmented feed-forward network, and it is shown that the universal approximation theorems are still valid. Simulation experiments show the validity of the new method, and demonstrate that the new network is less sensitive to local minima and learns faster than the original network.
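The abstract only sketches the construction, but the core idea — adding a purely linear path that reuses the network's existing weights alongside the nonlinear path — can be illustrated with a minimal forward pass. This is a hypothetical sketch, not the paper's exact scheme: the function names, the single hidden layer, and the assumption that the linear path is formed from the same `W1` and `W2` matrices (the "shared weights") are all illustrative choices; the paper's precise weight-sharing arrangement may differ.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 3, 5, 2
W1 = rng.normal(scale=0.1, size=(n_hidden, n_in))   # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(n_out, n_hidden))  # hidden -> output weights

def forward_plain(x):
    """Ordinary feed-forward pass through the sigmoid hidden layer."""
    return W2 @ sigmoid(W1 @ x)

def forward_augmented(x):
    """Hypothetical linearly augmented pass: the nonlinear path plus a
    linear shortcut built from the *same* weight matrices, so the linear
    connections carry shared weights (illustrative assumption)."""
    linear_path = W2 @ (W1 @ x)          # bypasses the sigmoid entirely
    return W2 @ sigmoid(W1 @ x) + linear_path

x = rng.normal(size=n_in)
print(forward_plain(x).shape, forward_augmented(x).shape)
```

Because the shortcut is linear in the weights, its gradient contribution is well-conditioned regardless of where the sigmoids saturate, which is one plausible reading of why such an augmentation would reduce the ill-conditioning the abstract describes.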
P. Patrick van der Smagt, Gerd Hirzinger
Type Conference