
On the weight convergence of Elman networks

Abstract: An Elman network (EN) can be viewed as a feedforward (FF) neural network with an additional set of inputs from the context layer (feedback from the hidden layer). Therefore, instead of the offline backpropagation-through-time (BPTT) algorithm, a standard online (real-time) backpropagation (BP) algorithm, usually called Elman BP (EBP), can be applied for EN training for discrete-time sequence predictions. However, the standard BP training algorithm is not the most suitable for ENs. A low learning rate can improve the training of ENs but can also result in very slow convergence speeds and poor generalization performance, whereas a high learning rate can lead to unstable training in terms of weight divergence. Therefore, an optimal or suboptimal tradeoff between training speed and weight convergence with good generalization capability is desired for ENs. This paper develops a robust extended EBP (eEBP) training algorithm for ENs with a new adaptive dead zone scheme based on eEBP...
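The structure described above can be illustrated with a minimal numpy sketch: an Elman step whose context input is the previous hidden state, trained with online BP that treats the context as a constant input each step (no backprop through time), and a simple *fixed* dead zone that skips the weight update when the output error is small. All layer sizes, the tanh activation, the learning rate, and the fixed threshold are illustrative assumptions; the paper's eEBP scheme adapts the dead zone, which is not reproduced here.

```python
# Illustrative sketch only: a tiny Elman network with online (Elman) BP
# and a fixed dead zone on the output error. Not the paper's eEBP,
# whose dead zone is adaptive.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hid, n_out = 1, 4, 1
W_xh = rng.normal(0, 0.5, (n_hid, n_in))    # input  -> hidden
W_ch = rng.normal(0, 0.5, (n_hid, n_hid))   # context -> hidden (feedback)
W_hy = rng.normal(0, 0.5, (n_out, n_hid))   # hidden -> output

def step(x, context):
    """One forward pass; the context layer is the previous hidden state."""
    h = np.tanh(W_xh @ x + W_ch @ context)
    y = W_hy @ h
    return y, h

def mse(xs, ys):
    context = np.zeros(n_hid)
    err = 0.0
    for x, target in zip(xs, ys):
        y, h = step(x, context)
        err += float(np.sum((y - target) ** 2))
        context = h
    return err / len(xs)

def train_online(xs, ys, lr=0.05, dead_zone=0.01, epochs=200):
    """Online BP: context is treated as a constant extra input at each
    step, and the update is skipped inside the dead zone."""
    global W_xh, W_ch, W_hy
    for _ in range(epochs):
        context = np.zeros(n_hid)
        for x, target in zip(xs, ys):
            y, h = step(x, context)
            e = y - target
            if np.abs(e).max() > dead_zone:          # dead-zone gate
                dh = (W_hy.T @ e) * (1.0 - h ** 2)   # tanh derivative
                W_hy -= lr * np.outer(e, h)
                W_xh -= lr * np.outer(dh, x)
                W_ch -= lr * np.outer(dh, context)
            context = h                              # feed hidden back

# Toy task: predict the next value of a sine sequence.
t = np.linspace(0, 2 * np.pi, 40)
seq = np.sin(t)
xs = seq[:-1].reshape(-1, 1)
ys = seq[1:].reshape(-1, 1)

before = mse(xs, ys)
train_online(xs, ys)
after = mse(xs, ys)
```

The dead-zone gate is the key tradeoff the abstract describes: a larger zone tolerates small errors without updating (guarding against weight divergence at high learning rates), while a smaller zone keeps training responsive but re-exposes the stability problem.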
Qing Song
Added 22 May 2011
Updated 22 May 2011
Type Journal
Year 2010
Where TNN
Authors Qing Song