We consider the origin of the high-dimensional input space as a variable which can be optimized before or during neuronal learning. This set of variables acts as a translation on ...
Daniel Remondini, Nathan Intrator, Gastone C. Cast...
Surprisingly simple local learning algorithms are known to outperform many other global non-linear machines. Unfortunately, these algorithms are computationally costly. A means of...
An efficient training method for block-diagonal recurrent neural networks is proposed. The method modifies the RPROP algorithm, originally developed for static models, in order to...
Paris A. Mastorocostas, Dimitris N. Varsamis, Cons...
We propose a data structure that decreases the complexity of unsupervised competitive learning algorithms based on the growing cell structures approach. The idea is based on...
This paper studies the mirror image learning algorithm for autoassociative neural networks and evaluates its performance on a handwritten numeral recognition test. Each of th...