
The Inefficiency of Batch Training for Large Training Sets

Multilayer perceptrons are often trained using error backpropagation (BP). BP training can be done in either a batch or a continuous (on-line) manner. Claims have frequently been made that batch training is faster and/or more "correct" than continuous training because it uses a better approximation of the true gradient for its weight updates. These claims are often supported by empirical evidence on very small data sets, but they do not hold for large training sets. This paper explains why batch training is much slower than continuous training on large training sets. Experiments with various levels of semi-batch training on a 20,000-instance speech recognition task show that the required training time increases roughly linearly with batch size.
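The distinction the abstract draws can be made concrete with a toy sketch. The following is a minimal, hypothetical illustration (not the authors' code, and a linear least-squares model rather than their MLP) of the two update schedules: batch training applies one weight update per pass over the data using the full gradient, while continuous (on-line) training applies one update per instance. All names and learning rates below are illustrative assumptions.

```python
# Minimal sketch (assumed toy setup, not the paper's task): contrast
# one batch update per epoch vs. one on-line update per instance.
import numpy as np

rng = np.random.default_rng(0)
n, d = 20000, 10                    # 20,000 instances, echoing the paper's task size
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)

def batch_epoch(w, lr=0.5):
    """Batch training: a single weight update per epoch, using the full gradient."""
    grad = X.T @ (X @ w - y) / n
    return w - lr * grad

def continuous_epoch(w, lr=0.01):
    """Continuous (on-line) training: one weight update per training instance."""
    for xi, yi in zip(X, y):
        w = w - lr * (xi @ w - yi) * xi
    return w

w_b = np.zeros(d)
w_c = np.zeros(d)
for epoch in range(5):
    w_b = batch_epoch(w_b)
    w_c = continuous_epoch(w_c)
    print(f"epoch {epoch}: batch err {np.linalg.norm(w_b - w_true):.4f}, "
          f"continuous err {np.linalg.norm(w_c - w_true):.4f}")
```

With learning rates in a stable range, the on-line schedule performs n updates per epoch versus the batch schedule's single update, which is the asymmetry the paper identifies: as the training set grows, batch training spends ever more computation per weight update.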
D. Randall Wilson, Tony R. Martinez
Added: 31 Jul 2010
Updated: 31 Jul 2010
Type: Conference
Year: 2000
Where: IJCNN (IEEE)