NIPS 1994

From Data Distributions to Regularization in Invariant Learning

Ideally, pattern recognition machines provide constant output when the inputs are transformed under a group G of desired invariances. These invariances can be achieved by enhancing the training data to include examples of inputs transformed by elements of G, while leaving the corresponding targets unchanged. Alternatively, the cost function for training can include a regularization term that penalizes changes in the output when the input is transformed under the group. This paper relates the two approaches, showing precisely the sense in which the regularized cost function approximates the result of adding transformed (or distorted) examples to the training data. The cost function for the enhanced training set is equivalent to the sum of the original cost function plus a regularizer. For unbiased models, the regularizer reduces to the intuitively obvious choice: a term that penalizes changes in the output when the inputs are transformed under the group. For infinitesimal transformations...
Todd K. Leen
Type Conference
Year 1994
Where NIPS
Authors Todd K. Leen
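
The abstract's central claim, that training on group-transformed data is (to leading order) the same as training on the original data with a penalty on output changes under the transform, can be checked numerically. The sketch below is a minimal illustration, not the paper's construction: it assumes a linear model with squared-error loss and a one-parameter rotation-like transform x → x + εt(x) with zero-mean ε, so the augmented-data cost should match the original cost plus a σ²-weighted penalty on the output's sensitivity to the transform.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumed, for illustration): linear model f(x) = w @ x,
# squared-error loss on a single training pair (x, y).
w = np.array([1.5, -0.7])
x = np.array([0.3, 2.0])
y = 1.0

# Infinitesimal transform direction t(x): here the rotation generator
# applied to x, so x + eps*t is x rotated by a small angle eps.
t = np.array([-x[1], x[0]])

sigma = 0.05  # spread of the zero-mean transformation parameter
eps = rng.normal(0.0, sigma, size=200_000)

# Cost averaged over the augmented (transformed) training examples.
aug_cost = np.mean((w @ (x[:, None] + np.outer(t, eps)) - y) ** 2)

# Original cost plus the regularizer sigma^2 * (d f / d eps)^2,
# which for this linear model is sigma^2 * (w @ t)^2.
reg_cost = (w @ x - y) ** 2 + sigma ** 2 * (w @ t) ** 2

print(aug_cost, reg_cost)  # the two costs agree up to Monte Carlo noise
```

Expanding the loss in ε makes the agreement exact in expectation: the cross term vanishes because ε has zero mean, leaving the original cost plus E[ε²](w·t)², which is precisely a penalty on output change under the transform, as the abstract describes for unbiased models.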