
COLT 2005, Springer

Variations on U-Shaped Learning

The paper deals with the following problem: is returning to wrong conjectures necessary to achieve the full power of algorithmic learning? Returning to wrong conjectures complements the paradigm of U-shaped learning [3,7,9,24,29], in which a learner returns to old correct conjectures. We explore this problem for classical models of learning in the limit from positive data: explanatory learning (where a learner stabilizes in the limit on a single correct grammar) and behaviourally correct learning (where a learner stabilizes in the limit on a sequence of correct grammars representing the target concept). In both cases we show that returning to wrong conjectures is necessary to achieve full learning power. In contrast, one can modify learners (without losing learning power) so that they never exhibit inverted U-shaped learning behaviour, that is, never return to an old wrong conjecture with a correct conjecture in between. Furthermore, one can also modify a learner (without losing learning power) such that it...
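
To make the behaviours named in the abstract concrete, here is a minimal, hypothetical Python sketch (not taken from the paper) that checks a finite prefix of a learner's conjecture sequence for U-shaped behaviour, for returning to wrong conjectures, and for inverted U-shaped behaviour. The function names and the boolean correctness predicate are my own simplifications; in the paper, correctness means that the conjectured grammar generates the target language.

# Hypothetical illustration only: classify a finite prefix of a learner's
# conjecture sequence according to the behaviours discussed in the abstract.
from typing import Callable, Hashable, Sequence


def is_u_shaped(conjectures: Sequence[Hashable],
                correct: Callable[[Hashable], bool]) -> bool:
    # U-shaped: a correct conjecture, later a wrong one, later correct again.
    seen_correct = False
    dipped_to_wrong = False
    for h in conjectures:
        if correct(h):
            if seen_correct and dipped_to_wrong:
                return True
            seen_correct = True
        elif seen_correct:
            dipped_to_wrong = True
    return False


def returns_to_wrong(conjectures: Sequence[Hashable],
                     correct: Callable[[Hashable], bool]) -> bool:
    # Returning to wrong conjectures: a wrong conjecture is abandoned
    # (replaced by a different one) and output again later.
    abandoned_wrong = set()
    prev = None
    for h in conjectures:
        if h in abandoned_wrong:
            return True
        if prev is not None and prev != h and not correct(prev):
            abandoned_wrong.add(prev)
        prev = h
    return False


def is_inverted_u_shaped(conjectures: Sequence[Hashable],
                         correct: Callable[[Hashable], bool]) -> bool:
    # Inverted U-shaped: an old wrong conjecture is repeated with a
    # correct conjecture output somewhere in between.
    wrong_seen = set()
    wrong_before_a_correct = set()
    for h in conjectures:
        if correct(h):
            wrong_before_a_correct |= wrong_seen
        elif h in wrong_before_a_correct:
            return True
        else:
            wrong_seen.add(h)
    return False

For example, with the (made-up) labels 'G*' for a correct grammar and 'H' for a wrong one and correct = lambda h: h == 'G*', the sequence ['G*', 'H', 'G*'] is U-shaped, while ['H', 'G*', 'H'] both returns to the wrong conjecture 'H' and is inverted U-shaped.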
Type: Conference
Year: 2005
Where: COLT
Authors: Lorenzo Carlucci, Sanjay Jain, Efim B. Kinber, Frank Stephan