Worst-Case Absolute Loss Bounds for Linear Learning Algorithms

The absolute loss is the absolute difference between the desired and predicted outcome. I demonstrate worst-case upper bounds on the absolute loss for the perceptron algorithm and an exponentiated update algorithm related to the Weighted Majority algorithm. The bounds characterize the behavior of the algorithms over any sequence of trials, where each trial consists of an example and a desired outcome interval (any value in the interval is an acceptable outcome). The worst-case absolute loss of both algorithms is bounded by: the absolute loss of the best linear function in the comparison class, plus a constant dependent on the initial weight vector, plus a per-trial loss. The per-trial loss can be eliminated if the learning algorithm is allowed a tolerance from the desired outcome. For concept learning, the worst-case bounds lead to mistake bounds that are comparable to previous results.
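The abstract contrasts an additive (perceptron-style) update with a multiplicative (exponentiated) update under absolute loss against an outcome interval. The sketch below is an illustrative reconstruction, not the paper's exact formulation: the learning rate `eta`, the sign-based update direction, and the renormalization step are assumptions chosen to show the general shape of the two rules.

```python
import numpy as np

def interval_loss(yhat, lo, hi):
    """Absolute loss against an outcome interval [lo, hi]:
    zero inside the interval, distance to the nearest endpoint outside."""
    if yhat < lo:
        return lo - yhat
    if yhat > hi:
        return yhat - hi
    return 0.0

def perceptron_trial(w, x, lo, hi, eta=0.1):
    """One trial with an additive (perceptron-style) update.
    Illustrative form: move the weights toward the violated endpoint."""
    yhat = float(np.dot(w, x))
    loss = interval_loss(yhat, lo, hi)
    if loss > 0.0:
        target = lo if yhat < lo else hi
        w = w + eta * np.sign(target - yhat) * x
    return w, loss

def exponentiated_trial(w, x, lo, hi, eta=0.1):
    """One trial with a multiplicative (exponentiated) update,
    Weighted Majority style: scale each weight exponentially in its
    input, then renormalize so the weights stay a distribution."""
    yhat = float(np.dot(w, x))
    loss = interval_loss(yhat, lo, hi)
    if loss > 0.0:
        target = lo if yhat < lo else hi
        w = w * np.exp(eta * np.sign(target - yhat) * x)
        w = w / w.sum()
    return w, loss
```

The interval-valued outcome is what allows the per-trial loss term in the bounds to vanish: widening `[lo, hi]` by a tolerance means small prediction errors incur no loss and trigger no update.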
Tom Bylander
Added 01 Nov 2010
Updated 01 Nov 2010
Type Conference
Year 1997
Where AAAI
Authors Tom Bylander