ECML 2007, Springer

Learning from Relevant Tasks Only

We extend our recent work on relevant subtask learning, a new variant of multi-task learning in which the goal is to learn a good classifier for a task-of-interest that has too few training samples, by exploiting “supplementary data” from several other tasks. It is crucial to model the uncertainty about which of the supplementary samples are relevant for the task-of-interest, that is, which samples are classified in the same way as in the task-of-interest. We have shown that the problem can be solved by careful mixture modeling: each task is modeled as a mixture of relevant and irrelevant samples, and the model for irrelevant samples is flexible enough that the relevant model only needs to explain the relevant data. Previously we used simple maximum likelihood learning; here we extend the method to variational Bayes inference, which is better suited to high-dimensional data. We compare the method experimentally to a recent multi-task learning method and to two naive baselines.
Samuel Kaski, Jaakko Peltonen
Type: Conference
Year: 2007
Where: ECML
Authors: Samuel Kaski, Jaakko Peltonen
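To make the mixture-modeling idea in the abstract concrete, here is a minimal NumPy sketch of the earlier maximum-likelihood (EM) variant the abstract mentions, not the paper's variational-Bayes method and not the authors' implementation. All modeling choices and names (relevant_subtask_em, fit_logreg, w_rel, w_irr, pi) are illustrative assumptions: the shared "relevant" model is taken to be a logistic-regression classifier, each supplementary task gets its own equally flexible "irrelevant" logistic-regression model, and pi[t] is a per-task prior probability that a sample is relevant.

import numpy as np

def sigmoid(z):
    # Clip to avoid overflow in exp for extreme logits.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def fit_logreg(X, y, sample_weight, n_steps=200, lr=0.1):
    # Weighted logistic regression by gradient ascent on the log-likelihood.
    w = np.zeros(X.shape[1])
    for _ in range(n_steps):
        p = sigmoid(X @ w)
        grad = X.T @ (sample_weight * (y - p)) / len(y)
        w += lr * grad
    return w

def loglik(X, y, w):
    # Per-sample Bernoulli log-likelihood under a logistic model.
    p = np.clip(sigmoid(X @ w), 1e-9, 1 - 1e-9)
    return y * np.log(p) + (1 - y) * np.log(1 - p)

def relevant_subtask_em(X0, y0, supp, n_iters=30):
    # X0, y0: task-of-interest data (treated as fully relevant).
    # supp:   list of (X_t, y_t) supplementary tasks, each a mixture of
    #         samples labeled as in the task-of-interest ("relevant")
    #         and samples labeled differently ("irrelevant").
    d = X0.shape[1]
    w_rel = np.zeros(d)                      # shared relevant classifier
    w_irr = [np.zeros(d) for _ in supp]      # per-task irrelevant models
    pi = [0.5 for _ in supp]                 # per-task relevance priors
    for _ in range(n_iters):
        # E-step: responsibility r = P(relevant | x, y) for each
        # supplementary sample, from the two components' likelihoods.
        resp = []
        for t, (Xt, yt) in enumerate(supp):
            lp_rel = np.log(pi[t]) + loglik(Xt, yt, w_rel)
            lp_irr = np.log(1 - pi[t]) + loglik(Xt, yt, w_irr[t])
            r = 1.0 / (1.0 + np.exp(np.clip(lp_irr - lp_rel, -30, 30)))
            resp.append(r)
        # M-step: refit the shared model on the task-of-interest data
        # (weight 1) plus supplementary data weighted by its relevance.
        X_all = np.vstack([X0] + [Xt for Xt, _ in supp])
        y_all = np.concatenate([y0] + [yt for _, yt in supp])
        w_all = np.concatenate([np.ones(len(y0))] + resp)
        w_rel = fit_logreg(X_all, y_all, w_all)
        # Refit each irrelevant model with weight 1 - r, so it "explains
        # away" the samples the shared model should ignore.
        for t, (Xt, yt) in enumerate(supp):
            w_irr[t] = fit_logreg(Xt, yt, 1.0 - resp[t])
            pi[t] = float(np.clip(resp[t].mean(), 1e-3, 1 - 1e-3))
    return w_rel, pi

On synthetic data one would expect pi[t] to drift toward 1 for supplementary tasks whose labels agree with the task-of-interest and toward 0 for unrelated ones. The variational-Bayes extension described in the abstract would replace the point estimates w_rel and w_irr with posterior distributions, which this sketch does not attempt.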