Sciweavers

1601 search results - page 30 / 321
» Learning Gaussian processes from multiple tasks
EMNLP
2009
Active Learning by Labeling Features
Methods that learn from prior information about input features such as generalized expectation (GE) have been used to train accurate models with very little effort. In this paper,...
Gregory Druck, Burr Settles, Andrew McCallum
ICML
2008
IEEE
A unified architecture for natural language processing: deep neural networks with multitask learning
We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity...
Ronan Collobert, Jason Weston
PAMI
2008
Simplifying Mixture Models Using the Unscented Transform
The Mixture of Gaussians (MoG) model is a useful tool in statistical learning. In many learning processes based on mixture models, computational requirements are very demanding...
Jacob Goldberger, Hayit Greenspan, Jeremie Dreyfus...
ECAI
2004
Springer
Combining Multiple Answers for Learning Mathematical Structures from Visual Observation
Learning general truths from the observation of simple domains and, further, learning how to use this knowledge are essential capabilities for any intelligent agent to understand ...
Paulo Santos, Derek R. Magee, Anthony G. Cohn, Dav...
ICIAP
1999
ACM
Learning Visual Operators from Examples: A New Paradigm in Image Processing
This paper presents a general strategy for designing efficient visual operators. The approach is highly task-oriented, and what constitutes the relevant information is defined by...
Hans Knutsson, Magnus Borga