Content-based image search on the Internet is a challenging problem, mostly due to the semantic gap between low-level visual features and high-level content, as well as the excess...
In this paper we present the common effort of Lear and XRCE for the ImageCLEF Visual Concept Detection and Annotation Task. We first sought to combine our individual state-of-the-a...
Thomas Mensink, Gabriela Csurka, Florent Perronnin...
Abstract. In this paper, we describe an approach for automatic modality classification in the medical image retrieval task of the 2010 CLEF cross-language image retrieval campaign ...
Our experiments in TRECVID 2007 include participation in the high-level feature extraction, search, and video summarization tasks, using a common system framework based on multipl...
This paper describes our first participation in TRECVID. We took part in the search task and submitted two interactive runs. Both of them are of Type c, and use no ASR/MT output ...
We propose an unsupervised approach to learn associations between continuous-valued attributes from different modalities. These associations are used to construct a multi-modal t...
Content-based image suggestion (CBIS) targets the recommendation of products based on user preferences on the visual content of images. In this paper, we motivate both feature sel...
In this paper, we propose a novel graph embedding method for the problem of lipreading. To characterize the temporal connections among video frames of the same utterance, a new di...
Searching for images by using low-level visual features, such as color and texture, is known to be a powerful, yet imprecise, retrieval paradigm. The same is true if search relies...
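The power and the imprecision of searching by low-level color features can both be seen in a minimal sketch (a generic illustration with toy data, not the system described in any of the abstracts above): each image is reduced to a coarse color histogram, and database images are ranked by histogram intersection with the query.

```python
# Generic content-based retrieval sketch using a low-level color feature.
# Each "image" is a flat list of (r, g, b) pixel tuples; colors are
# quantized into a coarse joint histogram, and similarity is measured
# by histogram intersection.

def color_histogram(pixels, bins_per_channel=4):
    """Quantize each channel into bins_per_channel bins and count pixels."""
    hist = [0] * (bins_per_channel ** 3)
    step = 256 // bins_per_channel
    for r, g, b in pixels:
        idx = ((r // step) * bins_per_channel + (g // step)) * bins_per_channel + (b // step)
        hist[idx] += 1
    total = len(pixels)
    return [h / total for h in hist]  # normalize so differently sized images compare

def intersection(h1, h2):
    """Histogram intersection: 1.0 means identical color distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

# Toy ranking: a mostly red query matches a red database image far better
# than a blue one -- but note the feature says nothing about *what* is red.
query    = [(250, 10, 10)] * 90 + [(10, 10, 250)] * 10
red_img  = [(240, 20, 20)] * 100
blue_img = [(20, 20, 240)] * 100
q = color_histogram(query)
scores = {name: intersection(q, color_histogram(img))
          for name, img in [("red", red_img), ("blue", blue_img)]}
```

The ranking is sensible, yet the feature captures color distribution only: a red car and a red sunset score identically, which is exactly the imprecision (the semantic gap) that motivates combining such features with text or annotations.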
Abstract. The aim of this paper is to introduce a novel, biologically inspired approach to extract visual features relevant for controlling and understanding reach-to-grasp actions....