Learning User Queries in Multimodal Dissimilarity Spaces

Abstract. Different strategies for learning user semantic queries from dissimilarity representations of audio-visual video content are presented. When dealing with large corpora of video documents, a feature representation requires the online computation of distances between all documents and a query. A dissimilarity representation may therefore be preferred, because its offline computation speeds up the retrieval process. We show how distances related to visual and audio video features can be used directly to learn complex concepts from a set of positive and negative examples provided by the user. Based on the idea of dissimilarity spaces, we derive three algorithms to fuse modalities and thereby enhance the precision of retrieval results. The evaluation of our technique is performed on artificial data and on the complete annotated TRECVID corpus.
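The paper derives three specific fusion algorithms; the sketch below only illustrates the general dissimilarity-space idea the abstract describes, under assumptions of our own: each document is represented by its precomputed distances to the user-labeled examples, the visual and audio blocks are concatenated as one simple fusion strategy, and a linear SVM (our choice, not necessarily the authors') is trained on the positive/negative feedback. All data, variable names, and parameters are placeholders.

```python
# Minimal sketch of query learning in a multimodal dissimilarity space.
# Assumes pairwise distance matrices were computed offline per modality.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_docs = 200

def random_distance_matrix(n):
    # Stand-in for an offline-computed distance matrix over the corpus.
    d = np.abs(rng.normal(size=(n, n)))
    d = (d + d.T) / 2
    np.fill_diagonal(d, 0.0)
    return d

D_visual = random_distance_matrix(n_docs)  # distances from visual features
D_audio = random_distance_matrix(n_docs)   # distances from audio features

# User feedback: indices of positive and negative example documents.
pos, neg = [3, 17, 42, 85], [9, 60, 101, 150, 180]
examples = pos + neg
labels = np.array([1] * len(pos) + [0] * len(neg))

# Dissimilarity-space representation: each document becomes the vector of its
# distances to the labeled examples; concatenating the per-modality blocks is
# one straightforward way to fuse modalities.
X_all = np.hstack([D_visual[:, examples], D_audio[:, examples]])

# Learn the query concept from the user's positive/negative examples.
clf = SVC(kernel="linear").fit(X_all[examples], labels)

# Rank the whole corpus by relevance to the learned query.
scores = clf.decision_function(X_all)
ranking = np.argsort(-scores)
print("Top retrieved documents:", ranking[:10])
```

Because the dissimilarity matrices are computed offline, only the small classifier above has to be trained online per query, which is the speed-up the abstract points to.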
Type Conference
Year 2005
Where AMR
Publisher Springer
Authors Eric Bruno, Nicolas Moënne-Loccoz, Stéphane Marchand-Maillet