
TCSV
2011

Concept-Driven Multi-Modality Fusion for Video Search

Human perception naturally gathers information from multiple sources in multi-modal form, and learning from multiple modalities has likewise become an effective scheme for a range of information retrieval problems. In this paper, we propose a novel multi-modality fusion approach for video search, where the search modalities are derived from a diverse set of knowledge sources, such as text transcripts from speech recognition, low-level visual features from video frames, and high-level semantic visual concepts from supervised learning. Since the effectiveness of each search modality depends strongly on the specific user query, promptly determining the importance of a modality to a given query is a critical issue in multi-modality search. Our proposed approach, named concept-driven multi-modality fusion (CDMF), explores a large set of predefined semantic concepts for computing multi-modality fusion weights in a novel way. Specifically, in CDMF, we decompose the querymodalit...
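The idea of query-dependent fusion described above can be illustrated with a minimal sketch. This is not the paper's actual CDMF algorithm (the abstract is truncated before the decomposition is specified); it only shows the general pattern of deriving per-modality fusion weights from a query's relatedness to a set of semantic concepts, then linearly combining modality scores. All function names, the `concept_modality_reliability` matrix, and the similarity inputs are illustrative assumptions.

```python
# Illustrative sketch of concept-driven fusion weighting (assumed interface,
# not the published CDMF method). Given a query's similarity to C semantic
# concepts and a matrix describing how reliably each of M modalities handles
# each concept, derive normalized fusion weights and fuse modality scores.
import numpy as np

def fusion_weights(query_concept_sim, concept_modality_reliability):
    """query_concept_sim: shape (C,), similarity of the query to C concepts.
    concept_modality_reliability: shape (C, M), per-concept reliability of
    each of M modalities (assumed known, e.g. estimated from training data).
    Returns shape (M,) weights that sum to 1."""
    raw = query_concept_sim @ concept_modality_reliability  # aggregate over concepts -> (M,)
    raw = np.clip(raw, 0.0, None)                           # keep weights non-negative
    return raw / raw.sum()

def fuse(modality_scores, weights):
    """modality_scores: shape (M, N), relevance scores of N videos per modality.
    Returns shape (N,) fused relevance scores (weighted linear combination)."""
    return weights @ modality_scores

# Toy usage: a query strongly related to concept 0, which modality 0 handles well.
sim = np.array([1.0, 0.0])                      # query-to-concept similarity
rel = np.array([[0.9, 0.1],                     # concept 0: modality 0 reliable
                [0.2, 0.8]])                    # concept 1: modality 1 reliable
w = fusion_weights(sim, rel)                    # modality 0 dominates the fusion
scores = np.array([[0.5, 0.2, 0.9],             # modality 0 scores for 3 videos
                   [0.1, 0.7, 0.3]])            # modality 1 scores for 3 videos
fused = fuse(scores, w)
```

The design choice here is the standard one for query-dependent linear fusion: the weights vary per query (through the concept similarities) while the per-modality reliabilities are fixed offline, so the fusion step itself stays a cheap matrix-vector product at search time.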
Xiao-Yong Wei, Yu-Gang Jiang, Chong-Wah Ngo
Added 15 May 2011
Updated 15 May 2011
Type Journal
Year 2011
Where TCSV