ICMCS 2005, IEEE
Speech-Based Visual Concept Learning Using Wordnet

Modeling visual concepts using supervised or unsupervised machine learning approaches is becoming increasingly important for video semantic indexing, retrieval, and filtering applications. Videos naturally include multimodal data such as audio, speech, visual content, and text, which are combined to infer the overall semantic concepts. In the literature, however, most research has been conducted within a single domain. In this paper we propose an unsupervised technique that builds context-independent keyword lists for modeling desired visual concepts using WordNet. Furthermore, we propose an Extended Speech-based Visual Concept (ESVC) model that reorders and extends these keyword lists by supervised learning based on multimodal annotation. Experimental results show that the context-independent models achieve performance comparable to conventional supervised learning algorithms, and that the ESVC model achieves about 53% and 28.4% improvement on two testing subsets of ...
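The keyword-list construction described above can be sketched roughly as follows. This is an illustrative Python sketch, not the paper's implementation: a toy synonym/hyponym table (hypothetical data) stands in for the full WordNet database, and seed terms for a visual concept are expanded into a context-independent keyword list.

```python
# Toy stand-in for WordNet relations (hypothetical data, for illustration
# only); real use would query WordNet synsets for synonyms and hyponyms.
TAXONOMY = {
    "water_body": ["ocean", "sea", "lake"],
    "shore": ["beach", "coast", "sand"],
}
SYNONYMS = {
    "beach": ["seashore", "strand"],
    "ocean": ["sea"],
}

def build_keyword_list(concept_terms):
    """Expand seed terms for a visual concept into a context-independent
    keyword list by collecting their synonyms and hyponyms."""
    keywords = set()
    for term in concept_terms:
        keywords.add(term)
        keywords.update(SYNONYMS.get(term, []))
        keywords.update(TAXONOMY.get(term, []))
    return sorted(keywords)

print(build_keyword_list(["shore", "beach"]))
# → ['beach', 'coast', 'sand', 'seashore', 'shore', 'strand']
```

The resulting list could then be matched against speech transcripts to detect the concept; the ESVC step in the paper would further reorder and extend such a list using supervised multimodal annotation.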
Type: Conference
Year: 2005
Where: ICMCS
Authors: Xiaodan Song, Ching-Yung Lin, Ming-Ting Sun