TRECVID
2008

Semantic Video Annotation using Background Knowledge and Similarity-based Video Retrieval

We describe our experiments for the High-level Feature Extraction (FE) and Search (SE) tasks. We submitted two automatic runs to the FE task: the first (MMIS alexei) was based on a probabilistic approach, while the second (MMIS ainhoa) was an enhanced version that used background knowledge in the form of statistical co-occurrence of annotation keywords. While previous applications of this approach to other datasets performed quite well, our TRECVID 2008 results were weaker; in particular, the performance of the second run was limited by the small vocabulary. For the SE task we submitted two runs: a similarity-based media search (MMIS media) and the required text-only search (MMIS text). The similarity search, which used media content, achieved better precision than the text-only search but struggled with some types of queries (e.g., motion-based). Overall, participation in the TRECVID evaluation was a valuable learning experience for our group.
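The abstract mentions using background knowledge in the form of statistical co-occurrence of annotation keywords to enhance a probabilistic annotation run. A minimal sketch of that general idea is below; it is an illustration only, not the paper's actual model, and the function names, the blending weight `alpha`, and the example data are all assumptions: co-occurrence counts are estimated from training annotations and then used to boost candidate keywords that are supported by the other candidates for the same shot.

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence_counts(annotations):
    """Count how often each ordered pair of keywords co-occurs
    in a training set of per-image/per-shot keyword lists."""
    counts = defaultdict(int)
    for keywords in annotations:
        for a, b in combinations(sorted(set(keywords)), 2):
            counts[(a, b)] += 1
            counts[(b, a)] += 1
    return counts

def rerank(scores, counts, alpha=0.5):
    """Blend each keyword's base probability with normalized
    co-occurrence support from the other candidate keywords.
    (Illustrative scheme, not the paper's exact formulation.)"""
    total = sum(counts.values()) or 1
    reranked = {}
    for kw, p in scores.items():
        support = sum(counts.get((kw, other), 0)
                      for other in scores if other != kw)
        reranked[kw] = (1 - alpha) * p + alpha * (support / total)
    return reranked

# Hypothetical example: "cloud" is boosted above "car" because it
# frequently co-occurs with the strong candidate "sky".
training = [["sky", "cloud"], ["sky", "cloud", "sea"], ["road", "car"]]
counts = cooccurrence_counts(training)
base = {"sky": 0.6, "cloud": 0.3, "car": 0.2}
print(rerank(base, counts))
```

With a small keyword vocabulary, as the abstract notes, such co-occurrence statistics carry little signal, since most keyword pairs are either trivially frequent or never observed.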
Added 30 Oct 2010
Updated 30 Oct 2010
Type Conference
Year 2008
Where TRECVID
Authors Ainhoa Llorente, Srdan Zagorac, Suzanne Little, Rui Hu, Stefan M. Rüger, Anuj Kumar, Suhail Shaik, Xiang Ma