ICIP 2009 (IEEE)

Combining Multimodal And Temporal Contextual Information For Semantic Video Analysis

In this paper, a graphical modeling-based approach to semantic video analysis is presented that jointly realizes modality fusion and temporal context exploitation. The examined video sequence is initially segmented into shots, and for every resulting shot appropriate color, motion and audio features are extracted. Then, Hidden Markov Models (HMMs) are employed to perform an initial association of each shot with the semantic classes of interest, separately for every modality. Subsequently, an integrated Bayesian Network (BN) is introduced that simultaneously performs information fusion and temporal contextual knowledge exploitation, contrary to the usual practice of performing each task separately. The final outcome of the overall video analysis approach is the association of a semantic class with every shot. Experimental results and a comparative evaluation of the proposed approach in the domain of news broadcast video are presented.
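The fusion step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual model: the class names, per-modality likelihoods, and transition matrix are all hypothetical, and where the paper performs inference in an integrated BN over the whole sequence, this sketch uses a simple greedy, shot-by-shot filtering that multiplies the per-modality HMM likelihoods with a temporal transition prior.

```python
# Hypothetical semantic classes for a news broadcast domain.
CLASSES = ["anchor", "report", "weather"]

# Hypothetical temporal prior P(class_t | class_{t-1}) linking
# consecutive shots (the "temporal context" part of the model).
TRANSITION = {
    "anchor":  {"anchor": 0.5, "report": 0.4, "weather": 0.1},
    "report":  {"anchor": 0.3, "report": 0.6, "weather": 0.1},
    "weather": {"anchor": 0.4, "report": 0.2, "weather": 0.4},
}

# Uniform prior over classes for the first shot.
INITIAL = {c: 1.0 / len(CLASSES) for c in CLASSES}


def classify(shots):
    """Assign one semantic class per shot.

    Each shot is a dict mapping a modality name (color, motion, audio)
    to that modality's HMM likelihoods P(features_m | class).  The
    fused score for class c is the product of the temporal prior and
    all per-modality likelihoods, mimicking BN-style fusion.
    """
    labels = []
    prev = None
    for shot in shots:
        scores = {}
        for c in CLASSES:
            p = INITIAL[c] if prev is None else TRANSITION[prev][c]
            for modality_likelihoods in shot.values():
                p *= modality_likelihoods[c]  # fuse modalities
            scores[c] = p
        prev = max(scores, key=scores.get)  # greedy per-shot decision
        labels.append(prev)
    return labels


# Illustrative three-shot sequence with made-up HMM likelihoods:
# shot 1 clearly "anchor", shot 2 clearly "report", shot 3 ambiguous
# (so the temporal prior from shot 2 tips it toward "report").
shots = [
    {"color":  {"anchor": 0.7, "report": 0.2, "weather": 0.1},
     "motion": {"anchor": 0.6, "report": 0.3, "weather": 0.1},
     "audio":  {"anchor": 0.5, "report": 0.4, "weather": 0.1}},
    {"color":  {"anchor": 0.2, "report": 0.7, "weather": 0.1},
     "motion": {"anchor": 0.2, "report": 0.6, "weather": 0.2},
     "audio":  {"anchor": 0.3, "report": 0.5, "weather": 0.2}},
    {"color":  {"anchor": 0.4, "report": 0.4, "weather": 0.2},
     "motion": {"anchor": 0.35, "report": 0.45, "weather": 0.2},
     "audio":  {"anchor": 0.4, "report": 0.4, "weather": 0.2}},
]

print(classify(shots))  # → ['anchor', 'report', 'report']
```

The greedy filtering above commits to a class per shot as it goes; exact joint inference over the sequence (as in the paper's integrated BN) would instead maximize over all label sequences, e.g. with a Viterbi-style pass.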
Added: 10 Nov 2009
Updated: 21 Dec 2009
Type: Conference
Year: 2009
Where: ICIP