The majority of work described in this paper was conducted as part of the Recovering Evidence from Video by fusing Video Evidence Thesaurus and Video MetaData (REVEAL) project, sp...
ion when we annotate content. This therefore requires us to investigate and model video semantics. Because of the type and volume of data, general-purpose approaches are likely to ...
Automatic semantic annotation of sports video requires that domain knowledge be properly incorporated and exploited in the annotation process, and that low- and intermediate-level f...
This paper presents a corpus of annotated motion events and their event structure. We consider motion events triggered by a set of motion-evoking words and contemplate both litera...
Kirk Roberts, Srikanth Gullapalli, Cosmin Adrian B...
Abstract: This paper presents a novel method for automatically classifying consumer video clips based on their soundtracks. We use a set of 25 overlapping semantic classes, chosen ...
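The soundtrack-based setup above, with overlapping semantic classes, amounts to multi-label classification: each clip may carry several labels at once. The sketch below illustrates that structure only; it is not the paper's method. The class names, the one-classifier-per-label scheme, and the synthetic stand-in for real audio features (e.g. MFCCs) are all assumptions for illustration.

```python
# Hedged sketch of multi-label audio tagging: one binary classifier per
# semantic class, so classes may overlap on a single clip.
# NOTE: real soundtrack features (e.g. MFCCs) are replaced by synthetic
# vectors, and the class names are illustrative, not the paper's 25 classes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(0)
classes = ["music", "crowd", "speech"]       # illustrative subset of labels
n_clips, n_feats = 200, 20

X = rng.normal(size=(n_clips, n_feats))      # stand-in for per-clip audio features
# Each label fires when one feature dimension is high, so labels overlap.
Y = np.stack([(X[:, k] > 0).astype(int) for k in range(len(classes))], axis=1)

# One-vs-rest trains an independent binary classifier per class.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
pred = clf.predict(X[:5])                    # one 0/1 indicator vector per clip
print(pred.shape)                            # (5, 3): clips x classes
```

Because each class gets its own classifier, a clip can be tagged with any subset of the labels, which is what "overlapping semantic classes" requires.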