CVPR 2011, IEEE

Extracting and Locating Temporal Motifs in Video Scenes Using a Hierarchical Non Parametric Bayesian Model

In this paper, we present an unsupervised method for mining activities in videos. From unlabeled video sequences of a scene, our method automatically recovers the recurrent temporal activity patterns (or motifs) and when they occur. Using nonparametric Bayesian methods, we automatically infer both the underlying number of motifs and the number of motif occurrences in each document. The model's robustness is first validated on synthetic data. It is then applied to a large set of video data from state-of-the-art papers. We show that it effectively recovers temporal activities that are semantically meaningful to humans and carry strong temporal information. The model is also used for prediction, where it is shown to be as effective as other approaches. Although illustrated on video sequences, the model can be directly applied to various kinds of time series where multiple activities occur simultaneously.
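To make the abstract's setup concrete, the following is a minimal synthetic-data sketch of the generative idea behind temporal motif models: each motif is a distribution over (word, relative-time) pairs, and a document is generated by placing motif occurrences at start times and sampling observations from them. All dimensions and names here (`N_WORDS`, `MOTIF_LEN`, etc.) are illustrative assumptions, not values from the paper, and the motif count is fixed rather than inferred nonparametrically.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): vocabulary of visual
# "words", motif duration in time steps, document length, motif count.
N_WORDS, MOTIF_LEN, DOC_LEN, N_MOTIFS = 20, 5, 100, 3

# Each motif is a distribution over (word, relative time) pairs.
motifs = rng.dirichlet(np.ones(N_WORDS * MOTIF_LEN), size=N_MOTIFS)
motifs = motifs.reshape(N_MOTIFS, N_WORDS, MOTIF_LEN)

def generate_document(n_obs=500):
    """Sample a synthetic temporal document: each observation is a
    (word, absolute time) pair produced by some motif occurrence."""
    # Each occurrence pairs a motif index with a start time such that
    # the whole motif fits inside the document.
    occurrences = [(rng.integers(N_MOTIFS),
                    rng.integers(DOC_LEN - MOTIF_LEN))
                   for _ in range(rng.integers(3, 8))]
    doc = []
    for _ in range(n_obs):
        k, start = occurrences[rng.integers(len(occurrences))]
        # Sample a (word, relative time) pair from the chosen motif.
        flat = motifs[k].ravel()
        idx = rng.choice(flat.size, p=flat)
        word, rel_t = divmod(idx, MOTIF_LEN)
        doc.append((word, start + rel_t))
    return doc, occurrences

doc, occ = generate_document()
```

Inference in the paper works in the opposite direction: given only `doc`, recover the motifs, their number, and the occurrence list, which this sketch leaves out.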
Rémi Emonet, Jagannadan Varadarajan, Jean-Marc Odobez
Added 08 Apr 2011
Updated 29 Apr 2011
Type Conference
Year 2011
Where CVPR
Authors Rémi Emonet, Jagannadan Varadarajan, Jean-Marc Odobez