Abstract. We propose a framework that learns functional objects from spatio-temporal data sets such as those abstracted from video. The data is represented as one activity graph t...
Muralikrishna Sridhar, Anthony G. Cohn, David C. H...
This work proposes a graph-mining-based approach to mining a taxonomy of events from activities in complex videos, which are represented in terms of qualitative spatio-temporal relat...
Muralikrishna Sridhar, Anthony G. Cohn, David C. H...
We present a method for unsupervised learning of event classes from videos in which multiple actions might occur simultaneously. It is assumed that all such activities are produce...
Muralikrishna Sridhar, Anthony G. Cohn, David C. H...
We present a novel representation and method for detecting and explaining anomalous activities in a video stream. Drawing from natural language processing, we introduce a represen...
Raffay Hamid, Amos Y. Johnson, Samir Batta, Aaron ...
For video summarization and retrieval, an important module is grouping temporally and spatially coherent shots into high-level semantic video clips, namely scene segmentation. In ...
Yanjun Zhao, Tao Wang, Peng Wang, Wei Hu, Yangzhou...