CVPR 2011, IEEE

Fast Unsupervised Ego-Action Learning for First-person Sports Videos

Portable high-quality sports cameras (e.g. head- or helmet-mounted) built for recording dynamic first-person video footage are becoming a common item among many sports enthusiasts. We address the novel task of discovering first-person action categories (which we call ego-actions), which can be useful for tasks such as video indexing and retrieval. To learn ego-action categories, we investigate the use of motion-based histograms and unsupervised learning algorithms to quickly cluster video content. Our approach assumes a completely unsupervised scenario, where labeled training videos are not available, videos are not pre-segmented, and the number of ego-action categories is unknown. In our proposed framework we show that a stacked Dirichlet process mixture model can be used to automatically learn a motion histogram codebook and the set of ego-action categories. We quantitatively evaluate our approach on both in-house and public YouTube videos and demonstrate robust ego-action c...
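The core idea of the abstract can be illustrated with a small sketch: cluster per-frame motion histograms with a Dirichlet process mixture so that the number of ego-action categories emerges from the data rather than being fixed in advance. The sketch below uses synthetic histogram features and scikit-learn's `BayesianGaussianMixture` as a stand-in; the paper's full stacked model, which also learns the motion histogram codebook, is not reproduced here.

```python
# Hedged sketch: discover ego-action categories from motion histograms
# with a Dirichlet-process mixture, without fixing the cluster count.
# The "motion histograms" below are synthetic stand-in data, not the
# paper's actual features.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)

# Fake frame-level motion histograms: 300 frames, 16 orientation bins,
# drawn from three distinct (hypothetical) motion profiles.
profiles = rng.dirichlet(np.ones(16) * 0.5, size=3)
frames = np.vstack([rng.dirichlet(p * 50, size=100) for p in profiles])

# DP mixture with a generous truncation level (n_components); unused
# components receive near-zero weight and are effectively pruned.
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="diag",
    max_iter=500,
    random_state=0,
)
labels = dpgmm.fit_predict(frames)

# Effective number of discovered ego-action categories.
n_found = len(np.unique(labels))
print(n_found)
```

Because the truncation level only upper-bounds the cluster count, the number of occupied components adapts to the data, which is what makes the DP prior suitable for the unsegmented, unlabeled setting the abstract describes.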
Added: 24 Feb 2011
Updated: 29 Apr 2011
Type: Conference paper
Year: 2011
Where: CVPR
Authors: Kris Kitani, Yoichi Sato, Takahiro Okabe, Akihiro Sugimoto