Sciweavers

79 search results (page 14 of 16) for "A Multi-modal Sensing Framework for Human Activity Recogniti..."
CVPR 2009 (IEEE)
Understanding videos, constructing plots: learning a visually grounded storyline model from annotated videos
Analyzing videos of human activities involves not only recognizing actions (typically based on their appearances), but also determining the story/plot of the video. The storyline ...
Abhinav Gupta, Praveen Srinivasan, Jianbo Shi, Lar...
CVPR 2009 (IEEE)
Dense saliency-based spatiotemporal feature points for action recognition
Several spatiotemporal feature point detectors have recently been used in video analysis for action recognition. Feature points are detected using a number of measures, namely sali...
Konstantinos Rapantzikos, Stefanos D. Kollias, Yan...
IUI 2003 (ACM)
Intelligent dialog overcomes speech technology limitations: the SENECa example
We present a primarily speech-based user interface to a wide range of entertainment, navigation and communication applications for use in vehicles. The multimodal dialog enables t...
Wolfgang Minker, Udo Haiber, Paul Heisterkamp, Sve...
NORDICHI 2006 (ACM)
A new role for anthropology?: rewriting "context" and "analysis" in HCI research
In this paper we want to reconsider the role anthropology (both its theory and methods) can play within HCI research. One of the areas anthropologists can contribute to here is to...
Minna Räsänen, James M. Nyce
CHI 2006 (ACM)
Cooperative gestures: multi-user gestural interactions for co-located groupware
Multi-user, touch-sensing input devices create opportunities for the use of cooperative gestures: multi-user gestural interactions for single display groupware. Cooperative gestu...
Meredith Ringel Morris, Anqi Huang, Andreas Paepck...