Sciweavers

37 search results - page 1 of 8
Query: Focusing computational visual attention in multi-modal human...
ICMI 2010 (Springer)
Focusing computational visual attention in multi-modal human-robot interaction
Identifying verbally and non-verbally referred-to objects is an important aspect of human-robot interaction. Most importantly, it is essential to achieve a joint focus of attentio...
Boris Schauerte, Gernot A. Fink
AROBOTS 2002
Multi-Modal Interaction of Human and Home Robot in the Context of Room Map Generation
In robotics, the idea of human-robot interaction has been receiving a lot of attention lately. In this paper, we describe a multi-modal system for generating a map of the environment...
Saeed Shiry Ghidary, Yasushi Nakata, Hiroshi Saito...
ICPR 2006 (IEEE)
Human-Robot Interaction by Whole Body Gesture Spotting and Recognition
An intelligent robot is required for natural interaction with humans. Visual interpretation of gestures can be useful in accomplishing natural Human-Robot Interaction (HRI). Previ...
A-Yeon Park, Hee-Deok Yang, Seong-Whan Lee
HRI 2009 (ACM)
Visual attention in spoken human-robot interaction
Psycholinguistic studies of situated language processing have revealed that gaze in the visual environment is tightly coupled with both spoken language comprehension and productio...
Maria Staudte, Matthew W. Crocker
CHI 2001 (ACM)
Visual information foraging in a focus + context visualization
Eye tracking studies of the Hyperbolic Tree browser [10] suggest that visual search in focus+context displays is highly affected by information scent (i.e., local cues, such as te...
Peter Pirolli, Stuart K. Card, Mija M. Van Der Weg...