Cross-modal body representation based on visual attention by saliency

Abstract— In performing various kinds of tasks, body representation is one of the most fundamental issues for physical agents (humans, primates, and robots). In particular, during tool use by Japanese macaque monkeys, neurophysiological evidence shows that this representation can be dynamically reconstructed through spatio-temporal integration of different sensory modalities, allowing it to adapt to environmental changes [1]. However, to construct such a representation, one must first decide which pieces of information from the various sensory streams should be associated with one another. This paper presents a method that constructs a cross-modal body representation from vision, touch, and proprioception. When the robot touches something, the tactile activation triggers the construction of the visual receptive field for the body parts that are found by saliency-based visual attention and consequently regarded as the end effector. Simultaneously, proprioceptive information is associated with this visua...
Mai Hikita, Sawa Fuke, Masaki Ogino, Minoru Asada
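The sketch below illustrates, in rough outline, the kind of pipeline the abstract describes: a centre-surround saliency map selects a candidate end-effector region in the camera image whenever the tactile sensor fires, and the attended image location is then stored together with the current joint angles. The function names, the simple two-scale saliency operator, and the nearest-neighbour associative memory are illustrative assumptions for this listing, not the authors' implementation.

```python
# Hypothetical sketch of tactile-triggered cross-modal association.
# Saliency operator, memory structure, and all names are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter


def saliency_map(image: np.ndarray) -> np.ndarray:
    """Crude centre-surround saliency on a greyscale image in [0, 1]."""
    center = gaussian_filter(image, sigma=2)     # fine scale
    surround = gaussian_filter(image, sigma=8)   # coarse scale
    sal = np.abs(center - surround)              # centre-surround contrast
    return sal / (sal.max() + 1e-8)              # normalise to [0, 1]


def most_salient_location(sal: np.ndarray) -> tuple:
    """Return the (row, col) of the saliency peak, i.e. the attended point."""
    return np.unravel_index(np.argmax(sal), sal.shape)


class CrossModalMemory:
    """Stores (visual location, joint angles) pairs gathered on touch events."""

    def __init__(self):
        self.pairs = []  # list of ((row, col), joint_angle_vector)

    def update(self, image: np.ndarray, joint_angles: np.ndarray,
               touch_active: bool) -> None:
        # Only a touch event triggers learning, as described in the abstract.
        if not touch_active:
            return
        loc = most_salient_location(saliency_map(image))
        self.pairs.append((loc, joint_angles.copy()))

    def expected_location(self, joint_angles: np.ndarray) -> tuple:
        """Predict where the end effector should appear for a given posture
        by nearest-neighbour lookup over the stored proprioceptive samples."""
        if not self.pairs:
            raise RuntimeError("no cross-modal samples stored yet")
        dists = [np.linalg.norm(joint_angles - q) for _, q in self.pairs]
        return self.pairs[int(np.argmin(dists))][0]


# Toy usage: a bright blob stands in for the robot's hand in view.
if __name__ == "__main__":
    img = np.zeros((64, 64))
    img[40:44, 20:24] = 1.0                      # "hand" region in the image
    memory = CrossModalMemory()
    memory.update(img, np.array([0.3, -1.2, 0.7]), touch_active=True)
    print(memory.expected_location(np.array([0.31, -1.18, 0.72])))
```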
Added: 31 May 2010
Updated: 31 May 2010
Type: Conference
Year: 2008
Where: IROS