Learning visuomotor transformations for gaze-control and grasping

To reach for and grasp an object, visual information about the object must be transformed into motor or postural commands for the arm and hand. In this paper, we present a robot model for visually guided reaching and grasping. The model mimics two alternative processing pathways for grasping, which are also likely to co-exist in the human brain. The first pathway directly uses the retinal activation to encode the target position. In the second pathway, a saccade controller makes the eyes (cameras) focus on the target, and the gaze direction is used instead as positional input. For both pathways, an arm controller transforms information on the target's position and orientation into an arm posture suitable for grasping. For training the saccade controller, we suggest a novel staged learning method that does not require a teacher providing the necessary motor commands. The arm controller uses unsupervised learning: it is based on a density model of the sensor and...
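The two pathways in the abstract can be illustrated with a minimal toy sketch. All function names and the linear controller below are hypothetical stand-ins, not the authors' implementation: pathway 1 encodes the target by where it falls on the retina relative to the current gaze, while pathway 2 first centers the gaze on the target and then uses the gaze direction itself as the positional code; either code is then mapped to an arm posture.

```python
import numpy as np

def retinal_code(target_xy, gaze_pan_tilt):
    """Pathway 1 (sketch): target position expressed relative to the
    current gaze direction, i.e. its offset on the retina/camera image."""
    return target_xy - gaze_pan_tilt

def saccade_to(target_xy):
    """Pathway 2 (sketch): an idealized saccade controller centers the
    target; the resulting gaze direction then encodes its position."""
    return target_xy.copy()

def arm_controller(position_code, W, b):
    """Linear stand-in for a learned mapping from a positional code to
    an arm posture (joint angles) suitable for grasping. In the model,
    each pathway would have its own trained controller."""
    return W @ position_code + b

rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 2)), rng.normal(size=3)  # toy weights

target = np.array([0.4, -0.1])   # target position (pan/tilt units)
gaze = np.array([0.1, 0.2])      # current gaze direction

posture_retinal = arm_controller(retinal_code(target, gaze), W, b)
posture_gaze = arm_controller(saccade_to(target), W, b)
```

The sketch only shows the data flow; in the paper the saccade controller is trained with a staged, teacher-free learning scheme and the arm controller with an unsupervised density model, neither of which is reproduced here.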
Heiko Hoffmann, Wolfram Schenck, Ralf Möller
Added: 15 Dec 2010
Updated: 15 Dec 2010
Type: Journal
Year: 2005
Where: BC (Biological Cybernetics)
Authors: Heiko Hoffmann, Wolfram Schenck, Ralf Möller