AGI
2011

Generalization of Figure-Ground Segmentation from Binocular to Monocular Vision in an Embodied Biological Brain Model

Abstract. Humans have the remarkable ability to generalize from binocular to monocular figure-ground segmentation of complex scenes. This is clearly evident any time we look at a photograph or computer monitor, or simply close one eye. We hypothesized that this skill is due to the ability of our brains to use rich embodied signals, such as disparity, to train depth perception for use when only the information from one eye is available. To test this hypothesis, we enhanced our virtual robot, Emer, who is already capable of robust, state-of-the-art, invariant 3D object recognition [1], with the ability to learn figure-ground segmentation, allowing him to recognize objects against complex backgrounds. Continued development of this skill holds great promise for efforts, like Emer, that aim to create an Artificial General Intelligence (AGI). For example, it promises to unlock vast sets of training data, such as Google Images, which have previously been inaccessible to AGI m...
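The core idea in the abstract — a binocular signal (disparity) serving as a "free" teaching signal for a monocular depth estimator — can be illustrated with a minimal self-supervised sketch. This is not the paper's model; it is a toy illustration under assumed stereo geometry (depth = focal_length × baseline / disparity), with a synthetic monocular cue and an ordinary least-squares learner standing in for the brain model:

```python
import numpy as np

def depth_from_disparity(disparity, focal=1.0, baseline=0.1):
    """Standard pinhole-stereo relation: depth = f * B / d (units assumed)."""
    return focal * baseline / np.maximum(disparity, 1e-6)

rng = np.random.default_rng(0)

# Synthetic data: a hypothetical monocular cue x (e.g. retinal texture size)
# that co-varies with true depth, plus a noisy binocular disparity signal.
n = 200
x = rng.uniform(0.1, 2.0, size=(n, 1))
disparity = np.clip(0.5 * x[:, 0] + 0.05 * rng.normal(size=n), 0.05, None)

# Disparity-derived depth acts as the embodied teaching signal.
target_depth = depth_from_disparity(disparity)

# Train a linear regressor mapping the monocular cue to inverse depth,
# supervised only by the binocular signal (no ground-truth depth used).
A = np.hstack([x, np.ones((n, 1))])
w, *_ = np.linalg.lstsq(A, 1.0 / target_depth, rcond=None)

# After training, depth can be estimated from the monocular cue alone.
pred_inv_depth = A @ w
err = np.mean((pred_inv_depth - 1.0 / target_depth) ** 2)
print(f"mean squared error vs. disparity-derived target: {err:.4f}")
```

Once fit, the regressor needs only the monocular cue, mirroring the claimed generalization from two-eyed to one-eyed viewing; everything beyond the stereo depth formula (the cue, the linear model, the constants) is an assumption made for the sketch.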
Added 24 Aug 2011
Updated 24 Aug 2011
Type Journal
Year 2011
Where AGI
Authors Brian Mingus, Trent Kriete, Seth A. Herd, Dean Wyatte, Kenneth Latimer, Randy O'Reilly