Sciweavers

ICIP 2003, IEEE

Top-down control of visual attention in object detection

Current computational models of visual attention focus on bottom-up information and ignore scene context. However, studies in visual cognition show that humans use context to facilitate object detection in natural scenes by directing their attention or eyes to diagnostic regions. Here we propose a model of attention guidance based on global scene configuration. We show that the statistics of low-level features across the scene image determine where a specific object (e.g. a person) should be located. Human eye movements show that regions chosen by the top-down model agree with regions scrutinized by human observers performing a visual search task for people. The results validate the proposition that top-down information from visual context modulates the saliency of image regions during the task of object detection. Contextual information provides a shortcut for efficient object detection systems.
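The core idea — top-down contextual information modulating the saliency of image regions — can be illustrated with a minimal sketch. The weighting scheme below (a geometric mixture of a bottom-up saliency map and a contextual prior) is a hypothetical simplification; the paper's actual model derives the prior from global low-level feature statistics of the scene:

```python
import numpy as np

def contextual_modulation(saliency, context_prior, weight=0.5):
    """Combine a bottom-up saliency map with a top-down contextual prior.

    Hypothetical combination rule (geometric mixture); `weight` controls
    how strongly scene context overrides bottom-up conspicuity.
    """
    s = saliency / saliency.sum()          # normalize to a distribution
    p = context_prior / context_prior.sum()
    combined = (s ** (1.0 - weight)) * (p ** weight)
    return combined / combined.sum()

# Toy example on a 4x4 grid of image regions.
rng = np.random.default_rng(0)
saliency = rng.random((4, 4))              # bottom-up conspicuity, arbitrary
context = np.zeros((4, 4))
context[2, :] = 1.0                        # context: "people appear in row 2"
attn = contextual_modulation(saliency, context, weight=0.8)
```

With a high context weight, the resulting attention map concentrates its mass in the contextually plausible row, mirroring the finding that observers direct their eyes to diagnostic regions when searching for people.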
Added 24 Oct 2009
Updated 27 Oct 2009
Type Conference
Year 2003
Where ICIP
Authors Aude Oliva, Antonio B. Torralba, Monica S. Castelhano, John M. Henderson