A Computational Model of Early Auditory-Visual Integration

We introduce a computational model of sensor fusion based on topographic representations in a "two-microphone and one camera" configuration. Our aim is to implement a robust multimodal attention mechanism for artificial systems. In our approach, we consider neurophysiological findings to discuss the biological plausibility of the coding and extraction of spatial features, while also meeting the demands and constraints of applications in the field of human-robot interaction. In contrast to the common technique of processing different modalities separately and only finally combining multiple localization hypotheses, we integrate auditory and visual data at an early level. This can be considered as focusing attention or directing the gaze onto salient objects. Our computational model is inspired by findings on the inferior colliculus in the auditory pathway and the visual and multimodal sections of the superior colliculus. Accordingly, it includes: a) an auditory map, based on inte...
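The early-fusion idea described in the abstract, combining azimuth-aligned auditory and visual saliency maps before any localization decision, can be illustrated with a minimal sketch. The weighting scheme, Gaussian toy inputs, and function names below are assumptions for illustration, not the paper's actual model:

```python
import numpy as np

def fuse_topographic_maps(auditory_map, visual_map, w_a=0.5, w_v=0.5):
    """Early fusion of two azimuth-aligned saliency maps.

    Each map is normalized to unit mass, then combined by a weighted
    sum (a hypothetical fusion rule; the paper's integration scheme
    is biologically motivated and may differ).
    """
    a = auditory_map / (auditory_map.sum() + 1e-12)
    v = visual_map / (visual_map.sum() + 1e-12)
    return w_a * a + w_v * v

# Toy example: 181 azimuth bins spanning -90 deg to +90 deg.
azimuth = np.linspace(-90, 90, 181)

# Hypothetical unimodal maps: a sound source near +30 deg and a
# visually salient object near +25 deg, each modeled as a Gaussian bump.
auditory = np.exp(-0.5 * ((azimuth - 30.0) / 10.0) ** 2)
visual = np.exp(-0.5 * ((azimuth - 25.0) / 15.0) ** 2)

fused = fuse_topographic_maps(auditory, visual)
gaze_target = azimuth[int(np.argmax(fused))]  # saccade/gaze direction
```

Because fusion happens on the map level, the gaze target emerges from a single combined representation rather than from reconciling two independent localization hypotheses; in the toy example above it lands between the two unimodal peaks.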
Carsten Schauer, Horst-Michael Gross
Added 06 Jul 2010
Updated 06 Jul 2010
Type Conference
Year 2003
Where DAGM