ECAI 2004, Springer

Vision-Language Integration in AI: A Reality Check

Abstract. Multimodal human-to-human interaction requires integration of the contents/meaning of the modalities involved. Artificial Intelligence (AI) multimodal prototypes attempt to go beyond the technical integration of modalities to this kind of meaning integration, which allows for coherent, natural, “intelligent” communication with humans. Though it brings many multimedia-related AI research fields together, integration, and in particular vision-language integration, is an issue that still remains in the background. In this paper, we attempt to make up for this lacuna by shedding some light on how, why and to what extent vision-language content integration takes place within AI. We present a taxonomy of vision-language integration prototypes that resulted from an extensive survey of such prototypes across a wide range of AI research areas, and that uses a prototype’s integration purpose as the guiding criterion for classification. We look at the integration resources and mechanis...
Katerina Pastra, Yorick Wilks
Added 01 Jul 2010
Updated 01 Jul 2010
Type Conference
Year 2004
Where ECAI
Authors Katerina Pastra, Yorick Wilks