Systematically Grounding Language through Vision in a Deep, Recurrent Neural Network

Human intelligence consists largely of the ability to recognize and exploit structural systematicity in the world, relating our senses simultaneously to each other and to our cognitive state. Language abilities, in particular, require a learned mapping between the linguistic input and one’s internal model of the real world. In order to demonstrate that connectionist methods excel at this task, we teach a deep, recurrent neural network—a variant of the long short-term memory (LSTM)—to ground language in a micro-world. The network integrates two inputs—a visual scene and an auditory sentence—to produce the meaning of the sentence in the context of the scene. Crucially, the network exhibits strong systematicity, recovering appropriate meanings even for novel objects and descriptions. With its ability to exploit systematic structure across modalities, this network fulfills an important prerequisite of general machine intelligence.
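
The abstract describes a two-input architecture: a recurrent (LSTM-variant) encoder for the sentence and an encoder for the visual scene, combined to produce a meaning representation. Below is a minimal sketch of that shape in PyTorch, assuming a modern framing; every class name, dimension, and the concatenation-based fusion are illustrative assumptions, not the authors' actual model or training setup.

# Hypothetical sketch of the two-input architecture the abstract describes:
# an LSTM encodes the sentence token by token, a feed-forward layer encodes
# the scene, and a fusion layer maps the combined state to a meaning vector.
# All names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class GroundedMeaningNet(nn.Module):
    def __init__(self, vocab_size=50, embed_dim=32, scene_dim=16,
                 hidden_dim=64, meaning_dim=24):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Recurrent encoder for the linguistic input (word sequence).
        self.sentence_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Feed-forward encoder for the visual scene representation.
        self.scene_encoder = nn.Sequential(
            nn.Linear(scene_dim, hidden_dim), nn.Tanh())
        # Fusion layer maps the concatenated states to a meaning vector.
        self.meaning_head = nn.Linear(2 * hidden_dim, meaning_dim)

    def forward(self, word_ids, scene_features):
        # word_ids: (batch, seq_len) integer tokens of the sentence
        # scene_features: (batch, scene_dim) vector describing the scene
        embedded = self.embed(word_ids)
        _, (h_n, _) = self.sentence_lstm(embedded)
        sentence_state = h_n[-1]            # final hidden state of the LSTM
        scene_state = self.scene_encoder(scene_features)
        fused = torch.cat([sentence_state, scene_state], dim=-1)
        return self.meaning_head(fused)     # meaning of sentence-in-scene

# Example: a 5-token sentence grounded in a single scene vector.
model = GroundedMeaningNet()
words = torch.randint(0, 50, (1, 5))
scene = torch.randn(1, 16)
meaning = model(words, scene)
print(meaning.shape)  # torch.Size([1, 24])

Testing systematicity in this setup would mean holding out particular object-description pairings during training and checking that the meaning output is still correct for those novel combinations, which is the property the paper highlights.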
Type: Conference paper
Year: 2011
Where: AGI (Conference on Artificial General Intelligence)
Authors: Derek Monner, James A. Reggia