This research presents a system that learns to understand object names, spatial relation terms, and event descriptions by observing narrated action sequences. The system e...
Computational models of grounded language learning have been based on the premise that words and concepts are learned simultaneously. Given the mounting cognitive evidence for conc...
Motivated by the need for an informative, unbiased and quantitative perceptual method for the development and evaluation of a talking head we are developing, we propose a new test...
Darren Cosker, Susan Paddock, A. David Marshall, P...
This work addresses the relevance of Gibson’s concept of affordances [1] for visual perception in interactive and autonomous robotic systems. In extension to existing functional ...
Gerald Fritz, Lucas Paletta, Ralph Breithaupt, Eri...
We propose Action-Reaction Learning as an approach to analyzing and synthesizing human behaviour. This paradigm uncovers causal mappings between past and future events or between...