Sciweavers

14 search results - page 2 / 3
» Learning Perceptual Causality from Video

AI 2005, Springer
Learning to talk about events from narrated video in a construction grammar framework
The current research presents a system that learns to understand object names, spatial relation terms and event descriptions from observing narrated action sequences. The system e...
Peter Ford Dominey, Jean-David Boucher

WAPCV 2007, Springer
Language Label Learning for Visual Concepts Discovered from Video Sequences
Computational models of grounded language learning have been based on the premise that words and concepts are learned simultaneously. Given the mounting cognitive evidence for conc...
Prithwijit Guha, Amitabha Mukerjee

APGV 2004, ACM
Towards perceptually realistic talking heads: models, methods and McGurk
Motivated by the need for an informative, unbiased and quantitative perceptual method for the development and evaluation of a talking head we are developing, we propose a new test...
Darren Cosker, Susan Paddock, A. David Marshall, P...

IROS 2006, IEEE
Learning Predictive Features in Affordance based Robotic Perception Systems
This work is about the relevance of Gibson’s concept of affordances [1] for visual perception in interactive and autonomous robotic systems. In extension to existing functional ...
Gerald Fritz, Lucas Paletta, Ralph Breithaupt, Eri...

ICVS 1999, Springer
Action Reaction Learning: Automatic Visual Analysis and Synthesis of Interactive Behaviour
We propose Action-Reaction Learning as an approach for analyzing and synthesizing human behaviour. This paradigm uncovers causal mappings between past and future events or between...
Tony Jebara, Alex Pentland