Sciweavers

11 search results (page 1 of 3) for "Visual Tracking Modalities for a Companion Robot"
IROS 2006, IEEE
Visual Tracking Modalities for a Companion Robot
Abstract— This article presents the development of a human-robot interaction mechanism based on vision. The functionalities required for such a mechanism range from user detection a...
Paulo Menezes, Frédéric Lerasle, Jor...
IAT 2007, IEEE
Towards Programming Multimodal Dialogues
This paper identifies several issues in multimodal dialogues between a companion robot and a human user. Specifically, these issues pertain to the synchronization of multimodal ...
Nieske L. Vergunst, Bas R. Steunebrink, Mehdi Dast...
CONTEXT 2005, Springer
Utilizing Visual Attention for Cross-Modal Coreference Interpretation
In this paper, we describe an exploratory study to develop a model of visual attention that could aid automatic interpretation of exophors in situated dialog. The model is intended...
Donna K. Byron, Thomas Mampilly, Vinay Sharma, Tia...
ICRA 2005, IEEE
Integration of Model-based and Model-free Cues for Visual Object Tracking in 3D
Vision is one of the most powerful sensory modalities in robotics, allowing operation in dynamic environments. One of our long-term research interests is mobile manipulation, w...
Ville Kyrki, Danica Kragic
ICRA 2002, IEEE
Systems Integration for Real-World Manipulation Tasks
A system developed to demonstrate the integration of a number of key research areas, such as localization, recognition, visual tracking, visual servoing, and grasping, is presented toget...
Lars Petersson, Patric Jensfelt, Dennis Tell, M. S...