

Multi-modal features for real-time detection of human-robot interaction categories

Social interactions unfold over time, at multiple time scales, and can be observed through multiple sensory modalities. In this paper, we propose a machine learning framework for selecting and combining low-level sensory features from different modalities to produce high-level characterizations of human-robot social interactions in real time. We introduce a novel set of fast, multi-modal, spatio-temporal features for audio sensors, touch sensors, floor sensors, laser range sensors, and the time-series history of the robot’s own behaviors. A subset of these features is automatically selected and combined using GentleBoost, an ensemble machine learning technique, allowing the robot to estimate the current interaction category every 100 milliseconds. This information can then be used either by the robot to make decisions autonomously, or by a remote human operator who can modify the robot’s behavior manually (i.e., semi-autonomous operation [5]). We demonstrate the tech...
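The core learning step described above is GentleBoost over candidate features. The sketch below is a minimal, hypothetical illustration in Python/NumPy, not the authors' implementation: it assumes binary labels y in {-1, +1} and a precomputed feature matrix X whose columns stand in for the paper's low-level multi-modal features. The names fit_gentleboost and predict are invented for this example, and the paper's actual feature extraction and multi-class handling of interaction categories are not shown.

import numpy as np

def fit_gentleboost(X, y, n_rounds=50):
    # X: (n_samples, n_features) matrix of low-level features,
    # y: labels in {-1, +1}. Returns a list of regression stumps,
    # each encoded as (feature index, threshold, left value, right value).
    n, d = X.shape
    w = np.full(n, 1.0 / n)  # example weights, updated each round
    stumps = []
    for _ in range(n_rounds):
        best, best_err = None, np.inf
        for f in range(d):
            xs = X[:, f]
            # Candidate thresholds: a few percentiles keep the search fast
            for thr in np.percentile(xs, [10, 25, 50, 75, 90]):
                right = xs > thr
                wl, wr = w[~right].sum(), w[right].sum()
                if wl == 0.0 or wr == 0.0:
                    continue
                # Weighted least-squares stump: piecewise-constant prediction
                lv = (w[~right] * y[~right]).sum() / wl
                rv = (w[right] * y[right]).sum() / wr
                pred = np.where(right, rv, lv)
                err = (w * (y - pred) ** 2).sum()
                if err < best_err:
                    best_err, best = err, (f, thr, lv, rv)
        f, thr, lv, rv = best
        h = np.where(X[:, f] > thr, rv, lv)
        w *= np.exp(-y * h)  # GentleBoost weight update
        w /= w.sum()
        stumps.append(best)
    return stumps

def predict(stumps, X):
    # Sum the stump outputs and threshold at zero.
    F = np.zeros(X.shape[0])
    for f, thr, lv, rv in stumps:
        F += np.where(X[:, f] > thr, rv, lv)
    return np.sign(F)

At run time, a deployment along these lines would evaluate the selected stumps on the feature vector computed from the most recent sensor window every 100 milliseconds; the multi-category output described in the paper would presumably be obtained by training one such binary classifier per interaction category.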
Type: Conference
Year: 2009
Where: ICMI
Publisher: Springer
Authors: Ian R. Fasel, Masahiro Shiomi, Philippe-Emmanuel Chadutaud, Takayuki Kanda, Norihiro Hagita, Hiroshi Ishiguro