ICMI 2004, Springer

A multimodal learning interface for sketch, speak and point creation of a schedule chart

We present a video demonstration of an agent-based test bed application for ongoing research into multi-user, multimodal, computer-assisted meetings. The system tracks a two-person scheduling meeting: one person stands at a touch-sensitive whiteboard creating a Gantt chart, while another looks on in view of a calibrated stereo camera. The stereo camera performs real-time, untethered, vision-based tracking of the onlooker’s head, torso and limb movements, which in turn are routed to a 3D-gesture recognition agent. Using speech, 3D deictic gesture and 2D object de-referencing, the system is able to track the onlooker’s suggestion to move a specific milestone. The system also has a speech recognition agent capable of recognizing out-of-vocabulary (OOV) words as phonetic sequences. Thus, when a user at the whiteboard speaks an OOV label name for a chart constituent while also writing it, the OOV speech is combined with letter sequences hypothesized by the handwriting recognizer ...
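The abstract describes fusing out-of-vocabulary speech, recognized as phoneme sequences, with letter sequences hypothesized by the handwriting recognizer in order to learn new chart labels. The sketch below is a minimal, hypothetical illustration of that fusion idea and is not the paper's implementation: the phoneme-to-letter table, the hypothesis lists, and the edit-distance scoring are assumptions made purely for demonstration.

```python
# Illustrative sketch only: combine OOV phoneme hypotheses from speech with
# letter-sequence hypotheses from handwriting by picking the handwriting
# hypothesis closest (by edit distance) to any phoneme-derived spelling.
# The mapping and example hypotheses below are invented for illustration.

from itertools import product

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Hypothetical phoneme-to-grapheme table (real systems use learned models).
PHONEME_TO_LETTERS = {"m": "m", "ay": "i", "l": "l", "s": "s",
                      "t": "t", "ow": "o", "n": "n"}

def phonemes_to_spelling(phonemes):
    """Naive spelling guess from an OOV phoneme sequence."""
    return "".join(PHONEME_TO_LETTERS.get(p, "") for p in phonemes)

def fuse(oov_phoneme_hyps, handwriting_hyps):
    """Return the handwriting hypothesis best matching any phonetic spelling."""
    best = min(product(oov_phoneme_hyps, handwriting_hyps),
               key=lambda pair: edit_distance(phonemes_to_spelling(pair[0]), pair[1]))
    return best[1]

# Example: the user says and writes the new chart label "milestone".
speech_hyps = [["m", "ay", "l", "s", "t", "ow", "n"]]
ink_hyps = ["milestone", "rnilestone", "wilestone"]
print(fuse(speech_hyps, ink_hyps))  # -> "milestone"
```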
Type Conference
Year 2004
Where ICMI
Authors Edward C. Kaiser, David Demirdjian, Alexander Gruenstein, Xiaoguang Li, John Niekrasz, Matt Wesson, Sanjeev Kumar