ICMCS 2000 (IEEE)

Towards a Multimodal Meeting Record

Face-to-face meetings usually encompass several modalities, including speech, gesture, handwriting, and person identification. Recognizing and integrating each of these modalities is important for creating an accurate record of a meeting, but each presents its own recognition difficulties. Speech recognition must be speaker- and domain-independent, achieve low word error rates, and run close to real time to be useful. Gesture and handwriting recognition must be writer-independent and support a wide variety of writing styles. Person identification struggles with segmentation in a crowded room. Furthermore, to produce the record automatically, we must solve the assignment problem (who is saying what), which requires combining person identification with speech recognition. This paper examines a multimodal meeting room system under development at Carnegie Mellon University that enables us to track, capture, and integrate the important aspects of a meeting from peo...
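The assignment problem mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes we already have time-stamped, transcribed utterances (e.g. from a diarized speech recognizer) and per-person presence intervals (e.g. from a visual tracker), and it assigns each utterance to the person whose presence overlaps it most in time. All function and variable names are illustrative.

```python
# Hedged sketch: pairing transcribed utterances with identified people by
# maximal temporal overlap. Inputs are assumed to come from upstream speech
# recognition and person-tracking components; none of this reflects the
# actual CMU system's interfaces.

def overlap(a_start, a_end, b_start, b_end):
    """Length of the intersection of two time intervals (seconds)."""
    return max(0.0, min(a_end, b_end) - max(a_start, b_start))

def assign_speakers(utterances, tracks):
    """utterances: list of (start, end, text) tuples.
    tracks: dict mapping person name -> list of (start, end) presence intervals.
    Returns a list of (person_or_None, text) pairs forming the meeting record."""
    record = []
    for u_start, u_end, text in utterances:
        best_person, best_overlap = None, 0.0
        for person, intervals in tracks.items():
            total = sum(overlap(u_start, u_end, s, e) for s, e in intervals)
            if total > best_overlap:
                best_person, best_overlap = person, total
        record.append((best_person, text))
    return record

if __name__ == "__main__":
    utterances = [(0.0, 2.5, "Let's begin."), (3.0, 5.0, "Agreed.")]
    tracks = {"Alice": [(0.0, 2.8)], "Bob": [(2.9, 6.0)]}
    print(assign_speakers(utterances, tracks))
    # -> [('Alice', "Let's begin."), ('Bob', 'Agreed.')]
```

A greedy per-utterance choice like this ignores overlapping speech and tracking gaps, which is part of why the paper treats the assignment problem as nontrivial.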
Type: Conference
Year: 2000
Where: ICMCS
Authors: Ralph Gross, Michael Bett, Hua Yu, Xiaojin Zhu, Yue Pan, Jie Yang, Alex Waibel