
ICMI 2004, Springer
Analysis of emotion recognition using facial expressions, speech and multimodal information
The interaction between human beings and computers will be more natural if computers are able to perceive and respond to human non-verbal communication such as emotions. Although ...
Carlos Busso, Zhigang Deng, Serdar Yildirim, Murta...
ICMI 2004, Springer
ICARE software components for rapidly developing multimodal interfaces
Although several real multimodal systems have been built, their development still remains a difficult task. In this paper we address the problem of developing multimodal inte...
Jullien Bouchet, Laurence Nigay, Thierry Ganille
ICMI 2004, Springer
A segment-based audio-visual speech recognizer: data collection, development, and initial experiments
This paper presents the development and evaluation of a speaker-independent audio-visual speech recognition (AVSR) system that utilizes a segment-based modeling strategy. To suppo...
Timothy J. Hazen, Kate Saenko, Chia-Hao La, James ...
ICMI 2004, Springer
Multimodal interaction for distributed collaboration
We demonstrate a same-time different-place collaboration system for managing crisis situations using geospatial information. Our system enables distributed spatial decision-making...
Levent Bolelli, Guoray Cai, Hongmei Wang, Bita Mor...
ICMI 2004, Springer
AROMA: ambient awareness through olfaction in a messaging application
This work explores the properties of different output modalities as notification mechanisms in the context of messaging. In particular, the olfactory (smell) modality is introdu...
Adam Bodnar, Richard Corbett, Dmitry Nekrasovski
ICMI 2004, Springer
Multimodal model integration for sentence unit detection
In this paper, we adopt a direct modeling approach to utilize conversational gesture cues in detecting sentence boundaries, called sentence units (SUs), in videotaped conversations. We treat the ...
Mary P. Harper, Elizabeth Shriberg
ICMI 2004, Springer
Evaluation of spoken multimodal conversation
Spoken multimodal dialogue systems in which users address face-only or embodied interface agents have been gaining ground in research for some time. Although most systems are still...
Niels Ole Bernsen, Laila Dybkjær
ICMI 2004, Springer
Agent and library augmented shared knowledge areas (ALASKA)
This paper reports on an NSF-funded effort now underway to integrate three learning technologies that have emerged and matured over the past decade; each has presented compelling ...
Eric R. Hamilton
ICMI 2004, Springer
M/ORIS: a medical/operating room interaction system
We propose an architecture for a real-time multimodal system, which provides non-contact, adaptive user interfacing for Computer-Assisted Surgery (CAS). The system, called M/ORIS ...
Sébastien Grange, Terrence Fong, Charles Ba...
ICMI 2004, Springer
Multimodal interface platform for geographical information systems (GeoMIP) in crisis management
A novel interface system for accessing geospatial data (GeoMIP) has been developed that realizes a user-centered multimodal speech/gesture interface for addressing some of the cri...
Pyush Agrawal, Ingmar Rauschert, Keerati Inochanon...