Sciweavers

95 search results (page 11 of 19)
Search: "Speech and sketching for multimodal design"
ICMI 2004 (Springer)
A framework for evaluating multimodal integration by humans and a role for embodied conversational agents
One of the implicit assumptions of multi-modal interfaces is that human-computer interaction is significantly facilitated by providing multiple input and output modalities. Surpri...
Dominic W. Massaro
ICMI 2004 (Springer)
Articulatory features for robust visual speech recognition
Visual information has been shown to improve the performance of speech recognition systems in noisy acoustic environments. However, most audio-visual speech recognizers rely on a ...
Kate Saenko, Trevor Darrell, James R. Glass
CHI 2009 (ACM)
City browser: developing a conversational automotive HMI
This paper introduces City Browser, a prototype multimodal, conversational, spoken-language interface for automotive navigational aid and information access. A study designed to e...
Alexander Gruenstein, Bruce Mehler, Bryan Reimer, ...
AI 2005 (Springer)
Semiotic schemas: A framework for grounding language in action and perception
A theoretical framework for grounding language is introduced that provides a computational path from sensing and motor action to words and speech acts. The approach combines conce...
Deb Roy
JOCN 2010
Audiovisual Matching in Speech and Nonspeech Sounds: A Neurodynamical Model
Audiovisual speech perception provides an opportunity to investigate the mechanisms underlying multimodal processing. By using nonspeech stimuli, it is possible to investigate...
Marco Loh, Gabriele Schmid, Gustavo Deco, Wolfram ...