Sciweavers

» Multimodal Speech Synthesis
ACL
1998
Confirmation in Multimodal Systems
Systems that attempt to understand natural human input make mistakes, as do humans. However, humans avoid misunderstandings by confirming doubtful input. Multimodal systems--those ...
David McGee, Philip R. Cohen, Sharon L. Oviatt
IJCV
2006
Hand Motion Gesture Frequency Properties and Multimodal Discourse Analysis
Gesture and speech are co-expressive and complementary channels of a single human language system. While speech carries the major load of symbolic presentation, gesture provides th...
Yingen Xiong, Francis K. H. Quek
ICMCS
2000
IEEE
Towards a Multimodal Meeting Record
Face-to-face meetings usually encompass several modalities including speech, gesture, handwriting, and person identification. Recognition and integration of each of these modalit...
Ralph Gross, Michael Bett, Hua Yu, Xiaojin Zhu, Yu...
CHI
2001
ACM
On the road and on the Web?: comprehension of synthetic and human speech while driving
In this study, 24 participants drove a simulator while listening to three types of messages in both synthesized speech and recorded human speech. The messages consisted of short na...
Jennifer Lai, Karen Cheng, Paul Green, Omer Tsimho...
IROS
2009
IEEE
Expressive facial speech synthesis on a robotic platform
This paper presents our expressive facial speech synthesis system, Eface, for a social or service robot. Eface aims to enable a robot to deliver information clearly with empat...
Xingyan Li, Bruce MacDonald, Catherine I. Watson