Sciweavers

551 search results for "Multimodal Speech Synthesis" (page 22 of 111)
ACMDIS 2008 (ACM)
Exploring true multi-user multimodal interaction over a digital table
True multi-user, multimodal interaction over a digital table lets co-located people simultaneously gesture and speak commands to control an application. We explore this design spa...
Edward Tse, Saul Greenberg, Chia Shen, Clifton For...
CHI 2005 (ACM)
Non-speech sound and paralinguistic parameters in mobile speech applications
This paper describes the background, research questions, and methodology of user studies on the integration of non-speech sound and paralinguistic parameters in speech-enabled applic...
Peter Fröhlich
PAMI 2008
Analysis of Head Gesture and Prosody Patterns for Prosody-Driven Head-Gesture Animation
We propose a new two-stage framework for joint analysis of the head gesture and speech prosody patterns of a speaker, toward automatic, realistic synthesis of head gestures from speech p...
Mehmet Emre Sargin, Yücel Yemez, Engin Erzin,...
INTERSPEECH 2010
Text-based unstressed syllable prediction in Mandarin
Recently, increasing attention has been paid to Mandarin word stress, which is important for improving the naturalness of speech synthesis. Most of the research on Mandarin spee...
Ya Li, Jianhua Tao, Meng Zhang, Shifeng Pan, Xiaoy...
CHI 2007 (ACM)
How pairs interact over a multimodal digital table
Co-located collaborators often work over physical tabletops using combinations of expressive hand gestures and verbal utterances. This paper provides the first observations of how...
Edward Tse, Chia Shen, Saul Greenberg, Clifton For...