Sciweavers

551 search results - page 34 / 111
» Multimodal Speech Synthesis

TVCG 2012
A Statistical Quality Model for Data-Driven Speech Animation
In recent years, data-driven speech animation approaches have achieved significant successes in terms of animation quality. However, how to automatically evaluate the realism o...
Xiaohan Ma, Zhigang Deng

PRL 2002
A hierarchical tag-graph search scheme with layered grammar rules for spontaneous speech understanding
It has always been difficult for language understanding systems to handle spontaneous speech with satisfactory robustness, primarily due to such problems as the fragments, disflue...
Bor-shen Lin, Berlin Chen, Hsin-Min Wang, Lin-Shan...

JVCA 2006
Multimodal expression in virtual humans
This work proposes a real-time virtual human multimodal expression model. Five modalities explore the affordances of the body: deterministic, non-deterministic, gesticulation, faci...
Celso de Melo, Ana Paiva

HICSS 2007, IEEE
Gulliver-A Framework for Building Smart Speech-Based Applications
Speech recognition has matured over the past years to the point that companies can seriously consider its use. However, from a developer’s perspective we observe that speech inp...
Werner Kurschl, Stefan Mitsch, Rene Prokop, Johann...

CHI 1995, ACM
A Generic Platform for Addressing the Multimodal Challenge
Multimodal interactive systems support multiple interaction techniques, such as the synergistic use of speech and direct manipulation. The flexibility they offer results in an incr...
Laurence Nigay, Joëlle Coutaz