Sciweavers

551 search results for "Multimodal Speech Synthesis" — page 33 of 111
BCSHCI 2008
Efficiency of multimodal metaphors in the presentation of learning information
The comparative study described in this paper was conducted to investigate the effect of including multimodal metaphors on the usability of e-learning interfaces. Two indepen...
Marwan Alseid, Dimitrios Rigas
CHI 2010, ACM
Speech dasher: fast writing using speech and gaze
Speech Dasher allows writing using a combination of speech and a zooming interface. Users first speak what they want to write and then navigate through the space of recognit...
Keith Vertanen, David J. C. MacKay
ECCV 1998, Springer
Continuous Audio-Visual Speech Recognition
The multi-stream automatic speech recognition approach was investigated in this work as a framework for audio-visual data fusion and speech recognition. This method presents many ...
Juergen Luettin, Stéphane Dupont
HICSS 2006, IEEE
Patterns of Multimodal Input Usage in Non-Visual Information Navigation
Multimodal input is known to be advantageous for graphical user interfaces, but its benefits for non-visual interaction are unknown. To explore this issue, an exploratory study wa...
Xiaoyu Chen, Marilyn Tremaine
LREC 2010
AhoTransf: A Tool for Multiband Excitation Based Speech Analysis and Modification
In this paper we present AhoTransf, a tool that enables analysis, visualization, modification, and synthesis of speech. AhoTransf integrates a speech signal analysis model with a g...
Ibon Saratxaga, Inmaculada Hernáez, Eva Nav...