Sciweavers

551 search results for "Multimodal Speech Synthesis"
ACHI 2008 (IEEE)
Multimodal Metric Study for Human-Robot Collaboration
The aim of our research is to create a system whereby human members of a team can collaborate in a natural way with robots. In this paper we describe a Wizard of Oz (WOZ) study co...
Scott Green, Scott Richardson, Randy Stiles, Mark ...
ICASSP 2011 (IEEE)
HMM-based speech synthesiser using the LF-model of the glottal source
A major factor that degrades speech quality in HMM-based speech synthesis is the use of a simple delta pulse signal to generate the excitation of voiced speech. ...
João P. Cabral, Steve Renals, Junichi Yamag...
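The excitation scheme this abstract singles out is easy to picture in code. Below is a minimal, hypothetical Python/NumPy sketch (the fixed F0, sampling rate, and function name are assumptions for illustration, not taken from the paper) of the simple delta-pulse excitation used for voiced speech in baseline HMM-based synthesis; the paper's contribution is to replace these bare impulses with pulses shaped by the LF-model of the glottal source.

import numpy as np

def delta_pulse_excitation(f0_hz=120.0, fs=16000, duration_s=0.5):
    # Baseline voiced excitation: a unit impulse at the start of every
    # pitch period (every fs / f0 samples), zeros elsewhere. This is the
    # "simple delta pulse signal" the abstract identifies as a cause of
    # degraded, buzzy quality; the paper instead shapes each pulse with
    # the LF-model of the glottal source.
    n_samples = int(duration_s * fs)
    period = int(round(fs / f0_hz))       # pitch period in samples
    excitation = np.zeros(n_samples)
    excitation[::period] = 1.0            # one impulse per pitch mark
    return excitation

if __name__ == "__main__":
    e = delta_pulse_excitation()
    print("samples:", e.size, "pulses:", int(e.sum()))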
SPEECH 1998
A stochastic model of intonation for text-to-speech synthesis
This paper presents a stochastic model of intonation contours for use in text-to-speech synthesis. The model has two modules, a linguistic module that generates abstract prosodic […] from text...
Jean Véronis, Philippe Di Cristo, Fabienne ...
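To make the two-module split concrete, here is a minimal, hypothetical Python sketch (the label set, F0 targets, frame counts, and rule-based labelling are assumptions for illustration; the paper's actual modules are stochastic and trained from data): one module derives abstract prosodic labels from text, and a second maps those labels to an F0 contour.

import numpy as np

def linguistic_module(words):
    # Toy stand-in for the linguistic module: tag longer words 'H'
    # (high target), short ones 'L', and mark the utterance end 'L%'.
    labels = ['H' if len(w) > 3 else 'L' for w in words]
    labels[-1] = 'L%'
    return labels

def phonetic_module(labels, frames_per_label=20):
    # Toy stand-in for the second module: map each abstract label to an
    # F0 target (Hz) and interpolate a smooth contour between targets.
    targets = {'H': 180.0, 'L': 120.0, 'L%': 90.0}
    anchors = np.array([targets[l] for l in labels])
    n_frames = frames_per_label * len(labels)
    x = np.linspace(0.0, len(anchors) - 1, n_frames)
    return np.interp(x, np.arange(len(anchors)), anchors)

if __name__ == "__main__":
    words = "a stochastic model of intonation for synthesis".split()
    labels = linguistic_module(words)
    contour = phonetic_module(labels)
    print(labels)
    print(contour[:5].round(1), "...", contour[-1].round(1))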
BIOSTEC 2011 (Healthcare)
On the Benefits of Speech and Touch Interaction with Communication Services for Mobility Impaired Users
Although technology for communication has evolved tremendously over the past decades, mobility-impaired individuals still face many difficulties interacting with communication serv...
Carlos Galinho Pires, Fernando Miguel Pinto, Eduar...
MLMI 2005 (Springer)
VACE Multimodal Meeting Corpus
In this paper, we report on the infrastructure we have developed to support our research on multimodal cues for understanding meetings. With our focus on multimodality, w...
Lei Chen 0004, R. Rose, Ying Qiao, Irene Kimbara, ...