Sciweavers
Search results for "Multimodal Speech Synthesis" (551 results, page 8 of 111)
ICMCS 2010 (IEEE)
Exploiting multimodal data fusion in robust speech recognition
This article introduces automatic speech recognition based on Electro-Magnetic Articulography (EMA). Movements of the tongue, lips, and jaw are tracked by an EMA device, and these are...
Panikos Heracleous, Pierre Badin, Gérard Ba...
ADS 2004 (Springer)
Dimensional Emotion Representation as a Basis for Speech Synthesis with Non-extreme Emotions
Past attempts to model emotions for speech synthesis have focused on extreme, “basic” emotion categories. The present paper suggests an alternative representation of emotional ...
Marc Schröder
TSD 2010 (Springer)
Expressive Gibberish Speech Synthesis for Affective Human-Computer Interaction
In this paper we present our study on expressive gibberish speech synthesis as a means for affective communication between computing devices, such as a robot or an avatar, and thei...
Selma Yilmazyildiz, Lukas Latacz, Wesley Mattheyse...
ICASSP 2011 (IEEE)
HNM-based MFCC+F0 extractor applied to statistical speech synthesis
Currently, the statistical framework based on Hidden Markov Models (HMMs) plays a prominent role in speech synthesis, while voice conversion systems based on Gaussian Mixture Model...
Daniel Erro, Iñaki Sainz, Eva Navas, Inma H...
KI 2008 (Springer)
Enhancing Animated Agents in an Instrumented Poker Game
In this paper we present an interactive poker game in which one human user plays against two animated agents using RFID-tagged poker cards. The game is used as a showcase to illust...
Marc Schröder, Patrick Gebhard, Marcela Charf...