Sciweavers

551 search results - page 60 / 111
Search: Multimodal Speech Synthesis
ICDE
2006
IEEE
The eNTERFACE'05 Audio-Visual Emotion Database
This paper presents an audio-visual emotion database that can be used as a reference database for testing and evaluating video, audio or joint audio-visual emotion recognition alg...
O. Martin, Irene Kotsia, Benoit M. Macq, Ioannis P...
HCI
2007
Artificial Psychology
In the field of human-robot interaction (HRI), providing a robot with human-like emotions and psychology can help achieve natural interaction. Previous HRI research focuse...
Zhiliang Wang
ISVC
2009
Springer
Speech-Driven Facial Animation Using a Shared Gaussian Process Latent Variable Model
Abstract. In this work, synthesis of facial animation is done by modelling the mapping between facial motion and speech using the shared Gaussian process latent variable model. Bot...
Salil Deena, Aphrodite Galata
CSL
1999
Springer
A hidden Markov-model-based trainable speech synthesizer
This paper presents a new approach to speech synthesis in which a set of cross-word decision-tree state-clustered context-dependent hidden Markov models are used to define a set o...
R. E. Donovan, Philip C. Woodland
SIGPRO
2008
Line spectral pairs
A minimum generation error (MGE) criterion has been proposed to address the issues of maximum likelihood (ML) based HMM training in HMM-based speech synthesis. In this paper...
Ian Vince McLoughlin
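The entry above concerns line spectral pairs (LSPs), a widely used re-parameterisation of linear prediction (LPC) coefficients in speech coding and synthesis. As a minimal sketch (not taken from the paper), the line spectral frequencies can be computed by rooting the symmetric and antisymmetric polynomials P(z) = A(z) + z^-(p+1) A(1/z) and Q(z) = A(z) - z^-(p+1) A(1/z) formed from the LPC polynomial A(z); the example LPC vector is illustrative only.

```python
import numpy as np

def lpc_to_lsf(a):
    """Convert LPC coefficients [1, a1, ..., ap] to line spectral
    frequencies (radians in (0, pi)), sorted ascending."""
    a = np.asarray(a, dtype=float)
    ext = np.concatenate([a, [0.0]])       # [1, a1, ..., ap, 0]
    p_poly = ext + ext[::-1]               # P(z): symmetric polynomial
    q_poly = ext - ext[::-1]               # Q(z): antisymmetric polynomial
    lsf = []
    for poly in (p_poly, q_poly):
        angles = np.angle(np.roots(poly))
        # Keep one of each conjugate pair; drop the trivial
        # roots at z = 1 (angle 0) and z = -1 (angle pi).
        lsf.extend(w for w in angles if 1e-6 < w < np.pi - 1e-6)
    return np.sort(np.array(lsf))

# Illustrative stable 2nd-order LPC filter: A(z) = 1 - 1.2 z^-1 + 0.5 z^-2
lsf = lpc_to_lsf([1.0, -1.2, 0.5])
```

For a stable minimum-phase A(z), the roots of P and Q lie on the unit circle and their angles interlace, which is what makes LSFs attractive for quantisation and interpolation.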