Sciweavers

Search: Multimodal Speech Synthesis
551 results, page 25 of 111
ACL 2007
A Multimodal Interface for Access to Content in the Home
In order to effectively access the rapidly increasing range of media content available in the home, new kinds of more natural interfaces are needed. In this paper, we explore the ...
Michael Johnston, Luis Fernando D'Haro, Michelle L...
ISM 2008 (IEEE)
Multimodal Speaker Segmentation in Presence of Overlapped Speech Segments
We propose a multimodal speaker segmentation algorithm with two main contributions: First, we suggest a hidden Markov model architecture that performs fusion of the three modaliti...
Viktor Rozgic, Kyu Jeong Han, Panayiotis G. Georgi...
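The snippet above names only a hidden Markov model architecture that fuses three modalities for speaker segmentation; the abstract is truncated, so the following is a minimal, generic sketch of HMM-style segmentation over fused per-frame features, not the authors' architecture. The feature dimensionality, number of speakers, and sticky self-loop probability are illustrative assumptions.

```python
# Minimal sketch (not the paper's method): feature-level fusion of modality
# streams, then Viterbi decoding over speaker states with diagonal-Gaussian
# emissions. Dimensions and parameters below are assumptions.
import numpy as np

def viterbi_segment(obs, means, variances, stay_prob=0.99):
    """Assign each frame in obs (T, D) to one of K speaker states."""
    T, D = obs.shape
    K = means.shape[0]
    # Per-frame log-likelihood under each speaker's diagonal Gaussian.
    diff = obs[:, None, :] - means[None, :, :]                       # (T, K, D)
    loglik = -0.5 * np.sum(diff**2 / variances
                           + np.log(2 * np.pi * variances), axis=2)  # (T, K)
    # Sticky transition matrix discourages rapid speaker switching.
    switch_prob = (1.0 - stay_prob) / max(K - 1, 1)
    logA = np.full((K, K), np.log(switch_prob))
    np.fill_diagonal(logA, np.log(stay_prob))
    # Standard Viterbi recursion with backpointers.
    delta = loglik[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + logA            # rows: previous state
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(K)] + loglik[t]
    # Trace back the best state sequence.
    path = np.zeros(T, dtype=int)
    path[-1] = int(np.argmax(delta))
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path

# Toy usage: two "speakers", 4-dim fused features (e.g. audio + visual cues).
rng = np.random.default_rng(0)
obs = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(3, 1, (50, 4))])
means = np.array([[0.0] * 4, [3.0] * 4])
variances = np.ones((2, 4))
print(viterbi_segment(obs, means, variances))
```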
IUI 2004 (ACM)
Speech and sketching for multimodal design
While sketches are commonly and effectively used in the early stages of design, some information is far more easily conveyed verbally than by sketching. In response, we have combi...
Aaron Adler, Randall Davis
INTERSPEECH 2010
Speech synthesis by modeling harmonics structure with multiple function
In this paper, we present a new approach to speech synthesis, in which speech utterances are synthesized using the parameters of a spectro-modeling function (Multiple function)...
Toru Nakashika, Ryuki Tachibana, Masafumi Nishimur...
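The paper's "Multiple function" spectral model is not described in the truncated snippet; the sketch below only illustrates the general idea of harmonic-structure modeling the title refers to, rebuilding a voiced frame as a sum of sinusoids at multiples of F0 with amplitudes drawn from a smooth spectral envelope. The F0, sample rate, frame length, and decaying envelope are assumptions, not values from the paper.

```python
# Generic sum-of-harmonics synthesis for one voiced frame (an illustration,
# not the paper's "Multiple function" model). All parameters are assumptions.
import numpy as np

def synthesize_frame(f0, envelope, sr=16000, duration=0.02):
    """Rebuild a voiced frame from harmonics of f0.

    f0       : fundamental frequency in Hz.
    envelope : callable mapping frequency (Hz) -> linear amplitude.
    """
    t = np.arange(int(sr * duration)) / sr
    n_harmonics = int((sr / 2) // f0)      # keep harmonics below Nyquist
    frame = np.zeros_like(t)
    for k in range(1, n_harmonics + 1):
        freq = k * f0
        frame += envelope(freq) * np.sin(2 * np.pi * freq * t)
    return frame

# Toy spectral envelope: amplitude decaying with frequency (an assumption).
envelope = lambda f: np.exp(-f / 1500.0)
frame = synthesize_frame(f0=120.0, envelope=envelope)
print(frame.shape, frame.max())
```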
HCI 2007
Unobtrusive Multimodal Emotion Detection in Adaptive Interfaces: Speech and Facial Expressions
Two unobtrusive modalities for automatic emotion recognition are discussed: speech and facial expressions. First, an overview is given of emotion recognition studies based on a com...
Khiet P. Truong, David A. van Leeuwen, Mark A. Nee...