Sciweavers
Search results for "Speech dialogue with facial displays": 28 results, page 5 of 6
ICMI 2005 (Springer) · Biometrics
A first evaluation study of a database of kinetic facial expressions (DaFEx)
In this paper we present DaFEx (Database of Facial Expressions), a database created to provide a benchmark for evaluating the facial expressivity of Embo...
Alberto Battocchi, Fabio Pianesi, Dina Goren-Bar
ICMI 2007 (Springer) · Biometrics
Detecting communication errors from visual cues during the system's conversational turn
Automatic detection of communication errors in conversational systems has been explored extensively in the speech community. However, most previous studies have used only acoustic...
Sy Bor Wang, David Demirdjian, Trevor Darrell
ITS 2010 (Springer) · Multimedia
A Time for Emoting: When Affect-Sensitivity Is and Isn't Effective at Promoting Deep Learning
We have developed and evaluated an affect-sensitive version of AutoTutor, a dialogue-based ITS that simulates human tutors. While the original AutoTutor is sensitive to learners’...
Sidney K. D'Mello, Blair Lehman, Jeremiah Sullins,...
CW 2006 (IEEE)
An Interactive Mixed Reality Framework for Virtual Humans
In this paper, we present a simple and robust Mixed Reality (MR) framework that allows real-time interaction with Virtual Humans in real and virtual environments under consist...
Arjan Egges, George Papagiannakis, Nadia Magnenat-...
CIVR 2008 (Springer) · Image Analysis
Fusion of audio and visual cues for laughter detection
Past research on automatic laughter detection has focused mainly on audio-based detection. Here we present an audiovisual approach to distinguishing laughter from speech, and we sh...
Stavros Petridis, Maja Pantic