
AIHC 2007, Springer

Modeling Naturalistic Affective States Via Facial, Vocal, and Bodily Expressions Recognition

Affective and human-centered computing have attracted considerable attention in recent years, mainly due to the abundance of devices and environments able to exploit multimodal input on the part of their users and adapt their functionality to user preferences or individual habits. In the quest to receive feedback from users in an unobtrusive manner, the combination of facial and hand gestures with prosody information allows us to infer the users’ emotional state, relying on the best-performing modality in cases where one modality suffers from noise or poor sensing conditions. In this paper, we describe a multi-cue, dynamic approach to detecting emotion in naturalistic video sequences. In contrast to audiovisual material recorded under strictly controlled conditions, the proposed approach focuses on sequences taken from nearly real-world situations. Recognition is performed via a ‘Simple Recurrent Network’, which lends itself well to modeling dynamic events in both the user’s facial ex...
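
To illustrate the kind of architecture the abstract describes, the sketch below shows a minimal Elman-style Simple Recurrent Network that fuses per-frame facial, gesture, and prosody feature vectors and produces an emotion prediction per time step. The feature dimensions, the four-class emotion label set, and the fusion-by-concatenation choice are assumptions for illustration, not the authors' exact configuration.

# Minimal sketch (assumptions noted above): Elman SRN over concatenated multimodal cues.
import torch
import torch.nn as nn

class MultiCueSRN(nn.Module):
    def __init__(self, face_dim=20, gesture_dim=10, prosody_dim=12,
                 hidden_dim=64, n_emotions=4):
        super().__init__()
        input_dim = face_dim + gesture_dim + prosody_dim
        # nn.RNN with the default tanh nonlinearity is the classic Elman recurrent network.
        self.rnn = nn.RNN(input_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, n_emotions)

    def forward(self, face, gesture, prosody):
        # Early fusion: concatenate the per-frame cues into one feature vector.
        x = torch.cat([face, gesture, prosody], dim=-1)    # (batch, time, input_dim)
        hidden_states, _ = self.rnn(x)                     # (batch, time, hidden_dim)
        return self.classifier(hidden_states)              # per-frame emotion logits

if __name__ == "__main__":
    model = MultiCueSRN()
    batch, frames = 2, 50
    logits = model(torch.randn(batch, frames, 20),
                   torch.randn(batch, frames, 10),
                   torch.randn(batch, frames, 12))
    print(logits.shape)  # torch.Size([2, 50, 4])

Because the recurrent hidden state carries information across frames, a noisy or occluded modality at one instant can be compensated by the others and by temporal context, which is the dynamic, multi-cue behavior the paper targets.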
Type: Conference
Year: 2007
Where: AIHC
Authors: Kostas Karpouzis, George Caridakis, Loïc Kessous, Noam Amir, Amaryllis Raouzaiou, Lori Malatesta, Stefanos D. Kollias