Sciweavers

29 search results - page 3 / 6
» Multimodal emotion recognition from expressive faces, body g...
AIHC 2007, Springer
Modeling Naturalistic Affective States Via Facial, Vocal, and Bodily Expressions Recognition
Affective and human-centered computing have attracted considerable attention in recent years, mainly due to the abundance of devices and environments able to exploit multimodal i...
Kostas Karpouzis, George Caridakis, Loïc Kess...
ICCV 2005, IEEE
Tracking Body Parts of Multiple People for Multi-person Multimodal Interface
Although large displays could allow several users to work together and move freely in a room, their associated interfaces are limited to contact devices that must generally be s...
Sébastien Carbini, Jean-Emmanuel Viallet, O...
PRIMA 2009, Springer
An Adaptive Agent Model for Emotion Reading by Mirroring Body States and Hebbian Learning
In recent years, the topic of emotion reading has increasingly received attention from researchers in Cognitive Science and Artificial Intelligence. To study this phenomenon, in th...
Tibor Bosse, Zulfiqar A. Memon, Jan Treur
ICMCS 2006, IEEE
Combined Gesture-Speech Analysis and Speech Driven Gesture Synthesis
Multimodal speech and speaker modeling and recognition are widely accepted as vital aspects of state-of-the-art human-machine interaction systems. While correlations between speec...
Mehmet Emre Sargin, Oya Aran, Alexey Karpov, Ferda...
NORDICHI 2006, ACM
The FaceReader: measuring instant fun of use
Recently, increasing attention has been paid to emotions in the domain of Human-Computer Interaction. When evaluating a product, one can no longer ignore the emotions a product...
Bieke Zaman, Tara Shrimpton-Smith