Sciweavers

12 search results, page 2 of 3, for "Visual Prosody: Facial Movements Accompanying Speech"
TMM 2011
Audiovisual Discrimination Between Speech and Laughter: Why and When Visual Information Might Help
Past research on automatic laughter classification / detection has focused mainly on audio-based approaches. Here we present an audiovisual approach to distinguishing laughter fr...
Stavros Petridis, Maja Pantic
IJCV 2006
Representation Analysis and Synthesis of Lip Images Using Dimensionality Reduction
Understanding facial expressions in image sequences is an easy task for humans. Some of us are capable of lipreading by interpreting the motion of the mouth. Automatic lipreading b...
Michal Aharon, Ron Kimmel
ICMI 2004 (Springer)
Exploiting prosodic structuring of coverbal gesticulation
Although gesture recognition has been studied extensively, communicative, affective, and biometrical “utility” of natural gesticulation remains relatively unexplored. One of t...
Sanshzar Kettebekov
JOCN 2007
Neural Correlates of Multisensory Integration of Ecologically Valid Audiovisual Events
A question that has emerged over recent years is whether audiovisual (AV) speech perception is a special case of multisensory perception. Electrophysiological (ERP) studies have f...
Jeroen J. Stekelenburg, Jean Vroomen
CIVR 2008 (Springer)
Fusion of audio and visual cues for laughter detection
Past research on automatic laughter detection has focused mainly on audio-based detection. Here we present an audiovisual approach to distinguishing laughter from speech and we sh...
Stavros Petridis, Maja Pantic