Sciweavers

137 search results - page 26 / 28
» Real-Time Facial Expression Recognition for Natural Interact...
AIHC 2007 (Springer)
Gaze-X: Adaptive, Affective, Multimodal Interface for Single-User Office Scenarios
This paper describes an intelligent system that we developed to support affective multimodal human-computer interaction (AMM-HCI), where the user’s actions and emotions are modele...
Ludo Maat, Maja Pantic
ICMI 2005 (Springer)
Inferring body pose using speech content
Untethered multimodal interfaces are more attractive than tethered ones because they are more natural and expressive for interaction. Such interfaces usually require robust vision...
Sy Bor Wang, David Demirdjian
WSDM 2010 (ACM)
Early Online Identification of Attention Gathering Items In Social Media
Activity in social media such as blogs, micro-blogs, social networks, etc., is manifested via interaction that involves text, images, links and other information items. Naturally, s...
Michael Mathioudakis, Nick Koudas, Peter Marbach
AVI 2008
Exploring emotions and multimodality in digitally augmented puppeteering
Recently, multimodal and affective technologies have been adopted to support expressive and engaging interaction, raising a plethora of new research questions. Among the chall...
Lassi A. Liikkanen, Giulio Jacucci, Eero Huvio, To...
HRI 2006 (ACM)
Using context and sensory data to learn first and second person pronouns
We present a method of grounded word learning that is powerful enough to learn the meanings of first- and second-person pronouns. The model uses the understood words in an utteran...
Kevin Gold, Brian Scassellati