Sciweavers

76 search results - page 3 / 16
» Predicting Subjectivity in Multimodal Conversations
COLING
2010
13 years 1 month ago
Latent Mixture of Discriminative Experts for Multimodal Prediction Modeling
During face-to-face conversation, people naturally integrate speech, gestures, and higher-level language interpretations to predict the right time to start talking or to give backchannel...
Derya Ozkan, Kenji Sagae, Louis-Philippe Morency
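As a rough illustration of the general mixture-of-experts idea named in this title (not the paper's actual Latent Mixture of Discriminative Experts model), the sketch below combines per-modality logistic "experts" through a gating network to score whether a backchannel is due. Every feature name and weight is invented for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical per-modality feature vectors for one time frame.
x_speech  = np.array([0.8, 0.1])   # e.g. pause length, pitch slope
x_gesture = np.array([0.3, 0.9])   # e.g. head-nod energy, gaze shift
x_lexical = np.array([0.5, 0.2])   # e.g. end-of-clause flag, filler word

# One discriminative expert (logistic regressor) per modality;
# the weights here are made up for illustration, not learned.
experts = [
    (x_speech,  np.array([1.2, -0.4]),  0.1),
    (x_gesture, np.array([0.7,  1.1]), -0.2),
    (x_lexical, np.array([0.9,  0.3]),  0.0),
]

# A gating network scores each expert from the concatenated features.
x_all = np.concatenate([x_speech, x_gesture, x_lexical])
W_gate = np.full((3, x_all.size), 0.1)   # made-up gate weights
gate = softmax(W_gate @ x_all)

# Mixture prediction: gate-weighted sum of the expert probabilities.
p_experts = np.array([sigmoid(w @ x + b) for x, w, b in experts])
p_backchannel = float(gate @ p_experts)
print(f"P(backchannel now) = {p_backchannel:.3f}")
```

In a trained system the expert and gate weights would be fit jointly on annotated conversation data; the point of the structure is that the gate can shift trust between modalities from frame to frame.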
ATAL
2008
Springer
13 years 8 months ago
The design of a generic framework for integrating ECA components
Embodied Conversational Agents (ECAs) are life-like, computer-generated characters that interact with human users in face-to-face multimodal conversations. ECA systems are general...
Hung-Hsuan Huang, Toyoaki Nishida, Aleksandra Cere...
COST
2007
Springer
124 views · Multimedia
14 years 10 days ago
Mutually Coordinated Anticipatory Multimodal Interaction
We introduce our research on anticipatory and coordinated interaction between a virtual human and a human partner. Rather than adhering to the turn-taking paradigm, we choose to in...
Anton Nijholt, Dennis Reidsma, Herwin van Welberge...
ICMI
2004
Springer
116 views · Biometrics
13 years 11 months ago
Towards integrated microplanning of language and iconic gesture for multimodal output
When talking about spatial domains, humans frequently accompany their explanations with iconic gestures to depict what they are referring to. For example, when giving directions, ...
Stefan Kopp, Paul Tepper, Justine Cassell
IVA
2010
Springer
13 years 4 months ago
Dimensional Emotion Prediction from Spontaneous Head Gestures for Interaction with Sensitive Artificial Listeners
This paper focuses on dimensional prediction of emotions from spontaneous conversational head gestures. It maps the amount and direction of head motion, and occurrences of head nod...
Hatice Gunes, Maja Pantic
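To make the described mapping concrete, here is a minimal, hypothetical sketch of the general approach: simple head-motion statistics are regressed onto continuous emotion dimensions. Valence and arousal stand in for the dimensions, and the regression weights are invented; this is not the authors' trained model.

```python
import numpy as np

# Hypothetical frame-level head pose angles (pitch, yaw) in degrees.
rng = np.random.default_rng(0)
head_pitch = rng.normal(0.0, 4.0, size=250)
head_yaw   = rng.normal(0.0, 6.0, size=250)

# Motion statistics over the window: amount and direction of motion.
features = np.array([
    np.abs(np.diff(head_pitch)).mean(),   # vertical motion (nod-like)
    np.abs(np.diff(head_yaw)).mean(),     # horizontal motion (shake-like)
    head_pitch.std(),
    head_yaw.std(),
])

# Made-up linear regressors mapping the statistics onto continuous
# emotion dimensions; a real system learns these weights from
# dimensionally annotated conversational data.
W = np.array([
    [0.30, -0.20, 0.05, -0.05],   # valence
    [0.25,  0.25, 0.10,  0.10],   # arousal
])
b = np.array([0.0, 0.0])

valence, arousal = W @ features + b
print(f"valence={valence:+.2f}, arousal={arousal:+.2f}")
```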