Multimodal expressive embodied conversational agents

In this paper we present our work toward the creation of a multimodal expressive Embodied Conversational Agent (ECA). Our agent, called Greta, exhibits nonverbal behaviors synchronized with speech. We use the taxonomy of communicative functions developed by Isabella Poggi [22] to specify the agent's behavior. Based on this taxonomy, a representation language, the Affective Presentation Markup Language (APML), has been defined to drive the animation of the agent [4]. Recently, we have been working on creating not a generic agent but an agent with individual characteristics, concentrating on the behavior specification for an individual agent. In particular, we have defined a set of parameters to change the expressivity of the agent's behaviors: six parameters have been defined and implemented to encode gesture and face expressivity. We have performed perceptual studies of our expressivity model.

General Terms: Algorithms
Categories and Subject Descriptors: H.5.2 ...
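The abstract states that six parameters modulate the expressivity of the agent's gestures and facial behavior, without listing them here. The sketch below is a hypothetical illustration of the general idea: a set of normalized expressivity parameters that scale baseline animation values (the parameter names and scaling formulas are assumptions for illustration, not the paper's actual model).

```python
from dataclasses import dataclass

@dataclass
class Expressivity:
    """Hypothetical expressivity parameters, each in [-1.0, 1.0].

    The names below are illustrative placeholders; the abstract only
    says that six parameters encode gesture and face expressivity.
    """
    overall_activation: float = 0.0  # how much behavior is produced
    spatial_extent: float = 0.0      # amplitude of movement
    temporal_extent: float = 0.0     # speed of execution
    fluidity: float = 0.0            # smoothness across gestures
    power: float = 0.0               # strength / acceleration
    repetition: float = 0.0          # tendency to repeat strokes


def scale_gesture(amplitude: float, duration: float,
                  e: Expressivity) -> tuple[float, float]:
    """Modulate a baseline gesture by expressivity (illustrative).

    Larger spatial_extent widens the gesture; larger temporal_extent
    shortens its duration (i.e., makes it faster). The 0.5 and 0.3
    weights are arbitrary choices for this sketch.
    """
    amp = amplitude * (1.0 + 0.5 * e.spatial_extent)
    dur = duration * (1.0 - 0.3 * e.temporal_extent)
    return amp, dur


# A maximally expansive setting widens the gesture by 50%:
wide, _ = scale_gesture(1.0, 1.0, Expressivity(spatial_extent=1.0))
```

A neutral `Expressivity()` (all zeros) leaves the baseline gesture unchanged, which matches the intuition that expressivity parameters describe deviations from a generic agent's default behavior.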
Catherine Pelachaud
Added 26 Jun 2010
Updated 26 Jun 2010
Type Conference
Year 2005
Where MM
Authors Catherine Pelachaud