
IVA 2010, Springer

Realizing Multimodal Behavior - Closing the Gap between Behavior Planning and Embodied Agent Presentation

Abstract. Generating coordinated multimodal behavior for an embodied agent (speech, gesture, facial expression, ...) is challenging. It requires a high degree of animation control, in particular when reactive behaviors are needed. We propose distinguishing between realization planning, where gesture and speech are processed symbolically using the Behavior Markup Language (BML), and presentation, which is controlled by a lower-level animation language (EMBRScript). Reactive behaviors can bypass planning and directly control presentation. In this paper, we show how to define a behavior lexicon, how this lexicon relates to BML, and how to resolve timing using formal constraint solvers. We conclude by demonstrating how to integrate reactive emotional behaviors.
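The timing-resolution step mentioned in the abstract can be illustrated with a small, hypothetical sketch: if symbolic sync points (speech syncs, gesture phase boundaries) are treated as time variables and BML-style alignment rules as difference constraints, a standard Bellman-Ford pass either yields a consistent schedule or reports a conflict. This is only one possible formalization; the solver, the constraint encoding, and all names and durations below are illustrative assumptions, not taken from the paper.

```python
def solve_difference_constraints(constraints):
    """constraints: list of (i, j, c) meaning t[j] - t[i] <= c.
    Returns a feasible assignment {var: time} or None if inconsistent."""
    nodes = {v for i, j, _ in constraints for v in (i, j)}
    # Standard construction: a virtual source reaches every variable with weight 0.
    edges = [("__src__", v, 0.0) for v in nodes] + list(constraints)
    dist = {v: float("inf") for v in nodes}
    dist["__src__"] = 0.0
    for _ in range(len(nodes)):          # |V| - 1 Bellman-Ford relaxation rounds
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            break
    for u, v, w in edges:                # further relaxation => negative cycle
        if dist[u] + w < dist[v]:
            return None                  # constraints are contradictory
    dist.pop("__src__")
    return dist

def eq(a, b, c):
    """t[b] - t[a] == c, encoded as two <= constraints."""
    return [(a, b, c), (b, a, -c)]

def at_least(a, b, c):
    """t[b] - t[a] >= c, i.e. t[a] - t[b] <= -c."""
    return [(b, a, -c)]

# Illustrative (hypothetical) lexicon entry: the gesture stroke must coincide
# with a speech sync point 0.8 s into the utterance, the preparation phase
# needs at least 0.4 s, and retraction ends 0.5 s after the stroke.
constraints = (
    eq("speech:start", "speech:tm1", 0.8)
    + eq("speech:tm1", "gesture:stroke", 0.0)
    + at_least("gesture:start", "gesture:stroke", 0.4)
    + eq("gesture:stroke", "gesture:end", 0.5)
)

times = solve_difference_constraints(constraints)
if times is None:
    print("timing constraints are inconsistent; the planner must relax them")
else:
    t0 = min(times.values())             # shift so the earliest event is at 0 s
    for name, t in sorted(times.items(), key=lambda kv: kv[1]):
        print(f"{name:16s} {t - t0:5.2f} s")
```

Running the sketch prints a schedule in which the gesture preparation begins 0.4 s before the stroke, which lands on the speech sync point at 0.8 s; the paper's actual solver and lexicon format are not reproduced here.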
Type Conference
Year 2010
Where IVA
Authors Michael Kipp, Alexis Heloir, Marc Schröder, Patrick Gebhard