COST 2008, Springer
Multimodal Human Machine Interactions in Virtual and Augmented Reality

Virtual worlds are developing rapidly over the internet. They are visited by avatars and staffed with Embodied Conversational Agents (ECAs). An avatar is a representation of a physical person. Each person controls one or several avatars and usually receives feedback from the virtual world on an audio-visual display. Ideally, all senses should be used to feel fully embedded in a virtual world; sound, vision and sometimes touch are the available modalities. This paper reviews the technological developments that enable audio-visual interactions in virtual and augmented reality worlds. Emphasis is placed on speech and gesture interfaces, and on talking-face analysis and synthesis. Key words: HMI, Multimodality, Speech, Face, Gesture, Virtual worlds
Added: 18 Oct 2010
Updated: 18 Oct 2010
Type: Conference
Year: 2008
Where: COST
Authors: Gérard Chollet, Anna Esposito, Annie Gentes, Patrick Horain, Walid Karam, Zhenbo Li, Catherine Pelachaud, Patrick Perrot, Dijana Petrovska-Delacrétaz, Dianle Zhou, Leila Zouari