ICAT 2006, IEEE

An Evaluation of an Augmented Reality Multimodal Interface Using Speech and Paddle Gestures

This paper presents an evaluation of an augmented reality (AR) multimodal interface that uses combined speech and paddle gestures to interact with virtual objects in the real world. We briefly describe our AR multimodal interface architecture and our multimodal fusion strategies, which combine time-based and domain semantics. We then present the results of a user study comparing multimodal input with gesture-only input. The results show that combining speech and paddle gestures improves the efficiency of user interaction. Finally, we offer design recommendations for developing other multimodal AR interfaces.
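The abstract's time-based fusion can be sketched roughly as follows: a speech command and a paddle gesture are merged into a single interaction event when their timestamps fall within a small window. This is a minimal illustrative sketch; the event representation, function names, and window value are assumptions, not details from the paper, and the paper's domain-semantics checks are only noted in a comment.

```python
from dataclasses import dataclass
from typing import Optional

# Assumed fusion window; the paper does not specify a value here.
FUSION_WINDOW = 0.5  # seconds

@dataclass
class InputEvent:
    modality: str    # "speech" or "gesture"
    content: str     # e.g. a recognized command or a pointed-at object
    timestamp: float # seconds

def fuse(speech: InputEvent, gesture: InputEvent) -> Optional[dict]:
    """Combine a speech and a gesture event if they are temporally close.

    A full system would also apply domain semantics, e.g. checking that
    the gesture's target is a valid referent for the spoken command;
    this sketch checks only the temporal relationship.
    """
    if abs(speech.timestamp - gesture.timestamp) <= FUSION_WINDOW:
        return {
            "command": speech.content,
            "target": gesture.content,
            "time": max(speech.timestamp, gesture.timestamp),
        }
    return None  # events too far apart in time to be one interaction

# Usage: a spoken "move" command paired with a near-simultaneous paddle point
s = InputEvent("speech", "move", 10.10)
g = InputEvent("gesture", "object:cup", 10.35)
print(fuse(s, g))  # events are 0.25 s apart, so they fuse
```

In this scheme, gesture-only input would fall back to a default action on the pointed-at object, while fused input lets speech disambiguate the intended command.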
Added 11 Jun 2010
Updated 11 Jun 2010
Type Conference
Year 2006
Where ICAT
Authors Sylvia Irawati, Scott Green, Mark Billinghurst, Andreas Dünser, Heedong Ko