AFRIGRAPH
2001
ACM

A gesture processing framework for multimodal interaction in virtual reality

This article presents a gesture detection and analysis framework for modelling multimodal interactions. It is particularly designed for use in Virtual Reality (VR) applications and contains an abstraction layer for different sensor hardware. Within the framework, gestures are described by their characteristic spatio-temporal features, which at the lowest level are calculated by simple predefined detector modules, or nodes. These nodes can be connected by a data routing mechanism to perform more elaborate evaluation functions, thereby establishing complex detector nets. Typical problems arising from the time-dependent invalidation of multimodal utterances under immersive conditions led to the development of pre-evaluation concepts, which also support integration into scene graph based systems with traversal-type access. Examples of realized interactions illustrate applications that make use of the described concepts. CR Categories: D.2.2 [Software Engineering]: Desi...
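The detector-net idea described in the abstract — low-level feature nodes wired together by a data routing mechanism into more elaborate evaluators — can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's actual implementation; all class and method names (`DetectorNode`, `VelocityDetector`, `ThresholdDetector`) are hypothetical.

```python
class DetectorNode:
    """Base node: consumes input values, emits a computed feature value
    to downstream nodes via a simple data-routing list."""
    def __init__(self):
        self.outputs = []  # downstream nodes this node routes data to

    def connect(self, node):
        """Route this node's output into another node; returns it for chaining."""
        self.outputs.append(node)
        return node

    def emit(self, value):
        for node in self.outputs:
            node.receive(value)

    def receive(self, value):
        raise NotImplementedError


class VelocityDetector(DetectorNode):
    """Low-level detector: finite-difference speed from sampled 1-D positions."""
    def __init__(self, dt):
        super().__init__()
        self.dt = dt      # sampling interval in seconds
        self.prev = None  # previous position sample

    def receive(self, position):
        if self.prev is not None:
            self.emit(abs(position - self.prev) / self.dt)
        self.prev = position


class ThresholdDetector(DetectorNode):
    """Higher-level node: fires once an upstream feature exceeds a limit."""
    def __init__(self, limit):
        super().__init__()
        self.limit = limit
        self.fired = False

    def receive(self, value):
        if value > self.limit:
            self.fired = True
            self.emit(value)


# Wire a tiny detector net: position samples -> velocity -> "fast move" event.
vel = VelocityDetector(dt=0.1)
fast = ThresholdDetector(limit=2.0)
vel.connect(fast)

for x in [0.0, 0.05, 0.4, 0.9]:  # hand positions sampled at 10 Hz
    vel.receive(x)

print(fast.fired)  # the 0.35 m step within 0.1 s exceeds the 2.0 m/s limit
```

A real detector net would of course operate on 3-D sensor data behind the framework's hardware abstraction layer, but the routing pattern — simple nodes composed into complex evaluators — is the same.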
Marc Erich Latoschik
Type Conference