VRST
2009
ACM

Gaze behavior and visual attention model when turning in virtual environments

In this paper we analyze and try to predict the gaze behavior of users navigating in virtual environments. We focus on first-person navigation in virtual environments, which involves forward and backward motion on a ground surface with turns to the left or right. We found that gaze behavior in virtual reality, with input devices such as mice and keyboards, is similar to that observed in real life: participants anticipated turns as in real-life conditions, i.e. when they can actually move their body and head. We also found influences of visual occlusions and optic flow similar to those reported in the existing literature on real navigation. We then propose three simple gaze prediction models taking as input: (1) the motion of the user, given by the rotation velocity of the camera about the yaw axis (considered here as the virtual heading direction), and/or (2) the optic flow on screen. These models were tested with data collected in various virtual environments. Results show th...
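The abstract's first model family takes the camera's yaw rotation velocity as input and predicts that gaze shifts toward the direction of an upcoming turn (anticipation). The paper's actual model equations are not given here, so the sketch below is purely illustrative: a hypothetical linear mapping from yaw velocity to a horizontal gaze offset on screen, with a made-up `gain` constant and clamping to the screen bounds.

```python
def predict_gaze_x(yaw_velocity_deg_s: float,
                   screen_width_px: float = 1920.0,
                   gain: float = 4.0) -> float:
    """Illustrative linear gaze predictor (not the authors' model).

    Predicts the horizontal gaze position in pixels: the gaze point
    shifts from screen center toward the turn direction in proportion
    to the camera's yaw rotation velocity, capturing the turn
    anticipation described in the abstract. `gain` (pixels per deg/s)
    is a hypothetical constant chosen for illustration only.
    """
    center = screen_width_px / 2.0
    offset = gain * yaw_velocity_deg_s
    # Clamp the prediction to the visible screen.
    return max(0.0, min(screen_width_px, center + offset))
```

For example, a steady rightward turn at 30 deg/s would place the predicted gaze 120 px right of center under these assumed constants; zero yaw velocity leaves the prediction at screen center. The second model family, based on on-screen optic flow, would instead weight screen regions by flow magnitude, which is omitted here.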
Added 28 May 2010
Updated 28 May 2010
Type Conference
Year 2009
Where VRST
Authors Sébastien Hillaire, Anatole Lécuyer, Gaspard Breton, Tony Regia-Corte