ISWC 2000, IEEE

Finding Location Using Omnidirectional Video on a Wearable Computing Platform

In this paper we present a framework for a navigation system in an indoor environment using only omnidirectional video. Within a Bayesian framework, we seek the place and training image that best describe the current view and infer a location from them. The posterior distribution over the state space, conditioned on image similarity, is typically not Gaussian; it is therefore represented by samples, and the location is predicted and verified over time with the Condensation algorithm. The system requires no complicated feature detection, relying instead on a simple metric between pairs of images. Even with low-resolution input, the system can achieve accurate results with respect to the training data when given favourable initial conditions.
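The Condensation loop described above (resample, predict, then reweight by image similarity) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Gaussian motion model, the exponentiated sum-of-squared-differences similarity, and the one-dimensional location state are all assumptions made here for clarity; the paper only specifies that a simple image metric replaces feature detection.

```python
import numpy as np

rng = np.random.default_rng(0)

def image_similarity(obs, ref):
    # A simple pixel-wise metric (exponentiated negative SSD). The paper
    # uses "a simple metric between two images"; this particular choice
    # is an assumption for illustration.
    return np.exp(-np.mean((obs - ref) ** 2))

def condensation_step(particles, weights, training_images, observation,
                      motion_std=1.0):
    """One iteration of the Condensation (particle filter) algorithm over
    a 1-D location state indexed against the training images."""
    n = len(particles)
    # 1. Resample particles in proportion to their current weights
    #    (factored sampling).
    idx = rng.choice(n, size=n, p=weights)
    particles = particles[idx]
    # 2. Predict: diffuse each hypothesis with a simple Gaussian motion
    #    model (a stand-in for a real wearer-motion model).
    particles = particles + rng.normal(0.0, motion_std, size=n)
    particles = np.clip(particles, 0, len(training_images) - 1)
    # 3. Measure: reweight each hypothesised location by the similarity
    #    between the current image and the nearest training image.
    weights = np.array([
        image_similarity(observation, training_images[int(round(p))])
        for p in particles
    ])
    weights /= weights.sum()
    return particles, weights
```

A location estimate can then be read off as the weighted mean of the particles, `np.sum(particles * weights)`; because the posterior is represented by samples rather than a Gaussian, multimodal beliefs (e.g. two visually similar corridors) survive until later observations disambiguate them.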
Wasinee Rungsarityotin, Thad Starner
Type Conference
Year 2000
Where ISWC