The paper describes a map-building module in which image sequences from an omnidirectional camera are transformed into virtual top-view images and merged into a global dynamic map. After learning the environment from training images, the current image is compared against the training set by appearance-based matching; appropriate classification strategies then yield an estimate of the robot's current position.
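The appearance-based localization step can be sketched as a nearest-neighbor search over appearance vectors. The following is a minimal illustration, not the paper's implementation: it assumes each (top-view) image is reduced to a fixed-length vector, and it uses simple cosine-similarity 1-NN as the classification strategy; the function names and representation are illustrative assumptions.

```python
import numpy as np

def build_training_set(images, positions):
    # Flatten each training image into a unit-norm appearance vector.
    # (A real system would typically use a compressed representation,
    # e.g. PCA-projected features, rather than raw pixels.)
    X = np.stack([img.ravel().astype(float) for img in images])
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    return X, list(positions)

def estimate_position(X, positions, current):
    # Compare the current image against all training vectors by
    # cosine similarity and return the position label of the best match.
    v = current.ravel().astype(float)
    v /= np.linalg.norm(v)
    similarities = X @ v
    return positions[int(np.argmax(similarities))]
```

A query image taken near a known training location should then be matched to that location's label; more elaborate classifiers (k-NN, probabilistic voting) would follow the same interface.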