IJRR 2011

Visual-Inertial Sensor Fusion: Localization, Mapping and Sensor-to-Sensor Self-calibration

Visual and inertial sensors, in combination, are able to provide accurate motion estimates and are well-suited for use in many robot navigation tasks. However, correct data fusion, and hence overall performance, depends on careful calibration of the rigid body transform between the sensors. Obtaining this calibration information is typically difficult and time-consuming, and normally requires additional equipment. In this paper we describe an algorithm, based on the unscented Kalman filter, for self-calibration of the transform between a camera and an inertial measurement unit (IMU). Our formulation rests on a differential geometric analysis of the observability of the camera-IMU system; this analysis shows that the sensor-to-sensor transform, the IMU gyroscope and accelerometer biases, the local gravity vector, and the metric scene structure can be recovered from camera and IMU measurements alone. While calibrating the transform we simultaneously localize the IMU and build a map of...
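The core estimator described above is an unscented Kalman filter, which propagates uncertainty through the nonlinear camera-IMU motion and measurement models by sampling deterministic sigma points rather than linearizing. As a minimal, illustrative sketch (not the authors' implementation), the unscented transform at the heart of such a filter can be written as follows; the function names, parameter defaults, and the linear test function are assumptions for demonstration only:

```python
import numpy as np

def sigma_points(mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
    """Generate 2n+1 scaled sigma points and their weights (Merwe scaling)."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    # Matrix square root of the scaled covariance via Cholesky factorization.
    S = np.linalg.cholesky((n + lam) * cov)
    pts = [mean] + [mean + S[:, i] for i in range(n)] \
                 + [mean - S[:, i] for i in range(n)]
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))  # mean weights
    wc = wm.copy()                                     # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha**2 + beta)
    return np.array(pts), wm, wc

def unscented_transform(f, mean, cov):
    """Propagate a Gaussian (mean, cov) through a nonlinear function f."""
    pts, wm, wc = sigma_points(mean, cov)
    y = np.array([f(p) for p in pts])   # push each sigma point through f
    y_mean = wm @ y                     # weighted sample mean
    d = y - y_mean
    y_cov = (wc[:, None] * d).T @ d     # weighted sample covariance
    return y_mean, y_cov
```

In a full camera-IMU filter, `f` would be the IMU process model or the camera projection model, and the state would stack the IMU pose, velocity, gyroscope and accelerometer biases, the gravity vector, and the sensor-to-sensor transform; the sketch above only shows the uncertainty-propagation step the UKF is built on.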
Jonathan Kelly, Gaurav S. Sukhatme
Added 14 May 2011
Updated 14 May 2011
Type Journal
Year 2011
Where IJRR