HDRFusion: RGB-D SLAM with Auto Exposure
Shuda Li, Ankur Handa, Yang Zhang and Andrew Calway
The auto-exposure (AE) function on vision sensors is often a curse for vision algorithms - it changes the appearance of scene content according to the current pose of the sensor and the local lighting conditions, making it difficult to build reliable and invariant scene models. In this work we show how AE can instead be exploited to build high dynamic range (HDR) scene reconstructions whilst at the same time maintaining reliable 3-D tracking in the face of the appearance changes that AE causes.
The key contribution is a matching function - a normalised form of correlation in exposure space, derived via the inverse camera response function (CRF) - which under reasonable assumptions is independent of changes in exposure time and hence yields reliable model-to-frame matching. Appearance frames captured at different exposures are then fused to build an HDR reconstruction of the scene. A schematic of the complete algorithm is shown in the figure below. Results on synthetic and real data demonstrate that the method provides both improved tracking and maps with far greater dynamic range of luminosity.
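To make the two steps above concrete, here is a minimal Python/NumPy sketch. It assumes the inverse CRF is available as a 256-entry lookup table `inv_crf` mapping pixel intensity to linear exposure (irradiance times exposure time); the function names are illustrative and not from the released code, and the fusion shown follows the standard Debevec-Malik weighted averaging rather than the paper's exact scheme.

```python
import numpy as np

def zncc_exposure_space(patch_a, patch_b, inv_crf):
    """Exposure-invariant patch score: map intensities through the
    inverse CRF, then take zero-mean normalised cross-correlation.
    A change of exposure time multiplies every mapped value by the
    same constant, which the mean subtraction and normalisation
    cancel, so the score is independent of exposure time."""
    x = inv_crf[patch_a].ravel().astype(np.float64)
    y = inv_crf[patch_b].ravel().astype(np.float64)
    x -= x.mean()
    y -= y.mean()
    denom = np.linalg.norm(x) * np.linalg.norm(y)
    return float(x @ y / denom) if denom > 1e-12 else 0.0

def hat_weight(img):
    """Triangle weight: trust mid-range pixels, discount samples
    near under- and over-exposure (assumed weighting scheme)."""
    return 1.0 - np.abs(img.astype(np.float64) - 127.5) / 127.5

def fuse_hdr(frames, exposure_times, inv_crf, eps=1e-6):
    """Fuse frames captured at different exposures into one HDR
    irradiance map: average per-pixel log-irradiance estimates
    log(f^-1(I)) - log(dt), weighted by hat_weight."""
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, dt in zip(frames, exposure_times):
        w = hat_weight(img)
        acc += w * (np.log(inv_crf[img] + eps) - np.log(dt))
        wsum += w
    return np.exp(acc / np.maximum(wsum, eps))  # linear irradiance
```

The key design point is that both routines operate in exposure space rather than on raw intensities: matching becomes invariant to the AE's exposure changes, and fusion can combine those same differently exposed frames into a map whose dynamic range exceeds that of any single frame.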
Publications
HDRFusion: HDR SLAM using a low-cost auto-exposure RGB-D sensor, Shuda Li, Ankur Handa, Yang Zhang and Andrew Calway, arXiv:1604.00895.
[Source code] [Data]
Results