RGBD Relocalisation Using Pairwise Geometry and Concise Key Point Sets

Shuda Li and Andrew Calway

Since their release several years ago, low-cost RGB+Depth sensors such as the Microsoft Kinect have found widespread use in vision applications. One area in which their impact is undoubtedly evident is simultaneous localisation and mapping: the additional depth information significantly improves the quality, scale and resolution of the 3-D reconstructions that can be obtained compared with those from RGB cameras alone. We have built one such system, which fuses depth maps to build highly detailed surface reconstructions of scenes while simultaneously tracking the 3-D pose of the RGB-D sensor, all in real time. In this respect it is similar to other systems such as KinectFusion and its recent variants.
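As a rough illustration of the kind of pipeline involved, the sketch below back-projects a depth image into a metric 3-D point cloud using a pinhole camera model, the basic ingredient that depth-fusion systems build on. The intrinsic values, image size and function name are placeholders chosen for this example and are not taken from the paper or its source code.

```cpp
// Illustrative sketch only: intrinsics and image size are placeholder values,
// not those of the sensor or system described on this page.
#include <cstdint>
#include <cstdio>
#include <vector>

struct Point3 { float x, y, z; };

// Back-project a depth image (millimetres, 0 = invalid) into a metric 3-D
// point cloud in the camera frame using a pinhole model:
//   X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth.
std::vector<Point3> backProject(const std::vector<std::uint16_t>& depthMm,
                                int width, int height,
                                float fx, float fy, float cx, float cy)
{
    std::vector<Point3> cloud;
    cloud.reserve(depthMm.size());
    for (int v = 0; v < height; ++v) {
        for (int u = 0; u < width; ++u) {
            const std::uint16_t d = depthMm[v * width + u];
            if (d == 0) continue;              // skip missing depth readings
            const float z = d * 0.001f;        // millimetres -> metres
            cloud.push_back({(u - cx) * z / fx, (v - cy) * z / fy, z});
        }
    }
    return cloud;
}

int main()
{
    // A synthetic 4x3 depth image, all pixels 1.5 m from the camera.
    const int w = 4, h = 3;
    std::vector<std::uint16_t> depth(w * h, 1500);
    auto cloud = backProject(depth, w, h, 525.f, 525.f, 2.f, 1.5f);
    std::printf("%zu points, first z = %.2f m\n", cloud.size(), cloud[0].z);
}
```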

The difference with our work is that we have incorporated novel forms of representation and matching that allow very robust and very fast relocalisation: once a 3-D reconstruction (map) has been built, if we remove the sensor from the scene and then bring it back so that it observes the mapped part of the scene from an entirely different direction, the system near-instantaneously recovers the new 3-D pose, allowing tracking and mapping to continue. Importantly, and perhaps in contrast to other methods that have been reported, it can achieve relocalisation even when the sensor pose is a long way from the poses used to build the map. This latter property is key to making the system robust.

There are two novel contributions in this work. First, we use the pairwise 3-D geometry of key points, encoded within a graph-type representation, to perform fast and efficient matching of 3-D point sets. This is what enables us to match a currently observed point set with the relevant portion of the global model (map) in order to localise the sensor. Matching is based on the relative positioning of pairs of points, and key points are selected according to appearance saliency. Second, we maintain a carefully selected sparse key point representation of the scene, ensuring that a single key point lies within each node of an octree representation. This minimises the memory footprint of the representation whilst ensuring reliable relocalisation. A simplified sketch of these two ideas is given below.
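To make the two ideas concrete, the sketch below keeps at most one key point (the most salient) per cell of a fixed-size grid, which stands in here for the octree nodes, and accepts a pair of candidate correspondences only if the distance between the query points matches the distance between the model points, which is what a rigid motion preserves. The function names, cell size and tolerance are assumptions made for this example and are not taken from the paper or its source code.

```cpp
// Illustrative sketch only: a fixed-size grid replaces the full octree, and
// the numeric values are arbitrary example choices.
#include <cmath>
#include <cstdio>
#include <map>
#include <tuple>
#include <vector>

struct KeyPoint3D {
    float x, y, z;     // position in the map/world frame
    float saliency;    // appearance saliency score used to rank candidates
};

// Keep at most one key point per cubic cell of side cellSize, retaining the
// most salient one (the "concise key point set" idea, with grid cells
// standing in for octree nodes).
std::vector<KeyPoint3D> sparsify(const std::vector<KeyPoint3D>& pts, float cellSize)
{
    std::map<std::tuple<int, int, int>, KeyPoint3D> best;
    for (const auto& p : pts) {
        auto key = std::make_tuple(int(std::floor(p.x / cellSize)),
                                   int(std::floor(p.y / cellSize)),
                                   int(std::floor(p.z / cellSize)));
        auto it = best.find(key);
        if (it == best.end() || p.saliency > it->second.saliency)
            best[key] = p;
    }
    std::vector<KeyPoint3D> out;
    out.reserve(best.size());
    for (const auto& kv : best) out.push_back(kv.second);
    return out;
}

static float dist(const KeyPoint3D& a, const KeyPoint3D& b)
{
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

// Pairwise-geometry cue: correspondences (q1->m1, q2->m2) are only plausible
// if the query and model point pairs are the same distance apart.
bool pairwiseConsistent(const KeyPoint3D& q1, const KeyPoint3D& q2,
                        const KeyPoint3D& m1, const KeyPoint3D& m2,
                        float tolerance = 0.02f)  // metres; example value
{
    return std::fabs(dist(q1, q2) - dist(m1, m2)) < tolerance;
}

int main()
{
    std::vector<KeyPoint3D> map = {
        {0.f, 0.f, 0.f, 0.9f}, {0.02f, 0.f, 0.f, 0.5f}, {1.f, 0.f, 0.f, 0.7f}};
    auto concise = sparsify(map, 0.05f);   // the two nearby points collapse to one
    std::printf("kept %zu of %zu key points\n", concise.size(), map.size());

    // A query pair 1 m apart is consistent with the surviving model pair.
    KeyPoint3D q1{2.f, 3.f, 1.f, 0.f}, q2{3.f, 3.f, 1.f, 0.f};
    std::printf("pairwise consistent: %d\n",
                pairwiseConsistent(q1, q2, concise[0], concise[1]));
}
```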

We have evaluated the system and compared it with other methods in experiments on both synthetic and real sequences. The results show that our system outperforms the other methods in its ability to relocalise successfully, and that it does so considerably faster. It is difficult to convey the performance without a live demo (please get in touch if you would like to visit), but hopefully the videos below give a reasonable idea.


Publications

RGBD Relocalisation Using Pairwise Geometry and Concise Key Point Sets, Shuda Li and Andrew Calway, ICRA 2015, open access pre-print.

[Source code]


Results


Demonstration of localisation and mapping


Demonstration of relocalisation - note the speed and the fact that the relocalised poses are far from the trajectory used to generate the map


Video submitted to ICRA 2016