Deep learning-based visual place recognition techniques, which have become the state of the art in recent years, do not generalize well to environments that are visually different from their training set. Thus, to achieve top performance, it is sometimes necessary to fine-tune the networks to the target environment. To this end, we propose a self-supervised domain calibration procedure that uses robust pose graph optimization from Simultaneous Localization and Mapping (SLAM) as the supervision signal, without requiring GPS or manual labeling. Moreover, we leverage this procedure to improve uncertainty estimation for place recognition matches, which is important in safety-critical applications. We show that our approach can improve the performance of a state-of-the-art technique on a target environment dissimilar from its training set, and that we can obtain uncertainty estimates. We believe this approach will help practitioners deploy robust place recognition solutions in real-world applications. Our code is publicly available at: https://github.com/MISTLab/vpr-calibration-and-uncertainty
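To make the self-supervision idea concrete, the following is a minimal, hypothetical sketch of how SLAM-optimized poses could replace GPS or manual labels: positions from a robust pose-graph-optimized trajectory are used to mine positive and negative image pairs, which then drive a standard triplet loss for fine-tuning a place recognition network. The function names, distance thresholds, and loss formulation here are illustrative assumptions, not the authors' exact procedure or API.

```python
# Hypothetical sketch: mining fine-tuning triplets from SLAM poses (assumed inputs,
# not the authors' implementation).
import numpy as np
import torch.nn.functional as F


def mine_triplets(poses, pos_radius=5.0, neg_radius=25.0):
    """Return (anchor, positive, negative) frame index triplets.

    poses: (N, 3) array of optimized x, y, z positions along the trajectory.
    Frames closer than pos_radius are treated as the same place; frames farther
    than neg_radius are treated as different places. Thresholds are assumptions.
    """
    dists = np.linalg.norm(poses[:, None, :] - poses[None, :, :], axis=-1)
    triplets = []
    for a in range(len(poses)):
        positives = np.where((dists[a] < pos_radius) & (np.arange(len(poses)) != a))[0]
        negatives = np.where(dists[a] > neg_radius)[0]
        if len(positives) and len(negatives):
            triplets.append((a, np.random.choice(positives), np.random.choice(negatives)))
    return triplets


def triplet_loss(anchor, positive, negative, margin=0.1):
    """Standard triplet margin loss on L2-normalized global descriptors."""
    anchor, positive, negative = (F.normalize(x, dim=-1) for x in (anchor, positive, negative))
    d_pos = (anchor - positive).pow(2).sum(-1)
    d_neg = (anchor - negative).pow(2).sum(-1)
    return F.relu(d_pos - d_neg + margin).mean()
```

In this sketch, the geometric consistency enforced by pose graph optimization is what makes the mined labels reliable without external ground truth; descriptor distances between the mined pairs could likewise be used to calibrate match uncertainty.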