In this paper we propose a framework for integrating map-based relocalization into online direct visual odometry. To achieve map-based relocalization for direct methods, we integrate image features into Direct Sparse Odometry (DSO) and rely on feature matching to associate the online visual odometry (VO) with a previously built map. The integration of the relocalization poses is threefold. Firstly, they are treated as pose priors and tightly integrated into the direct image alignment of the front-end tracking. Secondly, they are also tightly integrated into the back-end bundle adjustment. Thirdly, an online fusion module is proposed to combine relative VO poses and global relocalization poses in a pose graph, yielding keyframe-wise smooth and globally accurate pose estimates. We evaluate our method on two multi-weather datasets, showing the benefits of integrating different handcrafted and learned features and demonstrating promising improvements in camera tracking accuracy.
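The pose-graph fusion described above can be illustrated with a minimal sketch: relative VO constraints between consecutive keyframes are combined with sparse global relocalization priors in a single least-squares problem. The 1D pose parameterization, function names, and weights below are illustrative simplifications, not the paper's actual formulation (which operates on full 6-DoF poses).

```python
# Minimal 1D pose-graph fusion sketch: odometry edges (x[i+1] - x[i] = rel[i])
# plus sparse global relocalization priors (x[j] = glob[j]), solved jointly
# by linear least squares. Hypothetical names; for illustration only.
import numpy as np

def fuse_poses(rel, glob, w_rel=1.0, w_glob=1.0):
    """rel: list of n relative VO measurements; glob: dict {keyframe index: global pose}."""
    n = len(rel) + 1                      # number of keyframe poses
    rows, rhs = [], []
    for i, r in enumerate(rel):           # odometry edges: x[i+1] - x[i] = r
        row = np.zeros(n); row[i] = -w_rel; row[i + 1] = w_rel
        rows.append(row); rhs.append(w_rel * r)
    for j, g in glob.items():             # relocalization priors: x[j] = g
        row = np.zeros(n); row[j] = w_glob
        rows.append(row); rhs.append(w_glob * g)
    A, b = np.vstack(rows), np.asarray(rhs)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Drifting odometry (each step measured as 1.1, true step 1.0), anchored by
# strong global relocalization fixes at the first and last keyframes.
fused = fuse_poses(rel=[1.1, 1.1, 1.1, 1.1], glob={0: 0.0, 4: 4.0}, w_glob=10.0)
```

In this toy example the accumulated odometry drift (0.4 over four steps) is distributed evenly along the trajectory by the global anchors, so the fused poses stay close to 0, 1, 2, 3, 4: smooth relative motion, globally accurate endpoints.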