In this paper we propose a framework for integrating map-based relocalization into online direct visual odometry. To achieve map-based relocalization for direct methods, we integrate image features into Direct Sparse Odometry (DSO) and rely on feature matching to associate online visual odometry (VO) with a previously built map. The integration of the relocalization poses is threefold. Firstly, they are incorporated as pose priors in the direct image alignment of the front-end tracking. Secondly, they are tightly integrated into the back-end bundle adjustment. Thirdly, an online fusion module is further proposed to combine relative VO poses and global relocalization poses in a pose graph to estimate keyframe-wise smooth and globally accurate poses. We evaluate our method on two multi-weather datasets, showing the benefits of integrating different handcrafted and learned features and demonstrating promising improvements in camera tracking accuracy.
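The third component, combining relative VO poses with global relocalization poses in a pose graph, can be viewed as a weighted least-squares problem over the keyframe poses. The sketch below is a minimal 1-D illustration under assumed weights; the helper names (`fuse_pose_graph`, `w_rel`, `w_glob`) are illustrative and not from the paper, whose actual fusion operates on full 6-DoF keyframe poses.

```python
# Minimal 1-D sketch (not the paper's implementation) of pose-graph fusion:
# relative VO constraints between consecutive keyframes are combined with
# global relocalization priors in a weighted least-squares problem.
# Weights and function names are illustrative assumptions.

def fuse_pose_graph(vo_deltas, reloc_priors, w_rel=1.0, w_glob=100.0):
    """Fuse relative odometry increments with global pose priors.

    vo_deltas    -- list of relative increments d_i (ideally x_{i+1} - x_i)
    reloc_priors -- dict {keyframe index: global pose estimate g_j}
    Returns keyframe poses minimizing
        sum_i w_rel*(x_{i+1}-x_i-d_i)^2 + sum_j w_glob*(x_j-g_j)^2
    """
    n = len(vo_deltas) + 1
    H = [[0.0] * n for _ in range(n)]   # normal matrix A^T W A
    g = [0.0] * n                       # right-hand side A^T W m
    # Relative (VO) edges: residual x_{i+1} - x_i - d_i
    for i, d in enumerate(vo_deltas):
        H[i][i] += w_rel; H[i + 1][i + 1] += w_rel
        H[i][i + 1] -= w_rel; H[i + 1][i] -= w_rel
        g[i] -= w_rel * d; g[i + 1] += w_rel * d
    # Global (relocalization) priors: residual x_j - g_j
    for j, gj in reloc_priors.items():
        H[j][j] += w_glob
        g[j] += w_glob * gj
    return _solve(H, g)

def _solve(H, g):
    """Gaussian elimination with partial pivoting for the small dense system."""
    n = len(g)
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(H[r][col]))
        H[col], H[piv] = H[piv], H[col]
        g[col], g[piv] = g[piv], g[col]
        for r in range(col + 1, n):
            f = H[r][col] / H[col][col]
            for c in range(col, n):
                H[r][c] -= f * H[col][c]
            g[r] -= f * g[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (g[r] - sum(H[r][c] * x[c] for c in range(r + 1, n))) / H[r][r]
    return x

# Drifting VO (each unit step overestimated by 0.1) is corrected by two
# relocalization priors at the first and last keyframe.
fused = fuse_pose_graph([1.1, 1.1, 1.1, 1.1], {0: 0.0, 4: 4.0})
```

The global priors pin the trajectory endpoints near their map-based estimates, while the relative edges keep the fused poses keyframe-wise smooth, which mirrors the "smooth and globally accurate" objective stated above.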