Feature extraction plays an important role in visual localization. Unreliable features extracted from dynamic objects or repetitive regions interfere with feature matching and pose a great challenge to indoor localization. To address this problem, we propose a novel network, RaP-Net, which simultaneously predicts region-wise invariability and point-wise reliability, and then extracts features by considering both. We also introduce a new dataset, named OpenLORIS-Location, to train the proposed network. The dataset contains 1553 images from 93 indoor locations. Images of the same location exhibit various appearance changes, which help the model learn invariability in typical indoor scenes. Experimental results show that the proposed RaP-Net trained with the OpenLORIS-Location dataset achieves excellent performance in the feature matching task and significantly outperforms state-of-the-art feature algorithms in indoor localization. The RaP-Net code and dataset are available at https://github.com/ivipsourcecode/RaP-Net.
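The abstract states that features are extracted by considering both the region-wise invariability and the point-wise reliability. Below is a minimal sketch of one plausible way to combine the two predictions to score and select keypoints, assuming a multiplicative weighting of a dense reliability map by an upsampled region-level invariability map; the actual weighting used in RaP-Net may differ, and all names here are illustrative only.

```python
# Minimal sketch (assumption, not the official RaP-Net implementation):
# score each pixel by its point-wise reliability weighted by the
# invariability of the region it lies in, then keep the top-k points.
import numpy as np

def select_keypoints(reliability, invariability, top_k=500):
    """reliability   : (H, W) float array in [0, 1], per-pixel score.
       invariability : (h, w) float array in [0, 1], per-region score."""
    H, W = reliability.shape
    h, w = invariability.shape
    # Nearest-neighbour upsample the region-wise map to full resolution.
    rows = np.arange(H) * h // H
    cols = np.arange(W) * w // W
    region_full = invariability[rows[:, None], cols[None, :]]
    # Combined score: suppress reliable points that fall in unstable regions
    # (e.g. dynamic objects or repetitive textures).
    score = reliability * region_full
    # Keep the top-k highest-scoring pixel locations.
    flat = np.argpartition(score.ravel(), -top_k)[-top_k:]
    ys, xs = np.unravel_index(flat, score.shape)
    return np.stack([xs, ys], axis=1), score[ys, xs]

# Example usage with random maps standing in for network outputs.
rel = np.random.rand(480, 640)
inv = np.random.rand(30, 40)
kps, scores = select_keypoints(rel, inv, top_k=100)
```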