Keypoint matching is a pivotal component of many image-related applications such as image stitching and visual simultaneous localization and mapping (SLAM). Both handcrafted and recently emerged deep learning-based keypoint matching methods rely solely on keypoints and local features, overlooking other sensors available in these applications, such as the inertial measurement unit (IMU). In this paper, we demonstrate that the motion estimate obtained from IMU integration can be exploited as a spatial distribution prior on keypoint correspondences between images. To this end, we propose a probabilistic formulation of attention that naturally integrates this spatial distribution prior into an attentional graph neural network. With the assistance of the prior, the effort the network spends on modeling hidden features is reduced. Furthermore, we present a projection loss for the proposed keypoint matching network, which provides a smooth transition between matched and unmatched keypoints. Image matching experiments on visual SLAM datasets demonstrate the effectiveness and efficiency of the proposed method.
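As a rough illustration of the probabilistic view of attention described above (a minimal sketch, not the authors' exact formulation), an IMU-derived spatial prior can be folded into standard dot-product attention by adding its logarithm to the attention logits, so that the normalized weights behave like a posterior proportional to the feature-similarity likelihood times the prior. The function and variable names below (e.g., prior_weighted_attention, the prior matrix itself) are hypothetical.

```python
import numpy as np

def prior_weighted_attention(Q, K, V, prior, eps=1e-8):
    """Dot-product attention biased by a spatial matching prior.

    Q: (m, d) query descriptors; K, V: (n, d) key/value descriptors.
    prior: (m, n) nonnegative matrix encoding how plausible it is that
    keypoint i in image A matches keypoint j in image B according to the
    IMU-predicted motion (e.g., a Gaussian of the reprojection distance).
    """
    d = Q.shape[-1]
    logits = Q @ K.T / np.sqrt(d)            # feature-similarity "likelihood"
    logits = logits + np.log(prior + eps)    # fold in the spatial prior
    # softmax over keys: weights act like a posterior over candidate matches
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ V

# Toy usage with random descriptors and a random prior.
rng = np.random.default_rng(0)
m, n, d = 4, 5, 8
Q, K, V = rng.normal(size=(m, d)), rng.normal(size=(n, d)), rng.normal(size=(n, d))
prior = rng.uniform(0.01, 1.0, size=(m, n))
out = prior_weighted_attention(Q, K, V, prior)
print(out.shape)  # (4, 8)
```

In this sketch, keypoint pairs that the IMU-predicted motion deems implausible receive a large negative bias, so the attention mechanism can concentrate its capacity on geometrically consistent candidates rather than learning that constraint from visual features alone.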