Most image matching methods perform poorly when there are large scale differences between images. To address this problem, we first propose a scale-difference-aware image matching method (SDAIM) that reduces the scale difference within an image pair before local feature extraction by resizing both images according to an estimated scale ratio. Second, to estimate the scale ratio accurately, we propose a covisibility-attention-reinforced matching module (CVARM) and, based on it, design a novel neural network termed Scale-Net. The proposed CVARM emphasizes the covisible areas within the image pair and suppresses distraction from areas visible in only one image. Quantitative and qualitative experiments confirm that the proposed Scale-Net achieves higher scale ratio estimation accuracy and much better generalization ability than all existing scale ratio estimation methods. Further experiments on image matching and relative pose estimation tasks demonstrate that our SDAIM and Scale-Net substantially boost the performance of representative local features and state-of-the-art local feature matching methods.
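As a rough illustration of the SDAIM resizing step described above, the following minimal Python sketch resizes both images of a pair according to an estimated scale ratio before feature extraction. This is an assumed sketch, not the authors' implementation: the function name `sdaim_resize`, the symmetric split of the resize factor via a square root, and the interpolation choices are all assumptions for illustration.

```python
import cv2
import numpy as np

def sdaim_resize(img_a: np.ndarray, img_b: np.ndarray, scale_ratio: float):
    """Hypothetical sketch of SDAIM's pre-extraction resizing.

    `scale_ratio` is assumed to be the estimated ratio of the scale of
    img_a to that of img_b (e.g., as predicted by a network such as
    Scale-Net). Splitting the resize symmetrically by sqrt(scale_ratio)
    is an assumption; it balances resampling effects across both images
    instead of rescaling a single image by the full ratio.
    """
    factor = float(np.sqrt(scale_ratio))
    # Shrink the larger-scale image and enlarge the smaller-scale one so
    # that both end up at a comparable scale. INTER_AREA is preferred
    # for downsampling, INTER_LINEAR for upsampling.
    resized_a = cv2.resize(
        img_a, None, fx=1.0 / factor, fy=1.0 / factor,
        interpolation=cv2.INTER_AREA if factor > 1 else cv2.INTER_LINEAR)
    resized_b = cv2.resize(
        img_b, None, fx=factor, fy=factor,
        interpolation=cv2.INTER_LINEAR if factor > 1 else cv2.INTER_AREA)
    return resized_a, resized_b
```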