Local image feature matching, which aims to identify and establish correspondences between similar regions across image pairs, is a fundamental problem in computer vision. Most existing image matching approaches follow a one-to-one assignment principle and employ mutual nearest neighbor search to guarantee unique correspondences between local features across images. However, images captured under different conditions may exhibit large scale variations or viewpoint changes, so one-to-one assignment can cause ambiguous or missing correspondences in dense matching. In this paper, we introduce AdaMatcher, a novel detector-free local feature matching method that first correlates dense features through a lightweight feature interaction module and estimates the co-visible areas of the paired images, then performs a patch-level many-to-one assignment to predict match proposals, and finally refines them with a one-to-one refinement module. Extensive experiments show that AdaMatcher outperforms strong baselines and achieves state-of-the-art results on many downstream tasks. Moreover, the many-to-one assignment and one-to-one refinement module can serve as a refinement network for other matching methods, such as SuperGlue, to further boost their performance. Code will be available upon publication.
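To make the coarse-matching idea concrete, the snippet below is a minimal PyTorch sketch (not the authors' implementation) contrasting the standard mutual-nearest-neighbor one-to-one rule with the patch-level many-to-one assignment described above, applied to a toy similarity matrix. The function names, the confidence threshold `thr`, and the softmax normalization are illustrative assumptions.

```python
# Minimal sketch: one-to-one (mutual NN) vs. many-to-one patch assignment.
# All names and thresholds are illustrative, not the paper's implementation.
import torch

def mutual_nn_matches(sim: torch.Tensor, thr: float = 0.2) -> torch.Tensor:
    """One-to-one: keep (i, j) only if patches i and j are each other's nearest neighbor."""
    nn12 = sim.argmax(dim=1)                  # best patch j in image B for every patch i in A
    nn21 = sim.argmax(dim=0)                  # best patch i in image A for every patch j in B
    ids_a = torch.arange(sim.shape[0])
    mutual = nn21[nn12] == ids_a              # cycle-consistency check
    keep = mutual & (sim[ids_a, nn12] > thr)
    return torch.stack([ids_a[keep], nn12[keep]], dim=1)

def many_to_one_matches(sim: torch.Tensor, thr: float = 0.2) -> torch.Tensor:
    """Many-to-one: every confident patch in A keeps its best patch in B,
    even if several patches in A select the same target (e.g. under large scale change)."""
    nn12 = sim.argmax(dim=1)
    ids_a = torch.arange(sim.shape[0])
    keep = sim[ids_a, nn12] > thr
    return torch.stack([ids_a[keep], nn12[keep]], dim=1)

# Toy similarity matrix between 4 coarse patches of image A and 3 of image B.
sim = torch.softmax(torch.randn(4, 3), dim=1)
print(mutual_nn_matches(sim))    # at most min(4, 3) matches, unique on both sides
print(many_to_one_matches(sim))  # duplicates on the B side are allowed
```

In the method described above, such many-to-one proposals would subsequently be passed to a one-to-one refinement stage; the sketch only illustrates why the relaxed assignment can recover matches that the mutual-nearest-neighbor rule discards.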