We propose a novel cost aggregation network, called Cost Aggregation Transformers (CATs), to find dense correspondences between semantically similar images under the additional challenges posed by large intra-class appearance and geometric variations. Cost aggregation is a highly important process in matching tasks, as the matching accuracy depends on the quality of its output. Unlike hand-crafted methods, which lack robustness to severe deformations, or CNN-based methods, which inherit the limited receptive fields of CNNs and thus fail to discriminate incorrect matches, CATs explore global consensus within the initial correlation map through architectural designs that allow us to fully leverage the self-attention mechanism. Specifically, we include appearance affinity modeling to aid the cost aggregation process by disambiguating the noisy initial correlation maps, and propose multi-level aggregation to efficiently capture different semantics from hierarchical feature representations. We then combine these with a swapping self-attention technique and residual connections, not only to enforce consistent matching but also to ease the learning process, which we find yields a clear performance boost. We conduct experiments demonstrating the effectiveness of the proposed model over the latest methods and provide extensive ablation studies. Code and trained models are available at~\url{https://github.com/SunghwanHong/CATs}.
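To make the aggregation idea concrete, below is a minimal NumPy sketch, under our own simplifying assumptions, of the three ingredients the abstract names: cost tokens augmented with appearance (feature) embeddings, self-attention over those tokens, a swapping step that aggregates along the other axis of the correlation map, and residual connections. The shapes, random projection weights, and helper names (`aggregate_cost`, `feat_s`, `feat_t`) are illustrative placeholders, not the actual CATs implementation, which uses multi-level transformer blocks over hierarchical features.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, Wq, Wk, Wv):
    # single-head self-attention over a set of tokens
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]), axis=-1)
    return attn @ v

def aggregate_cost(corr, feat_s, feat_t, rng):
    """Sketch of transformer-based cost aggregation.

    corr:   (Ns, Nt) initial correlation between source and target positions
    feat_s: (Ns, d)  source appearance embeddings
    feat_t: (Nt, d)  target appearance embeddings
    """
    Ns, Nt = corr.shape
    d = feat_s.shape[1]

    # appearance affinity: concatenate source features to each cost row
    dim = Nt + d
    Wq, Wk, Wv = (rng.standard_normal((dim, dim)) * 0.02 for _ in range(3))
    tokens = np.concatenate([corr, feat_s], axis=1)           # (Ns, Nt+d)
    out = self_attention(tokens, Wq, Wk, Wv)[:, :Nt] + corr   # residual

    # swapping self-attention: transpose the cost and aggregate the other axis
    dim_t = Ns + d
    Wq2, Wk2, Wv2 = (rng.standard_normal((dim_t, dim_t)) * 0.02 for _ in range(3))
    tokens_t = np.concatenate([out.T, feat_t], axis=1)        # (Nt, Ns+d)
    out_t = self_attention(tokens_t, Wq2, Wk2, Wv2)[:, :Ns] + out.T

    return out_t.T                                            # refined (Ns, Nt) cost
```

Running both attention passes (source-then-target) is what enforces the consistent, bidirectional matching the abstract refers to, while the residual additions keep the refined cost close to the initial correlation early in training.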