We explore semantic correspondence estimation through the lens of unsupervised learning. We thoroughly evaluate several recently proposed unsupervised methods across multiple challenging datasets using a standardized evaluation protocol, varying factors such as the backbone architecture, the pre-training strategy, and the pre-training and finetuning datasets. To better understand the failure modes of these methods and to provide a clearer path for improvement, we introduce a new diagnostic framework along with a new performance metric that is better suited to the semantic matching task. Finally, we propose a new unsupervised correspondence approach that leverages the strength of pre-trained features while encouraging better matches during training. This results in significantly better matching performance compared to current state-of-the-art methods.