Instance contrast for unsupervised representation learning has achieved great success in recent years. In this work, we explore the idea of instance contrastive learning in unsupervised domain adaptation (UDA) and propose a novel Category Contrast technique (CaCo) that introduces semantic priors on top of instance discrimination for visual UDA tasks. By considering instance contrastive learning as a dictionary look-up operation, we construct a semantics-aware dictionary with samples from both source and target domains, where each target sample is assigned a (pseudo) category label based on the category priors of source samples. This enables category contrastive learning (between target queries and the category-level dictionary) for category-discriminative yet domain-invariant feature representations: samples of the same category (from either source or target domain) are pulled closer while those of different categories are pushed apart simultaneously. Extensive UDA experiments on multiple visual tasks (e.g., segmentation, classification, and detection) show that a simple implementation of CaCo achieves superior performance compared with highly optimized state-of-the-art methods. Analytically and empirically, the experiments also demonstrate that CaCo is complementary to existing UDA methods and generalizable to other learning setups such as semi-supervised learning, unsupervised model adaptation, etc.
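The category contrast idea above can be sketched as an InfoNCE-style loss in which the dictionary holds one key per category (built from source and pseudo-labelled target samples) and each target query is pulled toward its assigned category key and pushed from the rest. This is a minimal NumPy sketch under those assumptions, not the authors' implementation; all function and variable names are illustrative.

```python
import numpy as np

def caco_loss(queries, labels, dictionary, tau=0.07):
    """Category-contrastive loss (illustrative sketch).

    queries:    (B, D) target-domain features
    labels:     (B,)   pseudo category labels assigned via source priors
    dictionary: (C, D) one key per category, aggregated from source and
                pseudo-labelled target samples (hypothetical construction)
    tau:        temperature for the softmax over category similarities
    """
    # Cosine similarities between L2-normalized queries and category keys
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    k = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    logits = q @ k.T / tau                       # (B, C)
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Pull each query toward its own category key, push from all others
    return -log_probs[np.arange(len(labels)), labels].mean()

# Toy usage with random features: C=5 categories, D=16 dims, B=8 queries
rng = np.random.default_rng(0)
dictionary = rng.normal(size=(5, 16))
queries = rng.normal(size=(8, 16))
labels = rng.integers(0, 5, size=8)
loss = caco_loss(queries, labels, dictionary)
```

Because both source and target features contribute to the same category key, minimizing this loss simultaneously encourages category discrimination and domain invariance.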