Unsupervised domain adaptation aims to train a model on a labeled source domain to make predictions on an unlabeled target domain when the data distributions of the two domains differ. It therefore needs to reduce the distribution discrepancy between the two domains to improve the model's generalization ability. Existing methods tend either to align the two domains directly at the domain level, or to perform class-level alignment based on deep features. The former ignores the relationships among the classes in the two domains, which may cause serious negative transfer; the latter alleviates this by introducing pseudo-labels for the target domain, but does not consider the importance of performing class-level alignment on shallow feature representations. In this paper, we build on the class-level alignment approach. The proposed method dramatically reduces the discrepancy between the two domains by aligning multi-level features. Under the assumption that the two domains share the same label space, class-level alignment is implemented through the proposed Multi-Level Feature Contrastive Networks (MLFCNet). In practice, since the class labels of target-domain samples are unavailable, we iteratively apply a clustering algorithm to obtain pseudo-labels, and then minimize the Multi-Level Contrastive Discrepancy (MLCD) loss to achieve more accurate class-level alignment. Experiments on three real-world benchmarks, ImageCLEF-DA, Office-31, and Office-Home, demonstrate that MLFCNet compares favorably against existing state-of-the-art domain adaptation methods.
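The pseudo-labeling and class-level alignment loop summarized above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the plain k-means clustering, the function names, and the centroid-based discrepancy (pulling same-class source/target centroids together while pushing different-class centroids apart) are simplifying assumptions standing in for the full MLCD loss over multi-level features.

```python
import numpy as np

def kmeans_pseudo_labels(features, k, n_iter=20, seed=0):
    """Assign pseudo-labels to unlabeled target features with plain k-means.
    (Illustrative stand-in for the clustering step; any clustering algorithm
    that yields per-sample class assignments would fit the same role.)"""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(n_iter):
        # Distance from every sample to every center: (N, k)
        dists = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = features[labels == j].mean(axis=0)
    return labels

def contrastive_discrepancy(src_feat, src_lab, tgt_feat, tgt_lab):
    """Toy class-level contrastive discrepancy: mean distance between
    same-class source/target centroids minus mean distance between
    different-class centroid pairs. Lower values mean same classes are
    closer across domains while different classes stay separated.
    Assumes pseudo-label indices are already matched to source classes."""
    classes = np.unique(src_lab)
    src_c = np.stack([src_feat[src_lab == c].mean(0) for c in classes])
    tgt_c = np.stack([tgt_feat[tgt_lab == c].mean(0) for c in classes])
    same = np.linalg.norm(src_c - tgt_c, axis=1).mean()
    cross = np.linalg.norm(src_c[:, None] - tgt_c[None], axis=2)
    diff = cross[~np.eye(len(classes), dtype=bool)].mean()
    return same - diff
```

In the actual method this loss would be computed at multiple feature levels of the network (shallow and deep) and minimized jointly with the classification loss, re-clustering the target features each iteration as they move.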