Domain adaptation (DA) aims to transfer knowledge from a label-rich source domain to a related but label-scarce target domain. The conventional DA strategy is to align the feature distributions of the two domains. Recently, a growing number of studies have turned to self-training or other semi-supervised algorithms to exploit the data structure of the target domain. However, most of them rely heavily on high-confidence samples to build reliable pseudo-labels, prototypes, or cluster centers. Representing the target data structure in this way overlooks the large number of low-confidence samples, resulting in sub-optimal transferability that is biased towards samples similar to the source domain. To overcome this issue, we propose a novel contrastive learning method that exploits low-confidence samples, encouraging the model to make use of the target data structure through instance discrimination. Specifically, we create positive and negative pairs using only low-confidence samples, and then re-represent the original features with the classifier weights rather than using them directly, which better encodes the task-specific semantic information. Furthermore, we combine the proposed contrastive loss with cross-domain mixup, so that the domain gap can be bridged through contrastive learning on intermediate representations across domains. We evaluate the proposed method in both unsupervised and semi-supervised DA settings, and extensive experimental results on benchmark datasets show that our method is effective and achieves state-of-the-art performance. The code is available at https://github.com/zhyx12/MixLRCo.
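To make the two core ideas concrete, below is a minimal PyTorch sketch of (a) selecting low-confidence target samples and contrasting them after re-representing features with the classifier weights, and (b) cross-domain mixup of inputs. All names and hyperparameters (`classifier_rerepresent`, `threshold`, `tau`, `alpha`, the softmax temperature) are illustrative assumptions, not the authors' actual implementation; consult the linked repository for the real code.

```python
# A hypothetical sketch of the abstract's ideas, not the official MixLRCo code.
import torch
import torch.nn.functional as F

def classifier_rerepresent(features, classifier_weight, temp=0.05):
    """Re-represent features as normalized similarities to the classifier's
    class-weight vectors, encoding task-specific semantic information."""
    f = F.normalize(features, dim=1)
    w = F.normalize(classifier_weight, dim=1)           # [num_classes, dim]
    return F.softmax(f @ w.t() / temp, dim=1)           # [batch, num_classes]

def low_confidence_contrastive_loss(feat_q, feat_k, logits, classifier_weight,
                                    threshold=0.8, tau=0.07):
    """InfoNCE over low-confidence target samples only.
    feat_q / feat_k are features of two augmented views of the same batch;
    the positive pair is the two views, negatives are other kept samples."""
    conf = F.softmax(logits, dim=1).max(dim=1).values
    mask = conf < threshold                             # keep low-confidence samples
    if mask.sum() < 2:                                  # need at least one negative
        return feat_q.new_zeros(())
    q = classifier_rerepresent(feat_q[mask], classifier_weight)
    k = classifier_rerepresent(feat_k[mask], classifier_weight)
    q, k = F.normalize(q, dim=1), F.normalize(k, dim=1)
    sim = q @ k.t() / tau                               # [n, n] similarity matrix
    labels = torch.arange(sim.size(0), device=sim.device)
    return F.cross_entropy(sim, labels)                 # positives on the diagonal

def cross_domain_mixup(x_src, x_tgt, alpha=1.0):
    """Mix source and target inputs to create intermediate samples whose
    representations help bridge the domain gap under the contrastive loss."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    n = min(x_src.size(0), x_tgt.size(0))
    return lam * x_src[:n] + (1 - lam) * x_tgt[:n], lam
```

In a training loop, one would compute the mixed batch with `cross_domain_mixup`, extract features for two augmented views, and add `low_confidence_contrastive_loss` to the usual supervised source loss; the exact weighting and augmentation strategy are design choices left unspecified by the abstract.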