Unsupervised Domain Adaptation (UDA) aims to align the labeled source distribution with the unlabeled target distribution to obtain domain-invariant predictive models. However, well-known UDA approaches do not generalize well to Semi-Supervised Domain Adaptation (SSDA) scenarios, where a few labeled samples from the target domain are available. In this paper, we propose a simple Contrastive Learning framework for semi-supervised Domain Adaptation (CLDA) that attempts to bridge the intra-domain gap between the labeled and unlabeled target distributions and the inter-domain gap between the source and unlabeled target distributions in SSDA. We employ class-wise contrastive learning to reduce the inter-domain gap, and instance-level contrastive alignment between original (input) and strongly augmented unlabeled target images to minimize the intra-domain discrepancy. We show empirically that these two modules complement each other to achieve superior performance. Experiments on three well-known domain adaptation benchmark datasets, namely DomainNet, Office-Home, and Office-31, demonstrate the effectiveness of our approach. CLDA achieves state-of-the-art results on all of these datasets.
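The abstract names two contrastive terms: a class-wise loss that aligns source and target at the category level, and an instance-level loss between two views of each unlabeled target image. Below is a minimal PyTorch sketch of how such terms could be written; the function names, the batch-centroid construction, and the temperature value are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def instance_contrastive_loss(z_orig, z_strong, temperature=0.07):
    """InfoNCE-style loss between two views of the same unlabeled target
    batch (original vs. strongly augmented). The matching row is the
    positive; all other rows in the batch act as negatives."""
    z_orig = F.normalize(z_orig, dim=1)
    z_strong = F.normalize(z_strong, dim=1)
    logits = z_orig @ z_strong.t() / temperature          # (B, B) similarities
    targets = torch.arange(z_orig.size(0), device=z_orig.device)
    return F.cross_entropy(logits, targets)

def classwise_contrastive_loss(f_src, y_src, f_tgt, y_tgt_pseudo,
                               num_classes, temperature=0.07):
    """Contrastive alignment between per-class centroids of source
    features and pseudo-labeled target features, computed within a
    batch. Same-class centroids across domains are positives."""
    def centroids(feats, labels):
        # one-hot (B, C) -> mean feature per class present in the batch
        onehot = F.one_hot(labels, num_classes).float()   # (B, C)
        counts = onehot.sum(0).clamp(min=1).unsqueeze(1)  # (C, 1)
        return F.normalize(onehot.t() @ feats / counts, dim=1)

    c_src = centroids(f_src, y_src)                       # (C, D)
    c_tgt = centroids(f_tgt, y_tgt_pseudo)                # (C, D)
    # classes absent from the batch yield zero centroids; mask them out
    mask = (c_src.norm(dim=1) > 0) & (c_tgt.norm(dim=1) > 0)
    logits = (c_src @ c_tgt.t() / temperature)[mask][:, mask]
    n = int(mask.sum())
    targets = torch.arange(n, device=f_src.device)
    return F.cross_entropy(logits, targets)
```

Note that both terms share the same InfoNCE form and differ only in what is contrasted: individual instances for the intra-domain term, and per-class centroids across domains for the inter-domain term, which is consistent with the abstract's claim that the two modules are complementary.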