In few-shot unsupervised domain adaptation (FS-UDA), most existing methods follow few-shot learning (FSL) methods in leveraging low-level local features (learned by conventional convolutional models, e.g., ResNet) for classification. However, the goals of FS-UDA and FSL are related yet distinct, since FS-UDA aims to classify samples in the target domain rather than the source domain. We found that local features are insufficient for FS-UDA: they can introduce noise or bias into classification and cannot be used to effectively align the two domains. To address these issues, we aim to refine the local features so that they are more discriminative and more relevant to classification. We therefore propose a novel task-specific semantic feature learning method (TSECS) for FS-UDA. TSECS learns high-level semantic features for image-to-class similarity measurement. Based on these high-level features, we design a cross-domain self-training strategy that leverages the few labeled samples in the source domain to build a classifier in the target domain. In addition, we minimize the KL divergence between the high-level feature distributions of the source and target domains to reduce the distance between samples from the two domains. Extensive experiments on DomainNet show that the proposed method outperforms SOTA FS-UDA methods by a large margin (i.e., 10%).
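The following is a minimal sketch, not the authors' implementation, of the two ingredients named above: image-to-class similarity computed from high-level semantic features, and a KL-divergence term that pulls the source and target class-probability distributions together. All tensor shapes, the prototype-based similarity, and the batch-averaged KL formulation are illustrative assumptions.

```python
# Minimal sketch (assumed formulation, not the paper's code):
# (1) image-to-class similarity from high-level semantic features
# (2) KL divergence between source/target class-probability distributions
import torch
import torch.nn.functional as F

def image_to_class_similarity(features, class_prototypes):
    """Cosine similarity between image features and per-class prototypes.

    features:          (N, D) high-level semantic features of N images
    class_prototypes:  (C, D) one prototype per class
    returns:           (N, C) similarity logits
    """
    features = F.normalize(features, dim=-1)
    class_prototypes = F.normalize(class_prototypes, dim=-1)
    return features @ class_prototypes.t()

def domain_alignment_kl(source_logits, target_logits, temperature=1.0):
    """KL divergence between the batch-averaged class-probability
    distributions of the source and target domains (assumed proxy
    for aligning the two feature distributions)."""
    p_src = F.softmax(source_logits / temperature, dim=-1).mean(dim=0)
    p_tgt = F.softmax(target_logits / temperature, dim=-1).mean(dim=0)
    # F.kl_div expects log-probabilities as its first argument;
    # this computes KL(p_tgt || p_src)
    return F.kl_div(p_src.log(), p_tgt, reduction="sum")

# Toy usage with random tensors standing in for backbone outputs.
src_feat, tgt_feat = torch.randn(32, 512), torch.randn(32, 512)
prototypes = torch.randn(5, 512)          # e.g., a 5-way episode
src_logits = image_to_class_similarity(src_feat, prototypes)
tgt_logits = image_to_class_similarity(tgt_feat, prototypes)
loss_align = domain_alignment_kl(src_logits, tgt_logits)
```

In a training loop, such an alignment term would typically be added to the classification loss on the labeled source samples; the cross-domain self-training step described in the abstract is not shown here.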