Prior feature transformation based approaches to Unsupervised Domain Adaptation (UDA) employ deep features extracted by pre-trained models without fine-tuning them on the source or target data of a specific domain adaptation task. In contrast, end-to-end learning based approaches optimise the pre-trained backbones and customised adaptation modules simultaneously to learn domain-invariant features for UDA. In this work, we explore the potential of combining fine-tuned features with feature transformation based UDA methods for improved domain adaptation performance. Specifically, we integrate prevalent progressive pseudo-labelling techniques into the fine-tuning framework to extract fine-tuned features, which are subsequently used by a state-of-the-art feature transformation based domain adaptation method, SPL (Selective Pseudo-Labeling). Thorough experiments with multiple deep models, including ResNet-50/101 and DeiT-small/base, demonstrate that the combination of fine-tuned features and SPL achieves state-of-the-art performance on several benchmark datasets.
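To make the progressive pseudo-labelling idea concrete, the following is a minimal sketch of one selection step: the most confident target predictions are kept as pseudo-labels, and the kept fraction grows with the training round so that easy samples are labelled first. All function and variable names here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def progressive_pseudo_labels(target_probs, round_idx, total_rounds):
    """Select pseudo-labels for the most confident target samples.

    The selected fraction grows linearly with the training round, so
    confident samples are pseudo-labelled first and harder ones are
    added in later rounds. Hypothetical sketch, not the paper's API.
    """
    confidences = target_probs.max(axis=1)       # per-sample max class probability
    labels = target_probs.argmax(axis=1)         # predicted class per sample
    frac = (round_idx + 1) / total_rounds        # fraction to keep this round
    k = max(1, int(frac * len(confidences)))
    keep = np.argsort(-confidences)[:k]          # indices of top-k confident samples
    return keep, labels[keep]

# toy example: 4 target samples, 3 classes, first of two rounds
probs = np.array([[0.9, 0.05, 0.05],
                  [0.4, 0.4, 0.2],
                  [0.1, 0.8, 0.1],
                  [0.3, 0.3, 0.4]])
idx, lab = progressive_pseudo_labels(probs, round_idx=0, total_rounds=2)
```

In a full pipeline, the selected `(idx, lab)` pairs would be fed back as supervision for the next fine-tuning round before the selection fraction is increased.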