Most unsupervised domain adaptation (UDA) methods assume that labeled source images are available during model adaptation. However, this requirement is often infeasible owing to confidentiality issues or memory constraints on mobile devices. Some recently developed approaches do not require source images during adaptation, but they show limited performance on perturbed images. To address these problems, we propose a novel source-free UDA method that uses only a pre-trained source model and unlabeled target images. Our method captures aleatoric uncertainty by incorporating data augmentation and trains the feature generator with two consistency objectives. The feature generator is encouraged to learn consistent visual features away from the decision boundaries of the head classifier, so the adapted model becomes more robust to image perturbations. Inspired by self-supervised learning, our method promotes inter-space alignment between the prediction space and the feature space while enforcing intra-space consistency within the feature space to reduce the domain gap between the source and target domains. We also consider epistemic uncertainty to further boost adaptation performance. Extensive experiments on popular UDA benchmark datasets demonstrate that the proposed source-free method is comparable or even superior to vanilla UDA methods. Moreover, the adapted models produce more robust results when input images are perturbed.
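To make the two consistency objectives concrete, the following is a minimal PyTorch-style sketch of a single adaptation step on an unlabeled target batch. It is only an illustration under stated assumptions: the names `g` (feature generator) and `h` (classifier head), the weak/strong augmentation pair, the cosine-similarity form of the intra-space loss, the similarity-matrix form of the inter-space alignment loss, and the equal loss weighting are all choices made here for clarity, not the paper's exact formulation; the epistemic-uncertainty component is omitted.

```python
import torch
import torch.nn.functional as F

def consistency_step(g, h, x_weak, x_strong, optimizer):
    """One source-free adaptation step on an unlabeled target batch.

    g: feature generator being adapted; h: pre-trained classifier head,
    kept frozen (only g's parameters are in `optimizer`), consistent with
    training the feature generator against fixed decision boundaries.
    x_weak / x_strong: two augmented views of the same target images
    (the specific augmentation pair is an assumption).
    """
    z_w, z_s = g(x_weak), g(x_strong)                    # features of the two views
    p_w = F.softmax(h(z_w), dim=1)                       # predictions of each view
    p_s = F.softmax(h(z_s), dim=1)

    # Intra-space consistency: features of the two views should agree
    # (cosine similarity is an illustrative choice, not the paper's exact loss).
    loss_intra = 1.0 - F.cosine_similarity(z_w, z_s, dim=1).mean()

    # Inter-space alignment between prediction space and feature space,
    # sketched here as matching the pairwise similarity structure of the
    # batch in the two spaces (again an assumption).
    sim_feat = F.normalize(z_w, dim=1) @ F.normalize(z_s, dim=1).t()
    sim_pred = p_w @ p_s.t()
    loss_inter = F.mse_loss(sim_feat, sim_pred)

    loss = loss_intra + loss_inter                       # equal weighting assumed
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a full adaptation loop, this step would simply be repeated over mini-batches of unlabeled target images, with `optimizer` constructed over the feature generator's parameters only.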