Most unsupervised domain adaptation (UDA) methods assume that labeled source images are available during model adaptation. However, this assumption is often infeasible owing to confidentiality issues or memory constraints on mobile devices. To address these problems, we propose a simple yet effective source-free UDA method that uses only a pre-trained source model and unlabeled target images. Our method captures aleatoric uncertainty by incorporating data augmentation and trains the feature generator with two consistency objectives. The feature generator is encouraged to learn consistent visual features away from the decision boundaries of the head classifier. Inspired by self-supervised learning, our method promotes inter-space alignment between the prediction space and the feature space, and enforces intra-space consistency within the feature space, to reduce the domain gap between the source and target domains. We also consider epistemic uncertainty to further boost adaptation performance. Extensive experiments on popular UDA benchmarks demonstrate that our approach is comparable to, or even outperforms, vanilla UDA methods, despite using neither source images nor network modifications.
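To make the two consistency objectives concrete, the following is a minimal sketch in PyTorch of one plausible reading of the abstract: two augmented views of an unlabeled target batch are passed through the adapted feature generator and the fixed source-trained head classifier, an intra-space term encourages the two views' features to agree, and an inter-space term aligns the pairwise similarity structure of the feature space with that of the prediction space. The function name, the weak/strong augmentation split, and the specific cosine and MSE formulations are assumptions for illustration, not the paper's exact losses.

```python
import torch
import torch.nn.functional as F

def consistency_losses(feat_gen, classifier, x_weak, x_strong):
    """Hypothetical sketch of the two consistency objectives.

    feat_gen   : feature generator being adapted on the target domain
    classifier : fixed, source-trained head classifier
    x_weak / x_strong : two augmented views of the same unlabeled target batch
    """
    # Features of the two augmented views (augmentation models aleatoric uncertainty).
    f_w = feat_gen(x_weak)    # shape (B, D)
    f_s = feat_gen(x_strong)  # shape (B, D)

    # Class predictions from the frozen head classifier.
    p_w = F.softmax(classifier(f_w), dim=1)  # shape (B, C)
    p_s = F.softmax(classifier(f_s), dim=1)  # shape (B, C)

    # Intra-space consistency: features of the two views should agree.
    intra = 1.0 - F.cosine_similarity(f_w, f_s, dim=1).mean()

    # Inter-space alignment: the pairwise similarity structure in the feature
    # space should match the similarity structure in the prediction space.
    sim_feat = F.normalize(f_w, dim=1) @ F.normalize(f_s, dim=1).t()  # (B, B)
    sim_pred = p_w @ p_s.t()                                          # (B, B)
    inter = F.mse_loss(sim_feat, sim_pred)

    return inter, intra
```

In this reading, only the feature generator receives gradients from the summed losses, which matches the abstract's emphasis on keeping features away from the fixed classifier's decision boundaries.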