Unsupervised domain adaptation (UDA) can tackle the challenge that convolutional neural network (CNN)-based approaches for semantic segmentation rely heavily on pixel-level annotated data, which is labor-intensive to obtain. However, existing UDA approaches inevitably require full access to the source dataset to reduce the gap between the source and target domains during model adaptation, which is impractical in real-world scenarios where the source dataset is private and therefore cannot be released along with the well-trained source model. To cope with this issue, we propose a source-free domain adaptation framework for semantic segmentation, namely SFDA, in which only a well-trained source model and an unlabeled target-domain dataset are available for adaptation. SFDA not only recovers and preserves source-domain knowledge from the source model via knowledge transfer during model adaptation, but also distills valuable information from the target domain for self-supervised learning. Pixel- and patch-level optimization objectives tailored for semantic segmentation are seamlessly integrated into the framework. Extensive experimental results on benchmark datasets demonstrate the effectiveness of our framework against existing UDA approaches that rely on source data.
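To make the source-free setting concrete, the following is a minimal sketch, assuming a PyTorch segmentation model that outputs per-pixel class logits, of how a frozen source model could supply pixel-level pseudo-labels for self-supervised fine-tuning on unlabeled target images. The function and parameter names (`adapt_source_free`, `conf_thresh`, etc.) are illustrative assumptions and do not reflect the authors' actual implementation, which additionally involves source-knowledge transfer and patch-level objectives.

```python
# Minimal sketch (illustrative, not the authors' implementation) of source-free
# adaptation for segmentation: the frozen source model produces pixel-level
# pseudo-labels on unlabeled target images, and a copy of it is fine-tuned
# on the confident pixels only.
import copy
import torch
import torch.nn.functional as F

def adapt_source_free(source_model, target_loader, conf_thresh=0.9,
                      lr=1e-4, epochs=1, device="cuda"):
    source_model = source_model.to(device).eval()        # frozen, well-trained source model
    target_model = copy.deepcopy(source_model).train()   # model being adapted to the target domain
    optimizer = torch.optim.Adam(target_model.parameters(), lr=lr)

    for _ in range(epochs):
        for images in target_loader:                      # unlabeled target-domain images
            images = images.to(device)
            with torch.no_grad():
                probs = F.softmax(source_model(images), dim=1)   # B x C x H x W class probabilities
                conf, pseudo = probs.max(dim=1)                  # pixel-level pseudo-labels
                pseudo[conf < conf_thresh] = 255                 # mark low-confidence pixels as ignored

            logits = target_model(images)
            loss = F.cross_entropy(logits, pseudo, ignore_index=255)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return target_model
```

In this sketch, no source images or labels are ever accessed; only the source model's predictions on target data drive the adaptation, which is the defining constraint of the source-free setting described above.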