Efforts to release large-scale datasets may be compromised by privacy and intellectual property considerations. A feasible alternative is to release pre-trained models instead. While these models are strong on their original task (source domain), their performance may degrade significantly when deployed directly in a new environment (target domain), which, under realistic settings, may not provide labels for training. Domain adaptation (DA) is a known solution to the domain gap problem, but it usually requires labeled source data. In this paper, we study the problem of source-free domain adaptation (SFDA), whose distinctive feature is that the source domain only provides a pre-trained model, but no source data. Being source-free adds significant challenges to DA, especially since the target dataset is unlabeled. To solve the SFDA problem, we propose an image translation approach that transfers the style of target images to that of unseen source images. To this end, we align the batch-wise feature statistics of generated images to those stored in the batch normalization layers of the pre-trained model. Compared with directly classifying target images, higher accuracy is obtained with these style-transferred images using the pre-trained model. On several image classification datasets, we show that the above-mentioned improvements are consistent and statistically significant.
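The following is a minimal sketch (not the paper's official code) of the batch normalization statistics alignment idea described above, assuming a PyTorch CNN with BatchNorm2d layers as the frozen pre-trained source model and a separate image-translation generator producing `generated_images`. It computes the L2 distance between the batch-wise feature statistics of the generated images and the running statistics stored in each BN layer.

```python
import torch
import torch.nn as nn


def bn_alignment_loss(model: nn.Module, generated_images: torch.Tensor) -> torch.Tensor:
    """Sum of L2 distances between the batch statistics of the features
    entering each BatchNorm2d layer and the running mean/variance stored
    in that layer (a sketch of the alignment objective described above)."""
    losses = []
    hooks = []

    def make_hook(bn: nn.BatchNorm2d):
        def hook(module, inputs, output):
            x = inputs[0]                               # features fed to this BN layer
            mean = x.mean(dim=(0, 2, 3))                # batch-wise channel mean
            var = x.var(dim=(0, 2, 3), unbiased=False)  # batch-wise channel variance
            losses.append(((mean - bn.running_mean) ** 2).sum()
                          + ((var - bn.running_var) ** 2).sum())
        return hook

    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            hooks.append(m.register_forward_hook(make_hook(m)))

    model.eval()                 # keep the stored running statistics fixed
    _ = model(generated_images)  # forward pass populates `losses` via the hooks;
                                 # gradients flow back to the generator, not the frozen model
    for h in hooks:
        h.remove()
    return torch.stack(losses).sum()
```

In a training loop, this loss would be minimized with respect to the generator's parameters only, so the generated images acquire feature statistics resembling those of the unseen source data while the pre-trained classifier remains unchanged.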