Pre-training has enabled state-of-the-art results on many tasks. Despite its recognized contribution to generalization, we observe in this study that pre-training also transfers adversarial non-robustness from the pre-trained model to the fine-tuned model on downstream tasks. Using image classification as an example, we first conduct experiments on various datasets and network backbones to uncover the adversarial non-robustness of fine-tuned models. Further analysis examines the knowledge learned by the fine-tuned model and the standard model, and reveals that the cause of this non-robustness is the non-robust features transferred from the pre-trained model. Finally, we analyze the feature-learning preferences of the pre-trained model, explore the factors that influence robustness, and introduce a simple robust pre-training solution.
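To make the evaluation setup concrete, below is a minimal sketch (not the paper's code) of how adversarial non-robustness of a fine-tuned model is typically uncovered: take a model whose backbone is inherited from pre-training, then measure its accuracy under an L-infinity PGD attack. The backbone (ResNet-18 with ImageNet weights), downstream dataset (CIFAR-10), and attack hyper-parameters here are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torchvision
from torchvision import datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Fine-tuned model: ImageNet-pre-trained backbone with a new 10-class head.
# We only sketch the evaluation, assuming fine-tuning has already been done.
model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 10)
model = model.to(device).eval()

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Standard L-infinity PGD: signed-gradient steps projected to the eps-ball."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

loader = torch.utils.data.DataLoader(
    datasets.CIFAR10("data", train=False, download=True,
                     transform=transforms.ToTensor()),
    batch_size=128)

clean_correct = adv_correct = total = 0
for x, y in loader:
    x, y = x.to(device), y.to(device)
    with torch.no_grad():
        clean_correct += (model(x).argmax(1) == y).sum().item()
    x_adv = pgd_attack(model, x, y)
    with torch.no_grad():
        adv_correct += (model(x_adv).argmax(1) == y).sum().item()
    total += y.numel()

print(f"clean acc: {clean_correct / total:.3f}  "
      f"robust acc: {adv_correct / total:.3f}")
```

A large gap between clean and robust accuracy in this setting is the adversarial non-robustness the study attributes to non-robust features inherited from pre-training.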