Deep learning models are challenged by the distribution shift between training data and test data. Recently, large models pre-trained on diverse data have demonstrated unprecedented robustness to various distribution shifts. However, fine-tuning these models can lead to a trade-off between in-distribution (ID) performance and out-of-distribution (OOD) robustness. Existing methods for tackling this trade-off do not explicitly address the OOD robustness problem. In this paper, based on a causal analysis of the aforementioned problems, we propose a novel fine-tuning method that uses masked images as counterfactual samples to help improve the robustness of the fine-tuned model. Specifically, we mask either the semantics-related or semantics-unrelated patches of an image, guided by its class activation map, to break the spurious correlation, and refill the masked patches with patches from other images. The resulting counterfactual samples are used in feature-based distillation with the pre-trained model. Extensive experiments verify that regularizing fine-tuning with the proposed masked images achieves a better trade-off between ID and OOD performance, surpassing previous methods in OOD performance. Our code will be publicly available.
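To make the described pipeline concrete, the following PyTorch sketch illustrates CAM-guided patch masking with cross-image refilling and a feature-based distillation loss against the frozen pre-trained model. It is an illustrative simplification, not the paper's implementation: the function names, the patch-scoring scheme (averaging the CAM inside each patch), the batch-roll donor strategy, and the MSE distillation objective are all assumptions made for this sketch.

```python
# Illustrative sketch (not the paper's code): CAM-guided counterfactual masking
# and feature distillation with a frozen pre-trained teacher.
import torch
import torch.nn.functional as F

def mask_and_refill(images, cams, patch_size=16, mask_ratio=0.5, mask_semantic=True):
    """Mask patches ranked by CAM score and refill them with patches
    drawn from other images in the batch, producing counterfactual samples.
    `cams` is assumed to be a (B, 1, H, W) class activation map."""
    B, C, H, W = images.shape
    ph, pw = H // patch_size, W // patch_size
    # Score each patch by its average CAM activation (semantic relevance).
    patch_scores = F.adaptive_avg_pool2d(cams, (ph, pw)).flatten(1)   # (B, ph*pw)
    k = int(mask_ratio * ph * pw)
    # mask_semantic=True masks high-CAM (semantics-related) patches;
    # False masks low-CAM (semantics-unrelated) patches.
    idx = patch_scores.topk(k, dim=1, largest=mask_semantic).indices  # (B, k)
    mask = torch.zeros(B, ph * pw, device=images.device)
    mask.scatter_(1, idx, 1.0)
    mask = mask.view(B, 1, ph, pw)
    mask = F.interpolate(mask, size=(H, W), mode="nearest")           # (B, 1, H, W)
    # Refill masked regions with patches from other images (batch roll as donor).
    donors = images.roll(shifts=1, dims=0)
    return images * (1 - mask) + donors * mask

def distillation_loss(student, frozen_teacher, counterfactual_images):
    """Feature-based distillation: align the fine-tuned model's features with
    the frozen pre-trained model's features on the counterfactual samples."""
    with torch.no_grad():
        t_feat = frozen_teacher(counterfactual_images)
    s_feat = student(counterfactual_images)
    return F.mse_loss(s_feat, t_feat)
```

In use, this distillation term would be added to the standard fine-tuning loss on the original images, so that the fine-tuned model keeps the pre-trained model's behavior on counterfactual inputs while adapting to the downstream task.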