Pretrained large-scale vision-language models like CLIP have exhibited strong generalization over unseen tasks. Yet imperceptible adversarial perturbations can significantly reduce CLIP's performance on new tasks. In this work, we identify and explore the problem of \emph{adapting large-scale models for zero-shot adversarial robustness}. We first identify two key factors during model adaptation -- training losses and adaptation methods -- that affect the model's zero-shot adversarial robustness. We then propose a text-guided contrastive adversarial training loss, which aligns text embeddings and adversarial visual features via contrastive learning on a small set of training data. We apply this training loss to two adaptation methods, model finetuning and visual prompt tuning. We find that visual prompt tuning is more effective in the absence of text guidance, while finetuning wins when text guidance is available. Overall, our approach significantly improves zero-shot adversarial robustness over CLIP, with an average improvement of over 31 points across ImageNet and 15 zero-shot datasets. We hope this work sheds light on understanding the zero-shot adversarial robustness of large-scale models.
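To make the described loss concrete, below is a minimal PyTorch sketch, not the authors' released implementation, of a text-guided contrastive adversarial training step under stated assumptions: a CLIP-like model exposing `encode_image`, per-class text embeddings precomputed from prompts, and illustrative PGD hyperparameters (`eps`, `alpha`, `steps`) chosen for exposition only.

```python
import torch
import torch.nn.functional as F


def text_contrastive_loss(image_features, class_text_features, labels, temperature=0.07):
    # Contrastive alignment: cosine similarity of each (adversarial) image
    # embedding to every class text embedding, trained with cross-entropy
    # against the ground-truth class index.
    image_features = F.normalize(image_features, dim=-1)
    class_text_features = F.normalize(class_text_features, dim=-1)
    logits = image_features @ class_text_features.t() / temperature
    return F.cross_entropy(logits, labels)


def pgd_adversarial(model, images, class_text_features, labels,
                    eps=4 / 255, alpha=1 / 255, steps=3):
    # Generate L_inf-bounded adversarial images by ascending the same
    # text-guided contrastive loss (hyperparameters are illustrative).
    adv = images.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = text_contrastive_loss(model.encode_image(adv), class_text_features, labels)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()
        adv = images + (adv - images).clamp(-eps, eps)
        adv = adv.clamp(0, 1)
    return adv.detach()


# Sketch of one adaptation step (finetuning or visual prompt tuning):
# adv_images = pgd_adversarial(model, images, text_feats, labels)
# loss = text_contrastive_loss(model.encode_image(adv_images), text_feats, labels)
# loss.backward(); optimizer.step()
```

The same loss drives both the inner attack and the outer adaptation update; whether the gradients update all model weights (finetuning) or only learnable visual prompt tokens (visual prompt tuning) is the adaptation-method choice the abstract contrasts.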