Vision Transformers (ViTs) have recently achieved competitive performance on a broad range of vision tasks. Unfortunately, under popular threat models, naturally trained ViTs are shown to provide no more adversarial robustness than convolutional neural networks (CNNs); adversarial training is still required for ViTs to defend against such adversarial attacks. In this paper, we provide the first comprehensive study of the adversarial training recipe for ViTs via extensive evaluation of various training techniques across benchmark datasets. We find that pre-training and the SGD optimizer are necessary for ViTs' adversarial training. Further, considering ViT as a new type of model architecture, we investigate its adversarial robustness from the perspective of its unique architectural components. We find that randomly masking gradients from some attention blocks, or masking perturbations on some patches, during adversarial training remarkably improves the adversarial robustness of ViTs, which may open up a line of work exploring the architectural information inside newly designed models like ViTs. Our code is available at https://github.com/mo666666/When-Adversarial-Training-Meets-Vision-Transformers.
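To make the patch-masking idea concrete, the following is a minimal NumPy sketch of zeroing the adversarial perturbation on a random subset of ViT patches during an adversarial-training step. The function name, argument names, and the Bernoulli keep probability are our own illustrative assumptions, not the paper's actual API; the real implementation lives in the linked repository.

```python
import numpy as np

def mask_patch_perturbation(delta, patch_size=16, keep_prob=0.5, rng=None):
    """Zero the perturbation on a random subset of non-overlapping patches.

    delta:      (C, H, W) adversarial perturbation for one image
    patch_size: side length of a ViT patch (H and W assumed divisible by it)
    keep_prob:  probability that a patch keeps its perturbation

    Hypothetical helper sketching the technique; each patch is either
    fully kept or fully zeroed, sampled independently per patch.
    """
    rng = rng or np.random.default_rng()
    c, h, w = delta.shape
    gh, gw = h // patch_size, w // patch_size
    # Per-patch Bernoulli(keep_prob) mask on the patch grid.
    keep = (rng.random((gh, gw)) < keep_prob).astype(delta.dtype)
    # Upsample the patch-level mask to pixel resolution.
    mask = np.kron(keep, np.ones((patch_size, patch_size), dtype=delta.dtype))
    return delta * mask[None, :, :]
```

During adversarial training, such a masked perturbation would be added to the clean image before the forward pass, so only the unmasked patches carry adversarial noise in that step.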