Despite the success of vision transformers (ViTs), they still suffer from significant drops in accuracy in the presence of common corruptions, such as noise or blur. Interestingly, we observe that the attention mechanism of ViTs tends to rely on a few important tokens, a phenomenon we call token overfocusing. More critically, these tokens are not robust to corruptions, often leading to highly diverging attention patterns. In this paper, we aim to alleviate this overfocusing issue and make attention more stable through two general techniques: First, our Token-aware Average Pooling (TAP) module encourages the local neighborhood of each token to take part in the attention mechanism. Specifically, TAP learns average pooling schemes for each token such that the information of potentially important tokens in the neighborhood can adaptively be taken into account. Second, our Attention Diversification Loss (ADL) forces the output tokens to aggregate information from a diverse set of input tokens rather than focusing on just a few. We achieve this by penalizing high cosine similarity between the attention vectors of different tokens. In experiments, we apply our methods to a wide range of transformer architectures and improve robustness significantly. For example, we improve corruption robustness on ImageNet-C by 2.4% while simultaneously improving accuracy by 0.4% on top of the state-of-the-art robust architecture FAN. Furthermore, when fine-tuning on semantic segmentation tasks, we improve robustness on CityScapes-C by 2.4% and on ACDC by 3.1%.
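To give a rough picture of the TAP idea, the PyTorch sketch below predicts a per-token weighting over a k × k spatial neighborhood and average-pools the tokens accordingly. The module name, the linear weight-prediction head, and the neighborhood size are our own illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TokenAwarePooling(nn.Module):
    """Illustrative sketch of token-aware average pooling: for each token,
    predict weights over its k x k neighborhood and replace the token with
    the weighted average of its neighbors. All names and the weight head
    are assumptions, not the paper's exact architecture."""

    def __init__(self, dim: int, k: int = 3):
        super().__init__()
        self.k = k
        # Hypothetical head: predicts k*k pooling weights per token.
        self.weight_head = nn.Linear(dim, k * k)

    def forward(self, x: torch.Tensor, h: int, w: int) -> torch.Tensor:
        # x: (B, N, C) token sequence on an h x w grid (N = h * w).
        b, n, c = x.shape
        # Per-token pooling scheme over the k x k neighborhood.
        weights = F.softmax(self.weight_head(x), dim=-1)          # (B, N, k*k)
        # Gather each token's k x k neighborhood via unfold.
        grid = x.transpose(1, 2).reshape(b, c, h, w)
        patches = F.unfold(grid, self.k, padding=self.k // 2)     # (B, C*k*k, N)
        patches = patches.reshape(b, c, self.k * self.k, n)
        # Weighted average over the neighborhood, independently per token.
        return torch.einsum('bckn,bnk->bnc', patches, weights)
```

Such a module would plausibly be applied to the token sequence before attention, so that each query/key carries adaptively pooled information from its neighborhood; where exactly it is placed in the block is not specified by the abstract.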
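The ADL penalty can likewise be made concrete: given a post-softmax attention map, one can penalize the mean pairwise cosine similarity between the attention vectors of different output tokens. The function below is a minimal sketch under that reading of the abstract; the exact normalization, per-head handling, and weighting used in the paper may differ.

```python
import torch
import torch.nn.functional as F

def attention_diversification_loss(attn: torch.Tensor) -> torch.Tensor:
    """Penalize high cosine similarity between attention vectors of
    different output tokens. `attn` is a post-softmax attention map of
    shape (B, heads, N, N); attn[..., i, :] is token i's attention vector.
    Sketch only; not the paper's exact formulation."""
    # L2-normalize each token's attention vector.
    a = F.normalize(attn, p=2, dim=-1)                       # (B, H, N, N)
    # Pairwise cosine similarities between attention vectors.
    sim = torch.matmul(a, a.transpose(-2, -1))               # (B, H, N, N)
    n = sim.shape[-1]
    # Zero out each vector's similarity with itself (always 1).
    off_diag = sim - torch.eye(n, device=sim.device, dtype=sim.dtype)
    # Mean off-diagonal similarity; minimizing it diversifies attention.
    return off_diag.sum(dim=(-2, -1)).mean() / (n * (n - 1))
```

In training, such a term would typically be added to the task loss with a small weight, e.g. `loss = ce_loss + lambda_adl * attention_diversification_loss(attn)`, where `lambda_adl` is a hypothetical hyperparameter not given in the abstract.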