We investigate the robustness of vision transformers (ViTs) through the lens of their distinctive patch-based architecture: they process an image as a sequence of image patches. We find that ViTs are surprisingly insensitive to patch-based transformations, even when the transformation largely destroys the original semantics and makes the image unrecognizable to humans. This indicates that ViTs rely heavily on features that survive such transformations but are generally not indicative of the semantic class to humans. Further investigation shows that these features are useful but non-robust: ViTs trained on them can achieve high in-distribution accuracy, but break down under distribution shifts. From this understanding, we ask: can training the model to rely less on these features improve ViT robustness and out-of-distribution performance? We use images transformed with our patch-based operations as negatively augmented views and propose losses that regularize training away from using non-robust features. This complements existing research, which mostly focuses on augmenting inputs with semantics-preserving transformations to enforce the model's invariance. We show that patch-based negative augmentation consistently improves the robustness of ViTs across a wide set of ImageNet-based robustness benchmarks. Furthermore, we find that patch-based negative augmentation is complementary to traditional (positive) data augmentation, and the two together further boost performance. All the code in this work will be open-sourced.
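To make the idea concrete, below is a minimal PyTorch-style sketch of one plausible patch-based operation (random patch shuffling) used as a negative augmentation. The function names (`patch_shuffle`, `negative_augmentation_loss`) and the specific penalty term are illustrative assumptions, not the paper's exact operations or losses.

```python
import torch
import torch.nn.functional as F

def patch_shuffle(images: torch.Tensor, patch_size: int) -> torch.Tensor:
    """Randomly permute the positions of non-overlapping patches.

    images: (B, C, H, W) with H and W divisible by patch_size. Shuffling
    destroys global semantics while preserving patch-level statistics.
    """
    b, c, h, w = images.shape
    gh, gw = h // patch_size, w // patch_size
    # Split into patches: (B, C, gh, p, gw, p) -> (B, gh*gw, C, p, p).
    patches = (images
               .reshape(b, c, gh, patch_size, gw, patch_size)
               .permute(0, 2, 4, 1, 3, 5)
               .reshape(b, gh * gw, c, patch_size, patch_size))
    # One random permutation of patch positions, shared across the batch.
    perm = torch.randperm(gh * gw, device=images.device)
    patches = patches[:, perm]
    # Reassemble the permuted patches into an image.
    return (patches
            .reshape(b, gh, gw, c, patch_size, patch_size)
            .permute(0, 3, 1, 4, 2, 5)
            .reshape(b, c, h, w))

def negative_augmentation_loss(model, images, labels, patch_size=16):
    """Cross-entropy on clean images plus a penalty that discourages
    assigning the true label to the semantics-destroying shuffled view.
    The penalty below is one plausible regularizer (an assumption), not
    necessarily the paper's exact formulation."""
    clean_loss = F.cross_entropy(model(images), labels)
    neg_logits = model(patch_shuffle(images, patch_size))
    # Probability mass the model still puts on the original class for
    # the negative view; pushing this down regularizes the model away
    # from features that survive the patch shuffle.
    neg_prob = F.softmax(neg_logits, dim=-1).gather(1, labels[:, None]).squeeze(1)
    return clean_loss + neg_prob.mean()
```

In this sketch, the negative view enters training with the opposite role of a standard (positive) augmentation: instead of enforcing invariance, the added term penalizes the model when its prediction is invariant to a transformation that humans would consider semantics-destroying.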