Transformer models have shown promising effectiveness on various vision tasks. However, compared with training Convolutional Neural Network (CNN) models, training Vision Transformer (ViT) models is more difficult and relies on large-scale training sets. To explain this observation, we hypothesize that \textit{ViT models are less effective than CNN models at capturing the high-frequency components of images}, and verify it through a frequency analysis. Inspired by this finding, we first investigate the effects of existing techniques for improving ViT models from a new frequency perspective, and find that the success of some techniques (e.g., RandAugment) can be attributed to better usage of the high-frequency components. Then, to compensate for this insufficient ability of ViT models, we propose HAT, which directly augments the high-frequency components of images via adversarial training. We show that HAT consistently boosts the performance of various ViT models (e.g., +1.2% for ViT-B, +0.5% for Swin-B), and in particular raises the advanced model VOLO-D5 to 87.3% using only ImageNet-1K data; the superiority is also maintained on out-of-distribution data and transfers to downstream tasks. The code is available at: https://github.com/jiawangbai/HAT.
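The abstract only states that HAT augments the high-frequency components of images via adversarial training; as a minimal sketch of that idea (not the paper's exact procedure), the PyTorch snippet below isolates the high-frequency band of the loss gradient with an FFT-based high-pass filter and takes a single sign-gradient ascent step in that band. The helper names (\texttt{hat\_augment}, \texttt{high\_frequency\_mask}), the cutoff radius, and the step size are illustrative assumptions.

\begin{verbatim}
import torch
import torch.fft

def high_frequency_mask(h, w, radius):
    """Keep frequencies outside a centered low-frequency disk (assumed cutoff)."""
    ys = torch.arange(h).view(-1, 1) - h // 2
    xs = torch.arange(w).view(1, -1) - w // 2
    return ((ys ** 2 + xs ** 2) > radius ** 2).float()

def hat_augment(images, labels, model, loss_fn, radius=16, step=0.05):
    """Adversarially perturb only the high-frequency band of a batch of images."""
    images = images.clone().detach().requires_grad_(True)
    loss = loss_fn(model(images), labels)
    grad, = torch.autograd.grad(loss, images)

    # High-pass filter the gradient in the frequency domain.
    mask = high_frequency_mask(images.shape[-2], images.shape[-1], radius).to(images.device)
    grad_freq = torch.fft.fftshift(torch.fft.fft2(grad), dim=(-2, -1)) * mask
    hf_grad = torch.fft.ifft2(torch.fft.ifftshift(grad_freq, dim=(-2, -1))).real

    # Ascend the loss along the high-frequency direction (adversarial augmentation).
    return (images + step * hf_grad.sign()).clamp(0, 1).detach()
\end{verbatim}

Such augmented images would then be fed to the ViT together with (or instead of) the clean ones during training, encouraging the model to make better use of high-frequency content.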