It is commonly accepted that the Vision Transformer model requires sophisticated regularization techniques to excel at ImageNet-1k scale data. Surprisingly, we find this is not the case and standard data augmentation is sufficient. This note presents a few minor modifications to the original Vision Transformer (ViT) vanilla training setting that dramatically improve the performance of plain ViT models. Notably, 90 epochs of training surpass 76% top-1 accuracy in under seven hours on a TPUv3-8, similar to the classic ResNet50 baseline, and 300 epochs of training reach 80% in less than one day.
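To make the claim that "standard data augmentation is sufficient" concrete, the following is a minimal illustrative sketch of such a pipeline, written in PyTorch/torchvision as an assumption for clarity (the authors' actual code uses the JAX-based big_vision codebase, not this). The light settings shown, RandAugment(2, 10) and Mixup alpha = 0.2, follow the values this recipe is associated with, but should be read as a sketch rather than the definitive implementation.

```python
# Illustrative sketch only: a "standard augmentation" training pipeline,
# assumed here in torchvision; the note itself uses big_vision (JAX).
import torch
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),                 # random crop, resized to 224x224
    transforms.RandomHorizontalFlip(),                 # 50% left-right flip
    transforms.RandAugment(num_ops=2, magnitude=10),   # light RandAugment(2, 10)
    transforms.ToTensor(),
])

def mixup(images, labels, alpha=0.2):
    """Mixup: blend a batch with a shuffled copy of itself.

    Returns mixed images, both label sets, and the mixing weight, so the
    loss can be computed as lam * CE(out, y_a) + (1 - lam) * CE(out, y_b).
    """
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(images.size(0))
    return lam * images + (1 - lam) * images[perm], labels, labels[perm], lam
```

With this sketch, the per-batch training loss would be `lam * criterion(logits, y_a) + (1 - lam) * criterion(logits, y_b)`; the point of the note is that no heavier regularization (stochastic depth, dropout, and the like) is needed beyond augmentation of this kind.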