Vision Transformers (ViTs) and MLPs signal further efforts to replace hand-wired features and inductive biases with general-purpose neural architectures. Existing works empower these models with massive data, such as large-scale pretraining and/or repeated strong data augmentations, yet still report optimization-related problems (e.g., sensitivity to initialization and learning rates). Hence, this paper investigates ViTs and MLP-Mixers through the lens of loss geometry, intending to improve the models' data efficiency at training and generalization at inference. Visualizations and Hessian analysis reveal extremely sharp local minima in converged models. By promoting smoothness with a recently proposed sharpness-aware optimizer, we substantially improve the accuracy and robustness of ViTs and MLP-Mixers on various tasks spanning supervised, adversarial, contrastive, and transfer learning (e.g., +5.3\% and +11.0\% top-1 accuracy on ImageNet for ViT-B/16 and Mixer-B/16, respectively, with simple Inception-style preprocessing). We show that the improved smoothness is attributable to sparser active neurons in the first few layers. The resultant ViTs outperform ResNets of similar size and throughput when trained from scratch on ImageNet, without large-scale pretraining or strong data augmentations. They also possess more perceptive attention maps.
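For concreteness, the following is a minimal sketch of the sharpness-aware objective and its standard first-order update, assuming the "recently proposed sharpness-aware optimizer" refers to SAM; the symbols $w$ (weights), $L$ (training loss), $\rho$ (neighborhood radius), and $\eta$ (learning rate) are our illustrative notation, not taken from the text above:
\[
\min_{w}\; \max_{\|\epsilon\|_2 \le \rho} L(w + \epsilon),
\qquad
\hat{\epsilon}(w) = \rho\, \frac{\nabla_w L(w)}{\|\nabla_w L(w)\|_2},
\qquad
w \leftarrow w - \eta\, \nabla_w L\big(w + \hat{\epsilon}(w)\big).
\]
Each such step costs roughly two forward-backward passes: one to compute the perturbation $\hat{\epsilon}(w)$ that locally maximizes the loss, and one to compute the descent gradient at the perturbed point $w + \hat{\epsilon}(w)$, which is what biases training toward flatter minima.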