Vision transformers have demonstrated promising performance on challenging computer vision tasks. However, directly training vision transformers may yield unstable and sub-optimal results. Recent works propose to improve the performance of vision transformers by modifying the transformer structures, e.g., incorporating convolution layers. In contrast, we investigate an orthogonal approach to stabilize vision transformer training without modifying the networks. We observe that the training instability can be attributed to the significant similarity across the extracted patch representations. More specifically, for deep vision transformers, the self-attention blocks tend to map different patches into similar latent representations, yielding information loss and performance degradation. To alleviate this problem, we introduce novel loss functions into vision transformer training to explicitly encourage diversity across patch representations for more discriminative feature extraction. We empirically show that our proposed techniques stabilize training and allow us to train wider and deeper vision transformers. We further show that the diversified features significantly benefit downstream tasks in transfer learning. For semantic segmentation, we enhance the state-of-the-art (SOTA) results on Cityscapes and ADE20K. Our code is available at https://github.com/ChengyueGongR/PatchVisionTransformer.
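To make the idea of a patch-diversity regularizer concrete, the following is a minimal sketch of one possible loss term that penalizes high pairwise cosine similarity among patch tokens produced by a transformer block. The function name `patch_diversity_loss` and the weighting coefficient `lambda_div` are illustrative assumptions, not the paper's exact formulation; the actual loss functions used in this work may differ.

```python
import torch
import torch.nn.functional as F

def patch_diversity_loss(patch_tokens: torch.Tensor) -> torch.Tensor:
    """Penalize high pairwise cosine similarity across patch representations.

    patch_tokens: (batch, num_patches, dim) tensor of patch embeddings from a
    transformer block (class token excluded). Returns a scalar regularizer.
    """
    # Normalize each patch embedding to unit length.
    tokens = F.normalize(patch_tokens, dim=-1)              # (B, N, D)
    # Pairwise cosine similarities between patches within each image.
    sim = tokens @ tokens.transpose(1, 2)                    # (B, N, N)
    n = sim.size(1)
    # Remove each patch's similarity with itself before averaging.
    off_diag = sim - torch.eye(n, device=sim.device)
    return off_diag.sum(dim=(1, 2)).div(n * (n - 1)).mean()

# Hypothetical usage during training:
#   total_loss = task_loss + lambda_div * patch_diversity_loss(tokens)
```

Adding such a term to the standard training objective encourages different patches to occupy distinct directions in the latent space, which is the diversification effect the abstract describes.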