Inspired by the tremendous success of the self-attention mechanism in natural language processing, the Vision Transformer (ViT) creatively applies it to image patch sequences and achieves remarkable performance. However, the scaled dot-product self-attention of ViT introduces scale ambiguity into the structure of the original feature space. To address this problem, we propose a novel method named Orthogonal Vision Transformer (O-ViT), which optimizes ViT from a geometric perspective. O-ViT constrains the parameters of self-attention blocks to lie on the norm-preserving orthogonal manifold, which preserves the geometry of the feature space. Moreover, O-ViT achieves both orthogonal constraints and low optimization overhead by adopting a surjective mapping between the orthogonal group and its Lie algebra. We have conducted comparative experiments on image recognition tasks to demonstrate the validity of O-ViT, and the results show that O-ViT can boost the performance of ViT by up to 3.6%.
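The abstract describes constraining self-attention parameters to the orthogonal manifold through a mapping between the orthogonal group and its Lie algebra. Below is a minimal PyTorch sketch of one such parameterization, in which a weight matrix is obtained as the matrix exponential of a skew-symmetric matrix (an element of the Lie algebra so(d)); the class name `OrthogonalLinear`, the zero initialization, and the use of `torch.linalg.matrix_exp` are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class OrthogonalLinear(nn.Module):
    """Projection whose weight is kept on the orthogonal manifold.

    The weight is parameterized as W = exp(A - A^T): the matrix exponential
    of a skew-symmetric matrix is always orthogonal, so the optimizer can
    update the unconstrained parameter A freely while W stays norm-preserving.
    """

    def __init__(self, dim: int):
        super().__init__()
        # Unconstrained parameter; its skew-symmetric part lives in the Lie algebra so(dim).
        self.A = nn.Parameter(torch.zeros(dim, dim))

    def weight(self) -> torch.Tensor:
        skew = self.A - self.A.transpose(-1, -2)   # skew-symmetric: S^T = -S
        return torch.linalg.matrix_exp(skew)       # exp(S) is orthogonal

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ self.weight().transpose(-1, -2)


# Usage sketch: such a layer could replace the query/key/value projections of a
# self-attention block so that the projections preserve feature norms.
if __name__ == "__main__":
    layer = OrthogonalLinear(dim=64)
    x = torch.randn(8, 16, 64)                     # (batch, tokens, dim)
    y = layer(x)
    W = layer.weight()
    # Orthogonality check: W^T W should be (numerically) the identity.
    print(torch.allclose(W.T @ W, torch.eye(64), atol=1e-5))
```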