Vision Transformers have become widely adopted owing to their state-of-the-art performance on several computer vision tasks, such as image classification and action recognition. Although their performance has been greatly enhanced through highly descriptive patch embeddings and hierarchical structures, there is still limited research on utilizing additional data representations to refine the self-attention map of a Transformer. To address this problem, a novel attention mechanism, called multi-manifold multi-head attention, is proposed in this work to substitute the vanilla self-attention of a Transformer. The proposed mechanism models the input space in three distinct manifolds, namely Euclidean, Symmetric Positive Definite and Grassmann, thus leveraging different statistical and geometrical properties of the input for the computation of a highly descriptive attention map. In this way, the proposed attention mechanism can guide a Vision Transformer to become more attentive towards important appearance, color and texture features of an image, leading to improved classification results, as shown by the experimental results on well-known image classification datasets.
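The abstract does not give the exact formulation, but the core idea — scoring query–key pairs on three manifolds and fusing the scores into one attention map — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the SPD construction (rank-one outer product plus a small multiple of the identity, compared with the log-Euclidean distance), the Grassmann construction (one-dimensional subspaces compared with the projection metric), and the simple averaging of the three score maps are all assumptions made for the sketch.

```python
import numpy as np

def euclidean_scores(Q, K):
    """Standard scaled dot-product attention logits."""
    return Q @ K.T / np.sqrt(Q.shape[-1])

def spd_scores(Q, K, eps=1e-5):
    """Represent each token by a small SPD matrix (outer product plus
    eps * I, an assumed construction) and compare tokens with the
    log-Euclidean distance; smaller distance -> larger score."""
    def spd(x):
        return np.outer(x, x) + eps * np.eye(len(x))
    def logm(S):
        # matrix logarithm of an SPD matrix via its eigendecomposition
        w, V = np.linalg.eigh(S)
        return (V * np.log(w)) @ V.T
    Lq = [logm(spd(q)) for q in Q]
    Lk = [logm(spd(k)) for k in K]
    return -np.array([[np.linalg.norm(a - b) for b in Lk] for a in Lq])

def grassmann_scores(Q, K):
    """Represent each token by the 1-D subspace it spans and use the
    projection metric ||q q^T / ||q||^2 - k k^T / ||k||^2||_F on the
    Grassmann manifold; smaller distance -> larger score."""
    def proj(x):
        u = x / (np.linalg.norm(x) + 1e-12)
        return np.outer(u, u)
    Pk = [proj(k) for k in K]
    return -np.array([[np.linalg.norm(proj(q) - p) for p in Pk] for q in Q])

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multi_manifold_attention(Q, K, V):
    """Fuse the three manifold score maps (here: a plain average, an
    assumed fusion rule) and attend over V."""
    scores = (euclidean_scores(Q, K)
              + spd_scores(Q, K)
              + grassmann_scores(Q, K)) / 3
    return softmax(scores) @ V
```

In a full model this operation would replace the scaled dot-product inside each attention head, with the fusion weights potentially learned rather than fixed.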