Fine-grained visual classification (FGVC), which aims at recognizing objects from subcategories, is a very challenging task due to the inherently subtle inter-class differences. Recent works mainly tackle this problem by locating the most discriminative image regions and relying on them to improve the network's ability to capture subtle variances. Most of these works achieve this by reusing the backbone network to extract features of the selected regions. However, this strategy inevitably complicates the pipeline and pushes the proposed regions to contain most parts of the object. Recently, the vision transformer (ViT) has shown strong performance on conventional classification tasks. Its self-attention mechanism links every patch token to the classification token, and the strength of an attention link can be intuitively regarded as an indicator of a token's importance. In this work, we propose TransFG, a novel transformer-based framework that integrates all raw attention weights of the transformer into an attention map, guiding the network to effectively and accurately select discriminative image patches and compute their relations. A contrastive loss is applied to further enlarge the distance between feature representations of similar sub-classes. We demonstrate the value of TransFG through experiments on five popular fine-grained benchmarks (CUB-200-2011, Stanford Cars, Stanford Dogs, NABirds, and iNat2017), where we achieve state-of-the-art performance. Qualitative results are presented for a better understanding of our model.
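The following is a minimal PyTorch sketch of the two mechanisms described above: fusing the raw attention weights of all layers into a single map used to select discriminative patch tokens, and a margin-based contrastive loss on classification-token features. Tensor shapes, the per-head matrix-product integration, the `margin` default, and all function names are illustrative assumptions, not the paper's reference implementation.

```python
import torch
import torch.nn.functional as F


def integrate_attention(attn_layers):
    """Fuse the raw softmax attention of all L layers into one map by
    matrix multiplication, kept separate per head (an assumed reading of
    the attention integration described in the abstract).

    attn_layers: list of L tensors of shape (B, H, N, N), where
    N = 1 classification (CLS) token + P patch tokens.
    Returns a tensor of shape (B, H, N, N).
    """
    joint = attn_layers[0]
    for attn in attn_layers[1:]:
        # Propagate attention through successive layers.
        joint = torch.matmul(attn, joint)
    return joint


def select_discriminative_tokens(tokens, attn_layers):
    """For every head, keep the patch token whose integrated attention
    from the CLS token is strongest, plus the CLS token itself.

    tokens: (B, N, D) hidden states; returns (B, H + 1, D).
    """
    joint = integrate_attention(attn_layers)            # (B, H, N, N)
    cls_to_patches = joint[:, :, 0, 1:]                 # CLS row, patch columns
    top_idx = cls_to_patches.argmax(dim=-1) + 1         # (B, H); +1 skips CLS slot
    batch_idx = torch.arange(tokens.size(0))[:, None]
    picked = tokens[batch_idx, top_idx]                 # (B, H, D)
    return torch.cat([tokens[:, :1], picked], dim=1)    # prepend CLS token


def contrastive_loss(features, labels, margin=0.4):
    """Pull CLS features of the same sub-class together and push pairs
    from different sub-classes apart until their cosine similarity falls
    below `margin` (the margin value here is an assumed hyperparameter).

    features: (B, D) classification-token embeddings; labels: (B,).
    """
    z = F.normalize(features, dim=-1)
    sim = z @ z.t()                                     # pairwise cosine similarity
    same = labels[:, None].eq(labels[None, :]).float()
    pos = (1.0 - sim) * same                            # same class: drive sim -> 1
    neg = torch.clamp(sim - margin, min=0.0) * (1.0 - same)
    return (pos + neg).sum() / (labels.numel() ** 2)
```

In this sketch the selected tokens would be fed to the final transformer layer in place of the full sequence, so the last block reasons only over the CLS token and the most discriminative patch per head.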