Fine-grained visual classification (FGVC), which aims to recognize objects from subcategories, is a very challenging task due to the inherently subtle inter-class differences. Recent works mainly tackle this problem by focusing on how to locate the most discriminative image regions and rely on them to improve the capability of networks to capture subtle variances. Most of these works achieve this by re-using the backbone network to extract features of the selected regions. However, this strategy inevitably complicates the pipeline and pushes the proposed regions to contain most parts of the objects. Recently, the vision transformer (ViT) has shown strong performance on the traditional classification task. The self-attention mechanism of the transformer links every patch token to the classification token, and the strength of an attention link can be intuitively considered an indicator of the importance of that token. In this work, we propose a novel transformer-based framework, TransFG, in which we integrate all raw attention weights of the transformer into an attention map that guides the network to effectively and accurately select discriminative image patches and compute their relations. A contrastive loss is applied to further enlarge the distance between feature representations of similar sub-classes. We demonstrate the value of TransFG by conducting experiments on five popular fine-grained benchmarks: CUB-200-2011, Stanford Cars, Stanford Dogs, NABirds and iNat2017, where we achieve state-of-the-art performance. Qualitative results are presented for better understanding of our model.
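The two ideas in the abstract can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the function names, the margin value, and the head-averaged attention input are all illustrative assumptions. It shows (1) fusing per-layer attention matrices by matrix multiplication (attention-rollout style) and ranking patch tokens by the classification token's integrated attention to them, and (2) a hinged cosine-similarity contrastive loss that pulls same-class features together while pushing different-class similarities below a margin.

```python
import numpy as np

def integrate_attention(attn_layers):
    """Fuse per-layer attention matrices (head-averaged, row-stochastic)
    by matrix multiplication, propagating attention from the input
    tokens through every layer (attention-rollout style)."""
    fused = attn_layers[0]
    for a in attn_layers[1:]:
        fused = a @ fused
    return fused

def select_patches(attn_layers, k):
    """Rank patch tokens by the classification token's integrated
    attention to them and keep the indices of the top-k patches."""
    fused = integrate_attention(attn_layers)
    cls_to_patch = fused[0, 1:]  # row 0 = CLS token; columns 1.. = patches
    return np.argsort(cls_to_patch)[::-1][:k]

def contrastive_loss(feats, labels, margin=0.4):
    """Contrastive loss on L2-normalized features: minimize
    (1 - cosine similarity) for same-class pairs and penalize
    different-class pairs whose similarity exceeds `margin`."""
    z = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = z @ z.T
    n = len(labels)
    eye = np.eye(n, dtype=bool)
    same = (labels[:, None] == labels[None, :]) & ~eye
    diff = (labels[:, None] != labels[None, :]) & ~eye
    pos = (1.0 - sim)[same].sum()
    neg = np.maximum(sim - margin, 0.0)[diff].sum()
    return (pos + neg) / (n * n)
```

Selecting tokens by the product of attention maps, rather than by a single layer's weights, accounts for how attention is re-mixed across layers before reaching the classification token.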

