This paper explores the properties of the plain Vision Transformer (ViT) for Weakly-supervised Semantic Segmentation (WSSS). The class activation map (CAM) is critical for understanding a classification network and for launching WSSS. We observe that different attention heads of the ViT focus on different image areas. We therefore propose a novel weight-based method that estimates the importance of attention heads end-to-end, and adaptively fuses the self-attention maps into high-quality CAM results that tend to cover objects more completely. In addition, we propose a ViT-based gradient clipping decoder for online retraining with the CAM results to complete the WSSS task. We name this plain-Transformer-based weakly-supervised learning framework WeakTr. It achieves state-of-the-art WSSS performance on standard benchmarks, i.e., 78.4% mIoU on the val set of PASCAL VOC 2012 and 50.3% mIoU on the val set of COCO 2014. Code is available at https://github.com/hustvl/WeakTr.
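To make the adaptive fusion idea concrete, below is a minimal PyTorch sketch of weight-based fusion of per-head self-attention maps, assuming a ViT whose attention maps from all layers and heads are available. The names (`AdaptiveAttentionFusion`, `head_weights`) and the CAM-refinement step at the end are illustrative assumptions, not the authors' actual implementation.

```python
# A minimal sketch, assuming per-head self-attention maps of shape
# (B, L*H, N, N) are collected from a ViT; not the official WeakTr code.
import torch
import torch.nn as nn

class AdaptiveAttentionFusion(nn.Module):
    """Fuse per-head self-attention maps with learned head importance."""

    def __init__(self, num_layers: int, num_heads: int):
        super().__init__()
        # One learnable importance weight per attention head, estimated
        # end-to-end together with the classification objective.
        self.head_weights = nn.Parameter(torch.ones(num_layers * num_heads))

    def forward(self, attn_maps: torch.Tensor) -> torch.Tensor:
        # attn_maps: (B, L*H, N, N), flattened over layers and heads.
        w = torch.softmax(self.head_weights, dim=0)        # (L*H,)
        fused = torch.einsum("h,bhij->bij", w, attn_maps)  # (B, N, N)
        return fused

# Usage: propagate a coarse per-patch CAM through the fused attention,
# in the spirit of attention-based CAM refinement.
B, L, H, N, C = 2, 12, 6, 196, 20
fusion = AdaptiveAttentionFusion(L, H)
attn = torch.rand(B, L * H, N, N).softmax(dim=-1)
coarse_cam = torch.rand(B, N, C)                   # per-patch class scores
refined_cam = torch.bmm(fusion(attn), coarse_cam)  # (B, N, C)
```

Because the head weights are trained jointly with the network, heads whose attention aligns with object regions can be up-weighted automatically rather than averaged uniformly.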