Weakly supervised semantic segmentation is receiving great attention due to its low human annotation cost. In this paper, we aim to tackle bounding-box-supervised semantic segmentation, i.e., training accurate semantic segmentation models using only bounding box annotations as supervision. To this end, we propose the Affinity Attention Graph Neural Network ($A^2$GNN). Following previous practices, we first generate pseudo semantic-aware seeds, which are then formed into semantic graphs based on our newly proposed affinity Convolutional Neural Network (CNN). The built graphs are then fed into our $A^2$GNN, in which an affinity attention layer is designed to acquire short- and long-distance information from soft graph edges and accurately propagate semantic labels from the confident seeds to the unlabeled pixels. To guarantee the precision of the seeds, however, we adopt only a limited number of confident pixel seed labels for $A^2$GNN, which may lead to insufficient supervision during training. To alleviate this issue, we further introduce a new loss function and a consistency-checking mechanism that leverage the bounding box constraint, so that more reliable guidance can be included in the model optimization. Experiments show that our approach achieves new state-of-the-art performance on the PASCAL VOC 2012 dataset (val: 76.5\%, test: 75.2\%). More importantly, our approach can be readily applied to the bounding-box-supervised instance segmentation task and other weakly supervised semantic segmentation tasks, achieving state-of-the-art or comparable performance on almost all weakly supervised tasks on the PASCAL VOC and COCO datasets. Our source code will be available at https://github.com/zbf1991/A2GNN.
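To illustrate the label-propagation idea described above, here is a minimal sketch (in PyTorch) of one affinity-attention message-passing step: soft graph edges are turned into attention weights via a row-wise softmax, and labels are propagated from confident seeds to unlabeled nodes, with multiple hops supplying the long-distance information. This is not the authors' $A^2$GNN implementation; the function name, the hop count, and the re-clamping of seed labels after each hop are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of affinity-attention
# label propagation over a semantic graph.
import torch
import torch.nn.functional as F

def affinity_attention_propagate(labels, affinity, seed_mask, num_hops=3):
    """Propagate soft labels across a graph whose soft edges carry affinities.

    labels:    (N, C) one-hot labels on seed nodes, zeros elsewhere
    affinity:  (N, N) non-negative soft edge weights (0 = no edge)
    seed_mask: (N,) bool, True for confident seed nodes
    """
    probs = labels.clone()
    # Convert affinities into attention weights with a row-wise softmax,
    # masking absent edges so they receive zero attention.
    attn = affinity.masked_fill(affinity == 0, float("-inf"))
    attn = F.softmax(attn, dim=1)
    attn = torch.nan_to_num(attn)  # rows with no edges become all-zero
    for _ in range(num_hops):  # multiple hops capture long-distance cues
        probs = attn @ probs
        # Clamp seeds back to their confident labels after each hop.
        probs[seed_mask] = labels[seed_mask]
    return probs
```

In this sketch the attention weights are fixed by the affinities; in a trainable layer they would instead be produced by learned parameters conditioned on those affinities, which is what the affinity attention layer in the abstract refers to.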