Weakly-supervised instance segmentation (WSIS) is considered a more challenging task than weakly-supervised semantic segmentation (WSSS). Compared to WSSS, WSIS requires instance-wise localization, which is difficult to extract from image-level labels. To tackle this problem, most WSIS approaches use off-the-shelf proposal techniques that require pre-training with instance- or object-level labels, deviating from the fundamental definition of the fully image-level supervised setting. In this paper, we propose a novel approach with two innovative components. First, we propose a semantic knowledge transfer method that obtains pseudo instance labels by transferring the knowledge of WSSS to WSIS, eliminating the need for off-the-shelf proposals. Second, we propose a self-refinement method that refines the pseudo instance labels in a self-supervised scheme and uses the refined labels for training in an online manner. In doing so, we identify an erroneous phenomenon in which instances missing from the pseudo instance labels are categorized as the background class. This causes confusion between background and instances during training and consequently degrades segmentation performance. We term this issue the semantic drift problem and show that our proposed self-refinement method eliminates it. Extensive experiments on PASCAL VOC 2012 and MS COCO demonstrate the effectiveness of our approach, which achieves considerable performance without off-the-shelf proposal techniques. The code will be made available.
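To make the semantic drift issue concrete, below is a minimal, hypothetical PyTorch sketch (not the method proposed in this paper): pixels that a WSSS foreground map marks as object but that no pseudo instance covers are likely missing instances, so instead of supervising them as background (which would cause drift), they are excluded from the loss. All function and variable names here are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def instance_loss_with_drift_guard(logits, pseudo_instance, semantic_fg, ignore_index=255):
    """Illustrative sketch only, not the paper's exact formulation.

    logits:          (B, C, H, W) instance/background class scores
    pseudo_instance: (B, H, W)    pseudo instance labels, 0 = background
    semantic_fg:     (B, H, W)    binary foreground mask from WSSS
    """
    target = pseudo_instance.clone()
    # Hypothetical drift guard: foreground according to semantics,
    # but background according to the pseudo instance labels.
    missing = (semantic_fg > 0) & (pseudo_instance == 0)
    target[missing] = ignore_index  # do not supervise these pixels as background
    return F.cross_entropy(logits, target, ignore_index=ignore_index)


if __name__ == "__main__":
    B, C, H, W = 2, 4, 8, 8
    logits = torch.randn(B, C, H, W)
    pseudo = torch.randint(0, C, (B, H, W))
    sem_fg = torch.randint(0, 2, (B, H, W))
    print(instance_loss_with_drift_guard(logits, pseudo, sem_fg).item())
```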