Deep neural networks (DNNs) have achieved great success in semantic segmentation, which requires a large amount of labeled data for training. We present a novel learning framework called Uncertainty guided Cross-head Co-training (UCC) for semi-supervised semantic segmentation. Our framework introduces weak and strong augmentations on top of a shared encoder to achieve co-training, which naturally combines the benefits of consistency and self-training. Each segmentation head interacts with its peers, and the prediction on the weakly augmented input is used to supervise the strongly augmented one. The diversity of consistency-training samples is boosted by Dynamic Cross-Set Copy-Paste (DCSCP), which also alleviates the distribution-mismatch and class-imbalance problems. Moreover, our proposed Uncertainty Guided Re-weight Module (UGRM) enhances self-training by suppressing the effect of low-quality pseudo labels from a head's peer via uncertainty modeling. Extensive experiments on Cityscapes and PASCAL VOC 2012 demonstrate the effectiveness of our UCC. Our approach significantly outperforms other state-of-the-art semi-supervised semantic segmentation methods, achieving 77.17$\%$ and 76.49$\%$ mIoU on Cityscapes and PASCAL VOC 2012 respectively under the 1/16 protocol, which is +10.1$\%$ and +7.91$\%$ better than the supervised baseline.
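To make the cross-head weak-to-strong supervision and the uncertainty re-weighting concrete, below is a minimal sketch of one direction of the co-training loss. It assumes two segmentation heads over a shared encoder and uses a simple entropy-based per-pixel weight in the spirit of UGRM; the function names, interfaces, and the exact weighting formula are illustrative assumptions, not the paper's implementation.

```python
# Sketch: one peer head's prediction on the weakly augmented input produces
# pseudo labels that supervise the other head on the strongly augmented input.
# High-entropy (uncertain) pixels are down-weighted, loosely mirroring UGRM.
import torch
import torch.nn.functional as F

def cross_head_loss(logits_weak_peer, logits_strong, num_classes):
    """Uncertainty-weighted pseudo-label loss (illustrative, not the paper's exact form)."""
    with torch.no_grad():
        prob = F.softmax(logits_weak_peer, dim=1)        # (B, C, H, W)
        pseudo = prob.argmax(dim=1)                      # hard pseudo labels, (B, H, W)
        entropy = -(prob * prob.clamp_min(1e-8).log()).sum(dim=1)
        # Normalize entropy by its maximum log(C), then turn it into a weight:
        weight = 1.0 - entropy / torch.log(torch.tensor(float(num_classes)))
    loss = F.cross_entropy(logits_strong, pseudo, reduction="none")  # (B, H, W)
    return (weight * loss).mean()

# Assumed usage with a shared encoder `enc` and two heads `head_a`, `head_b`:
#   loss = cross_head_loss(head_a(enc(x_weak)), head_b(enc(x_strong)), C) \
#        + cross_head_loss(head_b(enc(x_weak)), head_a(enc(x_strong)), C)
```

The symmetric two-term usage reflects the abstract's statement that every head interacts with its peers: each head both generates pseudo labels from the weak view and is supervised on the strong view.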