The consistency loss has played a key role in recent studies on semi-supervised learning. Yet existing work on the consistency loss is limited to classification tasks, and existing work on semi-supervised semantic segmentation relies on pixel-wise classification, which does not reflect the structured nature of the predictions. We propose a structured consistency loss to address this limitation. The structured consistency loss promotes consistency in inter-pixel similarity between the teacher and student networks. Specifically, combining it with CutMix dramatically reduces the computational burden, enabling efficient semi-supervised semantic segmentation with the structured consistency loss. The superiority of the proposed method is verified on Cityscapes: the benchmark results on the validation and test data are 81.9 mIoU and 83.84 mIoU, respectively, ranking first on the pixel-level semantic labeling task of the Cityscapes benchmark suite. To the best of our knowledge, we are the first to demonstrate state-of-the-art semi-supervised learning performance in semantic segmentation.
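To make the idea of inter-pixel similarity consistency concrete, the following is a minimal PyTorch sketch of such a loss. It is an illustrative assumption, not the authors' exact implementation: the function name, the random pixel subsampling, and the use of cosine similarity with an MSE penalty are all choices made here for clarity.

```python
# Minimal sketch of a structured consistency loss: the student's inter-pixel
# similarity structure is encouraged to match the teacher's.
import torch
import torch.nn.functional as F

def structured_consistency_loss(student_feat, teacher_feat, num_pixels=256):
    """student_feat, teacher_feat: (B, C, H, W) feature maps for the same
    unlabeled (e.g. CutMix-mixed) input; the teacher provides targets only."""
    B, C, H, W = student_feat.shape
    s = student_feat.flatten(2).transpose(1, 2)           # (B, H*W, C)
    t = teacher_feat.detach().flatten(2).transpose(1, 2)  # (B, H*W, C), no gradient

    # Subsample pixel locations so the pairwise similarity matrix stays small;
    # restricting the loss to mixed regions (as with CutMix) serves the same
    # purpose of keeping the computation tractable.
    idx = torch.randperm(H * W, device=s.device)[:num_pixels]
    s, t = s[:, idx], t[:, idx]

    # Inter-pixel cosine similarity matrices for student and teacher.
    s = F.normalize(s, dim=-1)
    t = F.normalize(t, dim=-1)
    sim_s = s @ s.transpose(1, 2)   # (B, num_pixels, num_pixels)
    sim_t = t @ t.transpose(1, 2)

    # Penalize disagreement between the two similarity structures.
    return F.mse_loss(sim_s, sim_t)
```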