Consistency regularization has been widely studied in recent semi-supervised semantic segmentation methods, and remarkable performance has been achieved by exploiting image, feature, and network perturbations. To make full use of these perturbations, we propose a new consistency regularization framework called mutual knowledge distillation (MKD). We introduce two auxiliary mean-teacher models into the consistency regularization scheme: the pseudo labels generated by one mean teacher supervise the student network of the other branch, achieving mutual knowledge distillation between the two branches. In addition to image-level strong and weak augmentations, we employ a feature augmentation that accounts for implicit semantic distributions to add further perturbations to the students. The proposed framework significantly increases the diversity of the training samples. Extensive experiments on public benchmarks show that our framework outperforms previous state-of-the-art (SOTA) methods under various semi-supervised settings. Code is available at: https://github.com/jianlong-yuan/semi-mmseg.
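To make the cross-branch supervision concrete, below is a minimal, hypothetical PyTorch sketch of one unlabeled-data step: each mean teacher (an EMA copy of its student) pseudo-labels a weakly augmented batch, and those labels supervise the *other* branch's student on strongly augmented views. All names (`TinySeg`, `mkd_step`, `ema_update`), the confidence threshold, and the loss masking are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

```python
# Hypothetical sketch of mutual knowledge distillation between two
# mean-teacher branches; not the authors' actual implementation.
import copy
import torch
import torch.nn.functional as F

def ema_update(teacher, student, momentum=0.99):
    """Update mean-teacher weights as an exponential moving average of the student."""
    with torch.no_grad():
        for t_param, s_param in zip(teacher.parameters(), student.parameters()):
            t_param.mul_(momentum).add_(s_param, alpha=1.0 - momentum)

def mkd_step(student_a, student_b, teacher_a, teacher_b,
             weak_batch, strong_batch_a, strong_batch_b, conf_threshold=0.95):
    """One unlabeled-data step: each teacher pseudo-labels the weakly augmented
    view, and its labels supervise the other branch's student on strongly
    augmented views (mutual knowledge distillation)."""
    with torch.no_grad():
        # Teachers see weakly augmented images; argmax yields hard pseudo labels.
        prob_a, pseudo_a = teacher_a(weak_batch).softmax(dim=1).max(dim=1)
        prob_b, pseudo_b = teacher_b(weak_batch).softmax(dim=1).max(dim=1)

    # Cross supervision: teacher A's labels train student B, and vice versa.
    loss_b = F.cross_entropy(student_b(strong_batch_b), pseudo_a, reduction="none")
    loss_a = F.cross_entropy(student_a(strong_batch_a), pseudo_b, reduction="none")
    # Keep only confident pixels (the threshold and masking are illustrative choices).
    return (loss_a * (prob_b > conf_threshold)).mean() + \
           (loss_b * (prob_a > conf_threshold)).mean()

# Usage sketch with a stand-in segmentation network (assumed for illustration).
class TinySeg(torch.nn.Module):
    def __init__(self, num_classes=21):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, num_classes, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)

student_a, student_b = TinySeg(), TinySeg()
teacher_a = copy.deepcopy(student_a).requires_grad_(False)
teacher_b = copy.deepcopy(student_b).requires_grad_(False)

x_weak = torch.randn(2, 3, 64, 64)        # weakly augmented view (shared)
x_strong_a = torch.randn(2, 3, 64, 64)    # strongly augmented view, branch A
x_strong_b = torch.randn(2, 3, 64, 64)    # strongly augmented view, branch B

loss = mkd_step(student_a, student_b, teacher_a, teacher_b,
                x_weak, x_strong_a, x_strong_b)
loss.backward()
ema_update(teacher_a, student_a)
ema_update(teacher_b, student_b)
```

In this sketch the paper's feature augmentation is omitted for brevity; only the image-level strong/weak perturbations and the cross-branch pseudo-label flow are shown.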