Consistency regularization has been widely studied in recent semi-supervised semantic segmentation methods, and remarkable performance has been achieved by exploiting image, feature, and network perturbations. To make full use of these perturbations, in this work we propose a new consistency regularization framework called mutual knowledge distillation (MKD). We introduce two auxiliary mean-teacher models on top of the consistency regularization method: the pseudo labels generated by one mean teacher supervise the other branch's student network, achieving mutual knowledge distillation between the two branches. In addition to strong and weak image-level augmentations, we employ feature augmentation that accounts for the implicit semantic distribution, adding further perturbations to the students. The proposed framework significantly increases the diversity of the training samples. Extensive experiments on public benchmarks show that our framework outperforms previous state-of-the-art (SOTA) methods under various semi-supervised settings.
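The core cross-supervision mechanism described above can be sketched in a minimal form: each branch keeps a mean teacher (an exponential moving average of its student's weights), and each teacher's hard pseudo labels on unlabeled data supervise the *other* branch's student. The toy linear "segmenter", the `sgd_step` helper, and all shapes here are illustrative assumptions, not the paper's actual architecture or loss.

```python
import numpy as np

def ema_update(teacher_w, student_w, momentum=0.99):
    """Mean-teacher update: teacher weights track an EMA of the student's."""
    return momentum * teacher_w + (1.0 - momentum) * student_w

def pseudo_label(w, x):
    """Hard pseudo labels from a toy linear classifier (argmax class score)."""
    return np.argmax(x @ w, axis=-1)

def sgd_step(w, x, y, lr=0.1):
    """One softmax cross-entropy gradient step toward the given pseudo labels."""
    logits = x @ w
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    onehot = np.eye(w.shape[1])[y]
    return w - lr * x.T @ (p - onehot) / len(x)

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 8))      # unlabeled samples (e.g. pixel features)
stu_a = rng.normal(size=(8, 3))   # student A weights, 3 classes
stu_b = rng.normal(size=(8, 3))   # student B weights
tea_a, tea_b = stu_a.copy(), stu_b.copy()

# Mutual knowledge distillation loop: each mean teacher pseudo-labels the
# unlabeled data for the other branch's student, then teachers are EMA-updated.
for _ in range(20):
    stu_a = sgd_step(stu_a, x, pseudo_label(tea_b, x))  # teacher B -> student A
    stu_b = sgd_step(stu_b, x, pseudo_label(tea_a, x))  # teacher A -> student B
    tea_a = ema_update(tea_a, stu_a)
    tea_b = ema_update(tea_b, stu_b)
```

In the full framework, the students would additionally receive strongly augmented images and feature-level perturbations while the teachers see weakly augmented inputs; the sketch omits that asymmetry for brevity.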