Recently, semi-supervised semantic segmentation has achieved promising performance with only a small fraction of labeled data. However, most existing studies treat all unlabeled data equally and barely consider the differences and training difficulties among unlabeled instances. Differentiating unlabeled instances can promote instance-specific supervision that adapts dynamically to the model's evolution. In this paper, we emphasize the cruciality of instance differences and propose instance-specific and model-adaptive supervision for semi-supervised semantic segmentation, named iMAS. Relying on the model's performance, iMAS employs a class-weighted symmetric intersection-over-union to evaluate the quantitative hardness of each unlabeled instance and supervises the training on unlabeled data in a model-adaptive manner. Specifically, iMAS learns from unlabeled instances progressively by weighting their corresponding consistency losses according to the evaluated hardness. Besides, iMAS dynamically adjusts the augmentation for each instance so that the distortion degree of augmented instances is adapted to the model's generalization capability across the training course. Without integrating additional losses or training procedures, iMAS obtains remarkable performance gains over current state-of-the-art approaches on segmentation benchmarks under different semi-supervised partition protocols.
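To make the hardness evaluation concrete, below is a minimal, illustrative sketch of a class-weighted symmetric intersection-over-union computed between two hard label maps (e.g., the teacher's and student's argmax predictions on the same weakly augmented unlabeled image). The frequency-based class weighting and the symmetrization by averaging both weightings are assumptions for illustration, not the paper's exact formulation; the function name and signature are hypothetical.

```python
import torch


def class_weighted_symmetric_iou(pred_a, pred_b, num_classes, eps=1e-6):
    """Per-instance agreement score between two hard label maps.

    pred_a, pred_b: (B, H, W) integer label maps, e.g. argmax of the
    teacher's and student's predictions on the same weakly augmented
    unlabeled image. Returns a (B,) tensor of scores in [0, 1], where a
    higher score means stronger agreement (an "easier" instance).
    """
    B = pred_a.shape[0]
    scores = torch.zeros(B, device=pred_a.device)
    for b in range(B):
        ious, freq_a, freq_b = [], [], []
        for c in range(num_classes):
            mask_a = pred_a[b] == c
            mask_b = pred_b[b] == c
            union = (mask_a | mask_b).sum().float()
            if union < eps:
                continue  # class absent from both maps; skip it
            inter = (mask_a & mask_b).sum().float()
            ious.append(inter / union)
            freq_a.append(mask_a.sum().float())
            freq_b.append(mask_b.sum().float())
        if not ious:
            continue
        ious = torch.stack(ious)
        w_a = torch.stack(freq_a)
        w_a = w_a / w_a.sum().clamp(min=eps)
        w_b = torch.stack(freq_b)
        w_b = w_b / w_b.sum().clamp(min=eps)
        # "symmetric": average the class-weighted IoU under both weightings
        scores[b] = 0.5 * ((w_a * ious).sum() + (w_b * ious).sum())
    return scores
```

In the spirit of the abstract, such a per-instance score could then weight each instance's consistency loss and modulate the strength of its strong augmentation (e.g., intensity-based distortions or mixing probability), so that supervision adapts to the model's current capability; the precise mappings from the score to loss weights and augmentation parameters are specified in the paper itself.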