Recent studies on semi-supervised semantic segmentation (SSS) have seen fast progress. Despite their promising performance, current state-of-the-art methods tend toward increasingly complex designs at the cost of introducing more network components and additional training procedures. In contrast, in this work we follow a standard teacher-student framework and propose AugSeg, a simple and clean approach that focuses mainly on data perturbations to boost SSS performance. We argue that various data augmentations should be adjusted to better suit the semi-supervised scenario rather than being applied directly as in supervised learning. Specifically, we adopt a simplified intensity-based augmentation that selects a random number of data transformations, with distortion strengths sampled uniformly from a continuous space. Based on the model's estimated confidence on different unlabeled samples, we also randomly inject labeled information to augment the unlabeled samples in an adaptive manner. Without bells and whistles, our simple AugSeg readily achieves new state-of-the-art performance on SSS benchmarks under different partition protocols.
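To make the two augmentation ideas concrete, below is a minimal Python sketch of (a) an intensity-based augmentation that applies a random number of photometric transforms with strengths drawn uniformly from a continuous range, and (b) a confidence-adaptive injection of labeled content into an unlabeled sample via a CutMix-style paste. The operation pool, strength ranges, and mixing rule are illustrative assumptions for this sketch, not the authors' exact implementation.

```python
import random
import numpy as np
from PIL import Image, ImageEnhance, ImageFilter

# Illustrative pool of intensity-only ops; each maps (image, strength in [0, 1]) -> image.
# Ranges below are assumptions for the sketch, not the paper's exact configuration.
def _brightness(img, s): return ImageEnhance.Brightness(img).enhance(0.5 + s)  # factor in [0.5, 1.5]
def _contrast(img, s):   return ImageEnhance.Contrast(img).enhance(0.5 + s)
def _color(img, s):      return ImageEnhance.Color(img).enhance(0.5 + s)
def _sharpness(img, s):  return ImageEnhance.Sharpness(img).enhance(0.5 + s)
def _blur(img, s):       return img.filter(ImageFilter.GaussianBlur(radius=2.0 * s))

OPS = [_brightness, _contrast, _color, _sharpness, _blur]

def random_intensity_aug(img: Image.Image, max_ops: int = 3) -> Image.Image:
    """Apply a random number of intensity transforms, each with a strength
    sampled uniformly from a continuous range; geometry and labels are untouched."""
    k = random.randint(1, max_ops)               # random number of transforms
    for op in random.sample(OPS, k):             # each op used at most once per pass
        img = op(img, random.uniform(0.0, 1.0))  # continuous distortion strength
    return img

def adaptive_label_inject(unlab_img, unlab_pseudo, lab_img, lab_mask, confidence):
    """Paste a random region from a labeled sample (image + ground-truth mask)
    into an unlabeled sample (image + pseudo-label). Lower model confidence on
    the unlabeled image leads to a larger pasted region (a simplifying assumption
    standing in for the paper's adaptive mixing rule). Inputs are H x W (x C) arrays."""
    h, w = unlab_img.shape[:2]
    ratio = float(np.clip(1.0 - confidence, 0.1, 0.9))  # more injection when less confident
    bh, bw = int(h * ratio), int(w * ratio)
    y = np.random.randint(0, h - bh + 1)
    x = np.random.randint(0, w - bw + 1)
    mixed_img, mixed_lbl = unlab_img.copy(), unlab_pseudo.copy()
    mixed_img[y:y + bh, x:x + bw] = lab_img[y:y + bh, x:x + bw]
    mixed_lbl[y:y + bh, x:x + bw] = lab_mask[y:y + bh, x:x + bw]
    return mixed_img, mixed_lbl
```

In a teacher-student setup, the intensity augmentation would typically perturb the student's view of an unlabeled image while the teacher sees a weakly augmented view, and the label-injection step would be applied per unlabeled sample using the teacher's confidence estimate.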