This work presents a novel approach for semi-supervised semantic segmentation. The key element of this approach is our contrastive learning module, which forces the segmentation network to yield similar pixel-level feature representations for same-class samples across the whole dataset. To achieve this, we maintain a memory bank that is continuously updated with relevant, high-quality feature vectors from labeled data. During end-to-end training, the features from both labeled and unlabeled data are optimized to be similar to same-class samples from the memory bank. Our approach outperforms the current state of the art in semi-supervised semantic segmentation and semi-supervised domain adaptation on well-known public benchmarks, with larger improvements in the most challenging scenarios, i.e., with less available labeled data. Code: https://github.com/Shathe/SemiSeg-Contrastive
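The core idea, a class-wise memory bank of labeled features toward which pixel features are pulled, can be sketched as follows. This is a minimal illustration, not the paper's actual loss or implementation: the function name, the simplified per-pixel cosine-similarity objective, and the dict-based memory bank are assumptions for exposition.

```python
import numpy as np

def contrastive_memory_loss(features, labels, memory_bank, temperature=0.1):
    """Toy sketch (not the paper's exact loss): pull each L2-normalized
    pixel feature toward the same-class entries stored in the memory bank.

    features:    (N, D) array of L2-normalized pixel features
                 (from labeled or pseudo-labeled unlabeled pixels)
    labels:      (N,) class ids for each pixel
    memory_bank: dict mapping class id -> (M, D) array of L2-normalized
                 high-quality feature vectors extracted from labeled data
    """
    losses = []
    for f, c in zip(features, labels):
        bank = memory_bank[c]                 # same-class memory entries, (M, D)
        sims = (bank @ f) / temperature       # cosine similarities (all unit norm)
        losses.append(-float(np.mean(sims)))  # maximize similarity to the bank
    return float(np.mean(losses))

# Tiny usage example with hypothetical 2-D features and a 2-class bank.
bank = {0: np.array([[1.0, 0.0]]), 1: np.array([[0.0, 1.0]])}
aligned = contrastive_memory_loss(np.array([[1.0, 0.0]]), np.array([0]), bank)
orthogonal = contrastive_memory_loss(np.array([[0.0, 1.0]]), np.array([0]), bank)
```

A feature aligned with its class's memory entries yields a lower loss than an orthogonal one, which is the signal that drives both labeled and unlabeled pixels toward the dataset-wide class representations.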