Although deep neural networks have achieved remarkable results for the task of semantic segmentation, they usually fail to generalize to new domains, especially when performing synthetic-to-real adaptation. Such domain shift is particularly noticeable along class boundaries, undermining one of the main goals of semantic segmentation, namely obtaining sharp segmentation masks. In this work, we specifically address this core problem in the context of Unsupervised Domain Adaptation and present a novel low-level adaptation strategy that allows us to obtain sharp predictions. Moreover, inspired by recent self-training techniques, we introduce an effective data augmentation strategy that alleviates the noise typically present at semantic boundaries when employing pseudo-labels for self-training. Our contributions can be easily integrated into other popular adaptation frameworks, and extensive experiments show that they effectively improve performance along class boundaries.
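For readers unfamiliar with the self-training setup referenced above, the sketch below illustrates the generic pseudo-label pipeline it builds on: a teacher network produces hard labels on target-domain images, low-confidence pixels (which tend to concentrate along class boundaries) are masked out, and a student network is supervised on the remaining pixels. This is a minimal illustration, not the paper's implementation; the confidence threshold, ignore index, and function names are assumptions introduced here.

```python
# Minimal PyTorch sketch of confidence-filtered pseudo-label self-training.
# NOT the paper's method: threshold, ignore index, and helpers are illustrative.
import torch
import torch.nn.functional as F

IGNORE_INDEX = 255      # assumed ignore label, as in Cityscapes-style datasets
CONF_THRESHOLD = 0.9    # assumed per-pixel confidence threshold


def make_pseudo_labels(teacher_logits: torch.Tensor) -> torch.Tensor:
    """Turn teacher logits (B, C, H, W) into hard pseudo-labels (B, H, W),
    discarding pixels whose softmax confidence falls below the threshold."""
    probs = teacher_logits.softmax(dim=1)
    conf, labels = probs.max(dim=1)
    labels[conf < CONF_THRESHOLD] = IGNORE_INDEX  # mask noisy (often boundary) pixels
    return labels


def self_training_loss(student_logits: torch.Tensor,
                       pseudo_labels: torch.Tensor) -> torch.Tensor:
    """Standard cross-entropy on the target domain, skipping masked pixels."""
    return F.cross_entropy(student_logits, pseudo_labels,
                           ignore_index=IGNORE_INDEX)


if __name__ == "__main__":
    b, c, h, w = 2, 19, 64, 64
    # Scaled random logits stand in for a confident teacher on target images.
    teacher_logits = torch.randn(b, c, h, w) * 10
    student_logits = torch.randn(b, c, h, w, requires_grad=True)
    pl = make_pseudo_labels(teacher_logits)
    loss = self_training_loss(student_logits, pl)
    loss.backward()
    print(f"self-training loss: {loss.item():.4f}")
```

In this kind of pipeline, the masked pixels cluster where the teacher is uncertain, which is typically along object contours; the contributions summarized above target exactly this boundary region, both through low-level adaptation and through augmentation of the pseudo-labeled data.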