In this work we address multi-target domain adaptation (MTDA) in semantic segmentation, which consists of adapting a single model from an annotated source dataset to multiple unannotated target datasets that differ in their underlying data distributions. To address MTDA, we propose a self-training strategy that employs pseudo-labels to induce cooperation among multiple domain-specific classifiers. We employ feature stylization as an efficient way to generate the image views that form an integral part of self-training. Additionally, to prevent the network from overfitting to noisy pseudo-labels, we devise a rectification strategy that leverages the predictions from different classifiers to estimate the quality of pseudo-labels. Our extensive experiments on numerous settings, based on four different semantic segmentation datasets, validate the effectiveness of the proposed self-training strategy and show that our method outperforms state-of-the-art MTDA approaches. Code available at: https://github.com/Mael-zys/CoaST
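To make the rectification idea concrete, the following is a minimal toy sketch, not the paper's exact formulation: pseudo-labels for a target image are kept only where two domain-specific classifiers agree and are jointly confident, and uncertain pixels are masked out of the self-training loss. The function name, the `conf_thresh` parameter, and the `IGNORE` value are illustrative assumptions, not from the paper.

```python
import numpy as np

IGNORE = 255  # label value excluded from the self-training loss (assumed convention)

def rectified_pseudo_labels(probs_a, probs_b, conf_thresh=0.9):
    """Toy pseudo-label rectification via classifier agreement.

    probs_a, probs_b: (C, H, W) softmax outputs from two domain-specific
    classifiers for the same target image.
    Returns an (H, W) integer label map; pixels where the classifiers
    disagree, or where their mean confidence is low, are set to IGNORE.
    """
    labels_a = probs_a.argmax(axis=0)
    labels_b = probs_b.argmax(axis=0)
    # Mean confidence of each classifier's top prediction.
    conf = (probs_a.max(axis=0) + probs_b.max(axis=0)) / 2.0
    keep = (labels_a == labels_b) & (conf >= conf_thresh)
    return np.where(keep, labels_a, IGNORE)
```

A standard cross-entropy loss with `ignore_index=IGNORE` would then train on the retained pixels only, so the network is not pushed toward labels the classifiers themselves are unsure about.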