Unsupervised domain adaptation (UDA) aims to adapt existing models from a source domain to a new target domain using only unlabeled data. Most existing methods suffer from noticeable negative transfer caused by either an error-prone discriminator network or an unreliable teacher model. In addition, local regional consistency in UDA has been largely neglected, and extracting only global-level pattern information is not powerful enough for feature alignment due to the misuse of context. To this end, we propose an uncertainty-aware consistency regularization method for cross-domain semantic segmentation. First, we introduce an uncertainty-guided consistency loss with a dynamic weighting scheme that exploits the latent uncertainty information of the target samples, so that more meaningful and reliable knowledge from the teacher model can be transferred to the student model. We further reveal why current consistency regularization is often unstable in minimizing the domain discrepancy. Moreover, we design a ClassDrop mask generation algorithm to produce strong class-wise perturbations and, guided by this mask, propose a ClassOut strategy that realizes effective regional consistency in a fine-grained manner. Experiments demonstrate that our method outperforms state-of-the-art methods on four domain adaptation benchmarks, i.e., GTAV $\rightarrow$ Cityscapes, SYNTHIA $\rightarrow$ Cityscapes, Virtual KITTI $\rightarrow$ KITTI, and Cityscapes $\rightarrow$ KITTI.
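The abstract does not give the exact formulation of the uncertainty-guided consistency loss, but the idea of down-weighting consistency on uncertain target pixels can be sketched as follows. This is a minimal illustration, assuming the teacher's predictive entropy is used as the uncertainty measure and a mean-squared consistency term; the function name, shapes, and weighting rule are assumptions, not the paper's actual implementation.

```python
import numpy as np

def uncertainty_weighted_consistency(teacher_probs, student_probs, eps=1e-8):
    """Per-pixel teacher-student consistency, weighted by teacher confidence.

    teacher_probs, student_probs: arrays of shape (C, H, W) holding class
    probabilities. Pixels where the teacher is uncertain (high predictive
    entropy) receive a small weight, so unreliable pseudo-supervision
    contributes little to the loss.
    """
    num_classes = teacher_probs.shape[0]
    # Predictive entropy of the teacher, normalized to [0, 1].
    entropy = -np.sum(teacher_probs * np.log(teacher_probs + eps), axis=0)
    entropy /= np.log(num_classes)
    # Dynamic per-pixel weight: confident pixels get weight close to 1.
    weight = 1.0 - entropy
    # Mean-squared consistency between teacher and student predictions.
    sq_err = np.sum((teacher_probs - student_probs) ** 2, axis=0)
    return np.mean(weight * sq_err)
```

With this weighting, a pixel where the teacher outputs a uniform (maximally uncertain) distribution contributes essentially nothing, while a confidently predicted pixel contributes its full teacher-student discrepancy.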