Unsupervised domain adaptation (UDA) aims to adapt existing models from a source domain to a new target domain using only unlabeled data. Many adversarial UDA methods suffer from unstable training and require careful tuning of the optimization procedure. Some non-adversarial UDA methods instead impose a consistency regularization on the target predictions of a student model and a teacher model under different perturbations, where the teacher shares the same architecture as the student and is updated as the exponential moving average of the student's weights. However, these methods suffer from noticeable negative transfer, caused either by an error-prone discriminator network or by an unreliable teacher model. In this paper, we propose an uncertainty-aware consistency regularization method for cross-domain semantic segmentation. By exploiting the latent uncertainty information of the target samples, more meaningful and reliable knowledge can be transferred from the teacher model to the student model. In addition, we reveal why the current consistency regularization is often unstable in minimizing the distribution discrepancy, and show that our method effectively eases this issue by mining the most reliable and meaningful samples with a dynamic weighting scheme for the consistency loss. Experiments demonstrate that the proposed method outperforms state-of-the-art methods on two domain adaptation benchmarks, \emph{i.e.}, GTAV $\rightarrow$ Cityscapes and SYNTHIA $\rightarrow$ Cityscapes.
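The two mechanisms the abstract describes, a teacher updated as the exponential moving average (EMA) of the student and a consistency loss whose per-pixel weight depends on the teacher's uncertainty, can be sketched as follows. This is a minimal illustration only: the function names and the use of normalized predictive entropy as the uncertainty measure are our assumptions, not the paper's exact formulation.

```python
import numpy as np

def ema_update(teacher_params, student_params, alpha=0.999):
    # Teacher weights track the exponential moving average of the
    # student weights: t <- alpha * t + (1 - alpha) * s.
    return [alpha * t + (1.0 - alpha) * s
            for t, s in zip(teacher_params, student_params)]

def uncertainty_weighted_consistency(student_probs, teacher_probs, num_classes):
    # student_probs, teacher_probs: arrays of shape (..., num_classes)
    # holding per-pixel class probabilities under different perturbations.
    # Predictive entropy of the teacher, normalized to [0, 1] by log(C),
    # serves as the (assumed) uncertainty measure.
    entropy = -np.sum(teacher_probs * np.log(teacher_probs + 1e-8), axis=-1)
    weight = 1.0 - entropy / np.log(num_classes)  # confident pixels weigh more
    # Mean-squared consistency between student and teacher predictions,
    # dynamically reweighted per pixel.
    mse = np.mean((student_probs - teacher_probs) ** 2, axis=-1)
    return float(np.mean(weight * mse))
```

Under this scheme, pixels where the teacher is near-uniform (maximally uncertain) contribute almost nothing to the consistency loss, while confident teacher predictions dominate the transfer, which is one way to realize the "dynamic weighting" described above.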