Previous feature alignment methods for unsupervised domain adaptation (UDA) mostly align only global features, without considering the mismatch between class-wise features. In this work, we propose a new coarse-to-fine feature alignment method based on contrastive learning, called CFContra. It draws class-wise features closer together than coarse feature alignment or class-wise feature alignment alone, and therefore improves the model's performance to a great extent. We build it upon entropy minimization, one of the most effective UDA methods, to further improve performance. In particular, to prevent excessive memory occupation when applying a contrastive loss to semantic segmentation, we devise a new way to build and update the memory bank, which makes the algorithm more efficient and viable under limited memory. Extensive experiments show the effectiveness of our method: a model trained on the GTA5-to-Cityscapes adaptation task boosts mIoU by 3.5 points over the MinEnt algorithm. Our code will be made publicly available.
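To make the two ingredients mentioned above concrete, the following is a minimal sketch, not the authors' released code, of an entropy-minimization loss on target predictions and a class-wise contrastive loss computed against a compact per-class memory bank. The tensor shapes, feature dimension, temperature, and the momentum-based bank update are illustrative assumptions; storing one running prototype per class (rather than raw pixel features) is one plausible way to keep the bank's memory footprint small, as the abstract suggests.

```python
import torch
import torch.nn.functional as F


def entropy_minimization_loss(logits):
    """Mean pixel-wise entropy of softmax predictions; logits: (B, C, H, W)."""
    probs = F.softmax(logits, dim=1)
    log_probs = F.log_softmax(logits, dim=1)
    entropy = -(probs * log_probs).sum(dim=1)           # (B, H, W)
    return entropy.mean()


def class_contrastive_loss(features, labels, bank, temperature=0.1):
    """Pull each pixel feature toward its class prototype in the memory bank.

    features: (B, D, H, W) pixel embeddings
    labels:   (B, H, W) labels or pseudo-labels, -1 = ignore
    bank:     (C, D) one running prototype per class (assumed bank layout)
    """
    B, D, H, W = features.shape
    feats = features.permute(0, 2, 3, 1).reshape(-1, D)  # (B*H*W, D)
    labs = labels.reshape(-1)
    valid = labs >= 0
    feats = F.normalize(feats[valid], dim=1)
    labs = labs[valid]
    protos = F.normalize(bank, dim=1)                    # (C, D)
    logits = feats @ protos.t() / temperature            # (N, C)
    return F.cross_entropy(logits, labs)


@torch.no_grad()
def update_bank(bank, features, labels, momentum=0.99):
    """EMA update of each class prototype with the batch's mean class feature."""
    B, D, H, W = features.shape
    feats = features.permute(0, 2, 3, 1).reshape(-1, D)
    labs = labels.reshape(-1)
    for c in range(bank.shape[0]):
        mask = labs == c
        if mask.any():
            bank[c] = momentum * bank[c] + (1 - momentum) * feats[mask].mean(dim=0)
    return bank
```

In a training loop under these assumptions, the total objective would combine a supervised source loss with the entropy term on target images and the contrastive term on both domains, with `update_bank` called once per batch after the forward pass.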