Unsupervised Domain Adaptation (UDA) aims to adapt a model trained on a labeled source domain to an unlabeled target domain. In this paper, we present Prototypical Contrast Adaptation (ProCA), a simple and efficient contrastive learning method for unsupervised domain adaptive semantic segmentation. Previous domain adaptation methods merely consider the alignment of intra-class representational distributions across domains, while the inter-class structural relationship is insufficiently explored; as a result, the aligned representations on the target domain may no longer be as easily discriminated as those on the source domain. Instead, ProCA incorporates inter-class information into class-wise prototypes and adopts class-centered distribution alignment for adaptation. By treating prototypes of the same class as positives and prototypes of other classes as negatives to achieve class-centered distribution alignment, ProCA achieves state-of-the-art performance on classical domain adaptation tasks, {\em i.e.}, GTA5 $\to$ Cityscapes and SYNTHIA $\to$ Cityscapes. Code is available at \href{https://github.com/jiangzhengkai/ProCA}{ProCA}.
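To make the class-centered contrastive alignment concrete, the following is a minimal sketch of a prototypical contrastive objective, assuming a standard InfoNCE-style formulation; the notation here ($f_i$ for a target-pixel feature of class $c$, $\mu_{c}$ for the class-$c$ prototype, $\tau$ for a temperature, $C$ for the number of classes) is illustrative and not taken verbatim from the paper:
\[
\mathcal{L}_{\mathrm{contrast}} = -\log \frac{\exp\!\left(\langle f_i, \mu_{c} \rangle / \tau\right)}{\sum_{c'=1}^{C} \exp\!\left(\langle f_i, \mu_{c'} \rangle / \tau\right)},
\]
where the same-class prototype $\mu_{c}$ acts as the positive and the prototypes of the remaining $C-1$ classes act as negatives, pulling each feature toward its own class center while pushing it away from the centers of all other classes.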