Deep networks trained on a source domain suffer degraded performance when tested on unseen target-domain data. To improve generalization, most existing domain generalization methods learn domain-invariant features by suppressing domain-sensitive ones. In contrast, we propose a Domain Projection and Contrastive Learning (DPCL) approach for generalized semantic segmentation, which comprises two modules: Self-supervised Source Domain Projection (SSDP) and Multi-Level Contrastive Learning (MLCL). SSDP reduces the domain gap by projecting input data to the source domain, while MLCL is a learning scheme that learns discriminative and generalizable features on the projected data. At test time, we first project the target data with SSDP to mitigate domain shift, then produce segmentation results with the segmentation network trained via MLCL; we can further update the projected data by minimizing our proposed pixel-to-pixel contrastive loss to obtain better results. Extensive semantic segmentation experiments on benchmark datasets demonstrate the favorable generalization capability of our method.
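To make the test-time pipeline concrete, below is a minimal PyTorch-style sketch: a target image is first projected toward the source domain, then the projected input is refined by gradient descent on a pixel-to-pixel contrastive loss before the final prediction. All names (`ssdp`, `segmenter`, `pixel_contrastive_loss`) and the specific loss form (a supervised InfoNCE over a random subset of pixel embeddings, with pseudo-labels from the current prediction) are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def pixel_contrastive_loss(feats, labels, tau=0.1):
    """Pixel-to-pixel supervised InfoNCE loss (illustrative form).

    feats:  (N, C) pixel embeddings; labels: (N,) class assignments.
    Pixels sharing a label are treated as positives for each other.
    """
    feats = F.normalize(feats, dim=1)
    sim = feats @ feats.t() / tau                       # (N, N) similarities
    self_mask = torch.eye(len(feats), dtype=torch.bool, device=feats.device)
    sim = sim.masked_fill(self_mask, -1e9)              # exclude self-pairs
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    denom = pos.sum(1).clamp(min=1)                     # positives per anchor
    return -((log_prob * pos).sum(1) / denom).mean()

def test_time_refine(x_target, ssdp, segmenter, steps=5, lr=1e-2, n_pix=512):
    """Project a target image with SSDP, then refine the projection by
    minimizing the pixel contrastive loss (hypothetical interfaces:
    `segmenter` is assumed to return per-pixel logits and embeddings)."""
    x_proj = ssdp(x_target).detach().requires_grad_(True)
    opt = torch.optim.Adam([x_proj], lr=lr)
    for _ in range(steps):
        logits, feats = segmenter(x_proj)
        pseudo = logits.argmax(1)                       # pseudo-labels
        c = feats.shape[1]
        f = feats.permute(0, 2, 3, 1).reshape(-1, c)
        y = pseudo.reshape(-1)
        # sample pixels so the (N, N) similarity matrix stays small
        idx = torch.randperm(len(y), device=y.device)[:n_pix]
        loss = pixel_contrastive_loss(f[idx], y[idx])
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return segmenter(x_proj)[0].argmax(1)           # final segmentation map
```

Note that only the projected input `x_proj` is updated at test time; the segmentation network's weights stay fixed, which keeps the refinement cheap and avoids drifting the trained model.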