Unsupervised Domain Adaptation (UDA) aims to adapt models trained on a source domain to a new target domain where no labelled data is available. In this work, we investigate the problem of UDA from a synthetic, computer-generated domain to a similar but real-world domain for learning semantic segmentation. We propose a semantically consistent image-to-image translation method in combination with a consistency regularisation method for UDA. We overcome previous limitations in transferring synthetic images to real-looking images. We leverage pseudo-labels to learn a generative image-to-image translation model that receives additional feedback from semantic labels on both domains. Our method outperforms state-of-the-art approaches that combine image-to-image translation and semi-supervised learning on relevant domain adaptation benchmarks, i.e., GTA5 to Cityscapes and SYNTHIA to Cityscapes.