This paper describes a method of domain adaptive training for semantic segmentation using multiple source datasets that are not necessarily relevant to the target dataset. We propose a soft pseudo-label generation method that integrates the predicted object probabilities from multiple source models. The prediction of each source model is weighted according to the estimated domain similarity between the source and the target datasets, so that a model trained on a source more similar to the target contributes more strongly and the resulting pseudo-labels are more reliable. We also propose a training method that uses the soft pseudo-labels while accounting for their entropy, fully exploiting the information from the source datasets while suppressing the influence of possibly misclassified pixels. The experiments show comparable or better performance than our previous work and another existing multi-source domain adaptation method, as well as applicability to a variety of target environments.
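A minimal sketch of the two ideas summarized above, assuming PyTorch-style source models and placeholder names (`source_models`, `similarities`, `entropy_weighted_loss`) that are not taken from the paper: soft pseudo-labels are formed as a domain-similarity-weighted average of per-pixel class probabilities, and the training loss down-weights high-entropy (uncertain) pixels.

```python
# Hedged illustration only, not the authors' implementation.
import torch
import torch.nn.functional as F

def soft_pseudo_labels(source_models, similarities, images):
    """Weighted average of per-pixel class probabilities from multiple source models.

    `similarities` is a list of estimated source-to-target domain similarities
    (an assumed input; the paper estimates these between datasets).
    """
    weights = torch.softmax(torch.tensor(similarities, dtype=torch.float), dim=0)
    probs = None
    for w, model in zip(weights, source_models):
        with torch.no_grad():
            p = F.softmax(model(images), dim=1)  # (B, C, H, W) class probabilities
        probs = w * p if probs is None else probs + w * p
    return probs  # soft pseudo-labels; sums to 1 over the class dimension

def entropy_weighted_loss(logits, soft_labels, eps=1e-8):
    """Cross-entropy against soft pseudo-labels, scaled down for high-entropy pixels."""
    num_classes = soft_labels.shape[1]
    entropy = -(soft_labels * (soft_labels + eps).log()).sum(dim=1)            # (B, H, W)
    confidence = 1.0 - entropy / torch.log(torch.tensor(float(num_classes)))   # in [0, 1]
    log_pred = F.log_softmax(logits, dim=1)
    ce = -(soft_labels * log_pred).sum(dim=1)                                   # per-pixel CE
    return (confidence * ce).mean()
```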