Synthetic data is widely used in deep-learning-based computer vision tasks. The limited performance of algorithms trained solely on synthetic data has been addressed with domain adaptation techniques, such as those based on the generative adversarial framework. We demonstrate that adversarial training alone can introduce semantic inconsistencies into translated images. To tackle this issue, we propose a density prematching strategy that uses a KLIEP-based density ratio estimation procedure. Finally, we show that this strategy improves the quality of the translated images produced by the underlying method, as well as their usability for the semantic segmentation task in the context of autonomous driving.
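For intuition about the KLIEP component mentioned above, the following is a minimal NumPy sketch of KLIEP-style density ratio estimation (not the paper's implementation): the ratio w(x) = p_target(x) / p_source(x) is modeled as a non-negative combination of Gaussian kernels centered on target samples, fitted by projected gradient ascent on the target log-likelihood under the constraint that the ratio averages to one over the source samples. All function names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(X, C, sigma):
    # Pairwise Gaussian kernel values between rows of X and centers C.
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kliep(x_src, x_tgt, sigma=1.0, n_iter=200, lr=1e-3):
    """Illustrative KLIEP: estimate w(x) = p_tgt(x) / p_src(x).

    The model is w(x) = sum_l alpha_l * K(x, c_l) with kernel centers
    c_l taken at the target samples. We maximize the mean log-ratio on
    target samples subject to alpha >= 0 and mean ratio 1 on the source.
    """
    A = gaussian_kernel(x_tgt, x_tgt, sigma)          # kernels at target points
    b = gaussian_kernel(x_src, x_tgt, sigma).mean(0)  # constraint vector
    alpha = np.ones(len(x_tgt)) / len(x_tgt)
    for _ in range(n_iter):
        # Gradient of sum(log(A @ alpha)) with respect to alpha.
        alpha = alpha + lr * (A.T @ (1.0 / (A @ alpha)))
        alpha = np.maximum(alpha, 0.0)   # project onto alpha >= 0
        alpha = alpha / (b @ alpha)      # enforce mean ratio 1 on source samples
    return lambda X: gaussian_kernel(X, x_tgt, sigma) @ alpha
```

For example, with source samples drawn from N(0, 1) and target samples from N(0.5, 1), the fitted ratio is larger near the target mode than in regions the target rarely visits, which is the signal the prematching strategy can exploit.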