Image-to-image translation (i2i) networks suffer from entanglement effects in the presence of physics-related phenomena in the target domain (such as occlusions, fog, etc.), which lowers translation quality and variability. In this paper, we present a comprehensive method for disentangling physics-based traits in the translation, guiding the learning process with either neural or physical models. For the latter, we integrate adversarial estimation and genetic algorithms to correctly achieve disentanglement. The results show that our approach dramatically increases performance in many challenging image translation scenarios.