Unpaired image-to-image translation is a class of vision problems whose goal is to find the mapping between different image domains using unpaired training data. Cycle-consistency loss is a widely used constraint for such problems. However, because it imposes a strict pixel-level constraint, it cannot perform geometric changes, remove large objects, or ignore irrelevant texture. In this paper, we propose a novel adversarial-consistency loss for image-to-image translation. This loss does not require the translated image to be translated back to a specific source image; instead, it encourages translated images to retain the important features of their source images, overcoming the drawbacks of cycle-consistency loss noted above. Our method achieves state-of-the-art results on three challenging tasks: glasses removal, male-to-female translation, and selfie-to-anime translation.
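For contrast, here is a minimal sketch of the standard cycle-consistency loss that the proposed loss relaxes, written in CycleGAN-style notation; the generators G: X -> Y and F: Y -> X and the domains X, Y are illustrative assumptions, not this paper's notation:

% Cycle-consistency loss (illustrative, CycleGAN-style; not this paper's
% adversarial-consistency loss). The L1 terms force F(G(x)) to match x
% pixel by pixel, which is exactly the strict constraint that blocks
% geometric changes and large-object removal.
\mathcal{L}_{\mathrm{cyc}}(G, F) =
    \mathbb{E}_{x \sim p_{\mathrm{data}}(x)} \big[ \lVert F(G(x)) - x \rVert_1 \big]
  + \mathbb{E}_{y \sim p_{\mathrm{data}}(y)} \big[ \lVert G(F(y)) - y \rVert_1 \big]

The adversarial-consistency loss proposed here drops this exact pixel-level reconstruction requirement: the back-translated image need not equal the source image, only preserve its important features.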