Contrastive learning shows great potential in unpaired image-to-image translation, but the translated results are sometimes of poor quality and the content is not preserved consistently. In this paper, we uncover that negative examples play a critical role in the performance of contrastive learning for image translation. The negative examples in previous methods are randomly sampled from patches at different positions in the source image, which is ineffective at pushing the positive examples close to the query examples. To address this issue, we present instance-wise hard Negative Example Generation for Contrastive learning in Unpaired image-to-image Translation (NEGCUT). Specifically, we train a generator to produce negative examples online. The generator is novel from two perspectives: 1) it is instance-wise, meaning that the generated examples are conditioned on the input image, and 2) it generates hard negative examples, since it is trained with an adversarial loss. With this generator, the performance of unpaired image-to-image translation is significantly improved. Experiments on three benchmark datasets demonstrate that the proposed NEGCUT framework achieves state-of-the-art performance compared to previous methods.
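For context, the patchwise contrastive objective that such methods build on can be sketched as a per-patch InfoNCE loss: a query patch from the translated image is pulled toward the positive patch at the same location in the source image and pushed away from negative patches sampled elsewhere. The following is a minimal NumPy illustration, not the authors' implementation; the temperature value and feature shapes are illustrative assumptions.

```python
import numpy as np

def patch_nce_loss(query, positive, negatives, tau=0.07):
    """Patchwise InfoNCE loss for a single query patch.

    query, positive: (d,) feature vectors, e.g. the translated patch and the
        source patch at the same spatial location.
    negatives: (n, d) features of patches sampled from other locations
        (random in prior work; generated adversarially in NEGCUT).
    tau: softmax temperature (illustrative value).
    """
    # L2-normalize so the dot product is cosine similarity.
    q = query / np.linalg.norm(query)
    pos = positive / np.linalg.norm(positive)
    negs = negatives / np.linalg.norm(negatives, axis=1, keepdims=True)

    # Logits: positive similarity first, then all negative similarities.
    logits = np.concatenate([[q @ pos], negs @ q]) / tau

    # Cross-entropy with the positive as the target class (index 0),
    # computed with a max-shift for numerical stability.
    logits = logits - logits.max()
    return -logits[0] + np.log(np.exp(logits).sum())
```

Under this loss, negatives that lie close to the query ("hard" negatives) raise the denominator and thus the loss, giving the generator in NEGCUT a stronger training signal than randomly sampled patches provide.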