Image-to-image (I2I) translation methods based on generative adversarial networks (GANs) typically suffer from overfitting when limited training data is available. In this work, we propose a data augmentation method (ReMix) to tackle this issue. We interpolate training samples at the feature level and propose a novel content loss based on the perceptual relations among samples. The generator learns to translate the in-between samples rather than memorizing the training set, thereby forcing the discriminator to generalize. The proposed approach effectively reduces the ambiguity of generation and produces content-preserving results. The ReMix method can be easily incorporated into existing GAN models with minor modifications. Experimental results on numerous tasks demonstrate that GAN models equipped with the ReMix method achieve significant improvements.
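To make the idea concrete, below is a minimal PyTorch sketch of what a ReMix-style training step could look like. It assumes the generator is split into an `encoder` and a `decoder`, and that `perceptual` is a fixed feature extractor (e.g. a pretrained VGG); the function name, the Beta-sampled mixing ratio, and the exact ratio-matching form of the content loss are illustrative assumptions, not the paper's precise formulation.

```python
import torch
import torch.nn.functional as F
from torch.distributions import Beta

def remix_step(encoder, decoder, perceptual, x_i, x_j, alpha=1.0):
    """Hedged sketch of one ReMix-style augmentation step.

    encoder/decoder: the two halves of the generator (assumed split).
    perceptual: a fixed feature extractor used to measure perceptual
    distances; its exact choice is an assumption of this sketch.
    """
    # Sample a mixing ratio, as in mixup-style augmentation.
    lam = Beta(alpha, alpha).sample().to(x_i.device)

    # Interpolate at the feature level rather than in pixel space.
    f_i, f_j = encoder(x_i), encoder(x_j)
    f_mix = lam * f_i + (1.0 - lam) * f_j

    # Translate the in-between sample; the discriminator would see
    # y_mix, so it cannot simply memorize the finite training set.
    y_mix = decoder(f_mix)

    # Content loss from perceptual *relations* among samples: the
    # translated in-between result should sit between the translations
    # of the two endpoints in proportion to the mixing ratio.
    with torch.no_grad():
        y_i, y_j = decoder(f_i), decoder(f_j)
    d_i = F.l1_loss(perceptual(y_mix), perceptual(y_i))
    d_j = F.l1_loss(perceptual(y_mix), perceptual(y_j))
    # The relative distance to y_i should shrink as lam grows.
    content_loss = torch.abs(d_i / (d_i + d_j + 1e-8) - (1.0 - lam))

    return y_mix, content_loss
```

The returned `y_mix` would be fed to the discriminator alongside the usual adversarial loss, with `content_loss` added as a regularizer; treating the endpoint translations as fixed targets (via `no_grad`) is a design choice of this sketch to keep the relational constraint one-directional.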