Learning to generate new images for a novel category based on only a few images, termed few-shot image generation, has attracted increasing research interest. Several state-of-the-art works have yielded impressive results, but the diversity of the generated images remains limited. In this work, we propose a novel Delta Generative Adversarial Network (DeltaGAN), which consists of a reconstruction subnetwork and a generation subnetwork. The reconstruction subnetwork captures the intra-category transformation, i.e., the delta, between same-category image pairs. The generation subnetwork produces a sample-specific delta for an input image, which is combined with the input image to generate a new image within the same category. In addition, an adversarial delta matching loss is designed to link the two subnetworks together. Extensive experiments on six benchmark datasets demonstrate the effectiveness of our proposed method. Our code is available at https://github.com/bcmi/DeltaGAN-Few-Shot-Image-Generation.
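To make the two-subnetwork design concrete, here is a minimal PyTorch sketch based solely on the description above. The encoder/decoder layouts, the 128-dimensional delta code, and the way the delta is combined with the image feature are all illustrative assumptions, not the released implementation.

```python
# Minimal sketch of the DeltaGAN two-subnetwork idea, assuming 128x128 RGB
# images and a 128-d delta/noise code. All module shapes are assumptions
# for illustration; see the linked repository for the authors' architecture.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Convolutional encoder mapping an image to a feature vector."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, dim),
        )
    def forward(self, x):
        return self.net(x)

class ReconstructionSubnet(nn.Module):
    """Captures the intra-category transformation (delta) between a
    same-category pair (x1, x2)."""
    def __init__(self, dim=128):
        super().__init__()
        self.enc = Encoder(dim)
        self.to_delta = nn.Linear(2 * dim, dim)
    def forward(self, x1, x2):
        f1, f2 = self.enc(x1), self.enc(x2)
        # "Real" delta extracted from an actual same-category pair.
        return self.to_delta(torch.cat([f1, f2], dim=1))

class GenerationSubnet(nn.Module):
    """Produces a sample-specific delta for one input image from random
    noise, then combines it with the image feature to synthesize a new
    image of the same category."""
    def __init__(self, dim=128):
        super().__init__()
        self.enc = Encoder(dim)
        self.delta_gen = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.decoder = nn.Sequential(  # feature + delta -> 128x128 image
            nn.Linear(2 * dim, 128 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (128, 16, 16)),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),
        )
    def forward(self, x, z):
        f = self.enc(x)
        fake_delta = self.delta_gen(torch.cat([f, z], dim=1))
        x_new = self.decoder(torch.cat([f, fake_delta], dim=1))
        return x_new, fake_delta

# The adversarial delta matching loss (not shown) would train a
# discriminator to tell real deltas from fake ones, linking the two
# subnetworks: the generation subnetwork learns to produce deltas that
# match the distribution captured by the reconstruction subnetwork.
recon, gen = ReconstructionSubnet(), GenerationSubnet()
x1, x2 = torch.randn(4, 3, 128, 128), torch.randn(4, 3, 128, 128)
real_delta = recon(x1, x2)                       # delta from a real pair
x_new, fake_delta = gen(x1, torch.randn(4, 128)) # new image + its delta
```

At test time, only the generation subnetwork is needed: sampling different noise vectors `z` for the same conditional image yields different deltas and hence diverse new images for the novel category.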