Learning to generate new images for a novel category based on only a few images, termed few-shot image generation, has attracted increasing research interest. Several state-of-the-art works have yielded impressive results, but the diversity of the generated images is still limited. In this work, we propose a novel Delta Generative Adversarial Network (DeltaGAN), which consists of a reconstruction subnetwork and a generation subnetwork. The reconstruction subnetwork captures the intra-category transformation, i.e., the "delta", between same-category pairs. The generation subnetwork generates a sample-specific "delta" for an input image, which is combined with this input image to generate a new image within the same category. In addition, an adversarial delta matching loss is designed to link the above two subnetworks together. Extensive experiments on five few-shot image datasets demonstrate the effectiveness of our proposed method.
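To make the described data flow concrete, the following is a minimal conceptual sketch of the two subnetworks and how a "delta" is combined with an input image. All module names, layer sizes, the use of feature vectors in place of images, and the way the delta is fused with the input are illustrative assumptions; they are not the authors' architecture, and the adversarial delta matching loss is only indicated in a comment.

```python
# Conceptual sketch of the DeltaGAN data flow described in the abstract.
# Module names, dimensions, and fusion choices below are assumptions for
# illustration only, not the paper's implementation.
import torch
import torch.nn as nn

class DeltaEncoder(nn.Module):
    """Reconstruction subnetwork: captures the intra-category transformation
    ("delta") between a same-category image pair."""
    def __init__(self, feat_dim=128, delta_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * feat_dim, 256), nn.ReLU(),
            nn.Linear(256, delta_dim),
        )

    def forward(self, feat_a, feat_b):
        # Encode the transformation between two same-category samples.
        return self.net(torch.cat([feat_a, feat_b], dim=-1))

class DeltaGenerator(nn.Module):
    """Generation subnetwork: produces a sample-specific delta for one input
    image from random noise, so different noises yield diverse deltas."""
    def __init__(self, feat_dim=128, noise_dim=32, delta_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + noise_dim, 256), nn.ReLU(),
            nn.Linear(256, delta_dim),
        )

    def forward(self, feat, noise):
        return self.net(torch.cat([feat, noise], dim=-1))

class Decoder(nn.Module):
    """Combines an input-image feature with a delta to produce the feature of a
    new same-category sample (actual image synthesis is omitted here)."""
    def __init__(self, feat_dim=128, delta_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + delta_dim, 256), nn.ReLU(),
            nn.Linear(256, feat_dim),
        )

    def forward(self, feat, delta):
        return self.net(torch.cat([feat, delta], dim=-1))

# Toy forward pass with feature vectors standing in for encoded images.
feat_a, feat_b = torch.randn(4, 128), torch.randn(4, 128)  # same-category pair
noise = torch.randn(4, 32)

delta_enc, delta_gen, decoder = DeltaEncoder(), DeltaGenerator(), Decoder()

real_delta = delta_enc(feat_a, feat_b)   # reconstruction subnetwork
fake_delta = delta_gen(feat_a, noise)    # generation subnetwork
new_feat = decoder(feat_a, fake_delta)   # new sample within the same category
# An adversarial delta matching loss (a discriminator over deltas) would push
# fake_delta toward the distribution of real_delta, linking the two subnetworks.
```

Sampling different noise vectors for the same input image would then yield different deltas, which is the mechanism the abstract credits for improved diversity.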