Many recent research efforts fine-tune a pre-trained generator with a few target images to synthesize images in a novel domain. Unfortunately, these methods often suffer from overfitting or underfitting when fine-tuned with a single target image. To address this, we present a novel single-shot GAN adaptation method based on unified CLIP space manipulations. Specifically, our model employs a two-step training strategy: reference image search in the source generator via CLIP-guided latent optimization, followed by generator fine-tuning with a novel loss function that imposes CLIP space consistency between the source and adapted generators. To further encourage the adapted model to produce samples that are spatially consistent with the source generator, we also propose a contrastive regularization on patchwise relationships in the CLIP space. Experimental results show that our model generates diverse outputs with the target texture and outperforms the baseline models both qualitatively and quantitatively. Furthermore, we show that our CLIP space manipulation strategy enables more effective attribute editing.
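To make the two-step strategy concrete, the sketch below illustrates one plausible reading of the abstract, not the authors' released implementation. It assumes a StyleGAN-style generator callable as `G(w) -> image` with outputs in [-1, 1] and a 512-dimensional latent (both assumptions), uses OpenAI's `clip` package for the CLIP encoder, and interprets "CLIP space consistency" as a directional alignment loss; the patchwise contrastive regularization is omitted. The names `G_source`, `G_adapt`, `find_reference_latent`, and `consistency_loss` are hypothetical.

```python
import torch
import torch.nn.functional as F
import clip  # OpenAI CLIP: https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model = clip_model.float()          # fp32 for stable gradients
clip_model.eval().requires_grad_(False)  # the CLIP encoder stays frozen

CLIP_MEAN = torch.tensor([0.48145466, 0.4578275, 0.40821073],
                         device=device).view(1, 3, 1, 1)
CLIP_STD = torch.tensor([0.26862954, 0.26130258, 0.27577711],
                        device=device).view(1, 3, 1, 1)

def clip_embed(img):
    """L2-normalized CLIP image embedding; assumes img in [-1, 1]."""
    img = (img + 1) / 2  # map generator output to [0, 1]
    img = F.interpolate(img, size=(224, 224),
                        mode="bilinear", align_corners=False)
    feat = clip_model.encode_image((img - CLIP_MEAN) / CLIP_STD)
    return feat / feat.norm(dim=-1, keepdim=True)

# Step 1: reference image search via CLIP-guided latent optimization.
def find_reference_latent(G_source, target_img, steps=300, lr=0.05):
    # Random init for illustration; starting from the generator's mean
    # latent (if exposed) would be the usual choice. 512 dims is assumed.
    w = torch.randn(1, 512, device=device, requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    target_feat = clip_embed(target_img).detach()
    for _ in range(steps):
        loss = 1 - torch.cosine_similarity(
            clip_embed(G_source(w)), target_feat, dim=-1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()

# Step 2: fine-tune a copy of the generator with a CLIP-space consistency
# loss, written here as directional alignment: the CLIP-space shift of each
# random sample should match the reference -> target shift.
def consistency_loss(G_source, G_adapt, target_img, w_ref, w_batch):
    with torch.no_grad():
        dir_target = clip_embed(target_img) - clip_embed(G_source(w_ref))
        dir_target = dir_target / dir_target.norm(dim=-1, keepdim=True)
        src_feat = clip_embed(G_source(w_batch))
    dir_sample = clip_embed(G_adapt(w_batch)) - src_feat
    dir_sample = dir_sample / dir_sample.norm(dim=-1, keepdim=True)
    return 1 - (dir_sample * dir_target).sum(dim=-1).mean()

# Usage sketch: initialize G_adapt as a deep copy of G_source, then minimize
# consistency_loss (plus the patchwise contrastive term, omitted here) over
# G_adapt's parameters while keeping G_source frozen.
```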