Large-scale text-to-image generation models have achieved remarkable progress in synthesizing high-quality, feature-rich, high-resolution images guided by text. However, these models often struggle with novel concepts, e.g., new styles, object entities, etc. Although recent attempts have employed fine-tuning or prompt-tuning strategies to teach the pre-trained diffusion model novel concepts from a reference image set, they suffer from overfitting to the given reference images, particularly in one-shot applications, which is detrimental to generating diverse and high-quality images while maintaining generation controllability. To tackle this challenge, we present a simple yet effective method called DreamArtist, which employs a positive-negative prompt-tuning learning strategy. Specifically, DreamArtist incorporates both positive and negative embeddings and jointly trains them. The positive embedding aggressively captures the salient characteristics of the reference image to drive diversified generation, while the negative embedding rectifies inadequacies of the positive embedding. The model thus learns not only what is correct, but also what should be avoided or improved. We have conducted extensive experiments and evaluated the proposed method in terms of image similarity and diversity, generation controllability, and style cloning, and our DreamArtist achieves superior generation performance over existing methods. Moreover, additional evaluations on extended tasks, including concept composition and prompt-guided image editing, demonstrate its effectiveness in broader applications.
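The interplay between the two embeddings can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's implementation: it assumes the positive and negative predictions are combined in a classifier-free-guidance style, and it replaces the diffusion U-Net with a hypothetical linear stand-in (`denoiser`) so the guidance arithmetic is easy to follow.

```python
import numpy as np

def denoiser(latent, embedding):
    # Hypothetical stand-in for a noise-prediction network conditioned
    # on a learned prompt embedding; a real model would be a U-Net.
    return latent + embedding

def guided_prediction(latent, pos_emb, neg_emb, scale=5.0):
    """Combine positive/negative predictions, classifier-free-guidance style.

    The negative embedding's prediction serves as the baseline, and the
    difference toward the positive prediction pulls the output toward the
    learned concept; `scale` controls how aggressively.
    """
    eps_pos = denoiser(latent, pos_emb)
    eps_neg = denoiser(latent, neg_emb)
    return eps_neg + scale * (eps_pos - eps_neg)

latent = np.zeros(4)
pos = np.ones(4)           # captures salient features of the reference image
neg = -0.2 * np.ones(4)    # rectifies deficiencies of the positive embedding
print(guided_prediction(latent, pos, neg, scale=5.0))  # → [5.8 5.8 5.8 5.8]
```

During training, both embeddings would be optimized jointly against the usual diffusion denoising loss, so the negative embedding learns to suppress exactly the artifacts the positive embedding over-introduces.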