Recent text-to-image generative models have demonstrated an unparalleled ability to generate diverse and creative imagery guided by a target text prompt. While revolutionary, current state-of-the-art diffusion models may still fail to generate images that fully convey the semantics in the given text prompt. We analyze the publicly available Stable Diffusion model and assess the existence of catastrophic neglect, where the model fails to generate one or more of the subjects from the input prompt. Moreover, we find that in some cases the model also fails to correctly bind attributes (e.g., colors) to their corresponding subjects. To help mitigate these failure cases, we introduce the concept of Generative Semantic Nursing (GSN), where we seek to intervene in the generative process on the fly during inference time to improve the faithfulness of the generated images. Using an attention-based formulation of GSN, dubbed Attend-and-Excite, we guide the model to refine the cross-attention units to attend to all subject tokens in the text prompt and strengthen - or excite - their activations, encouraging the model to generate all subjects described in the text prompt. Finally, we compare our approach to alternative methods and demonstrate that it conveys the desired concepts more faithfully across a range of text prompts.
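To make the idea concrete, below is a minimal PyTorch-style sketch of a single GSN-style latent update, not the paper's exact procedure. It assumes a hypothetical helper `unet_attn_fn` that runs the denoising UNet on the current latent and returns aggregated cross-attention maps of shape `[H*W, num_tokens]`; the step size and the loss form (peak attention per subject token) are illustrative assumptions.

```python
import torch

def attend_and_excite_step(latents, unet_attn_fn, subject_token_ids, step_size=0.1):
    """One illustrative Generative Semantic Nursing update at a denoising step.

    latents:           current noised latent z_t
    unet_attn_fn:      hypothetical helper; runs the UNet on `latents` and
                       returns cross-attention maps of shape [H*W, num_tokens]
    subject_token_ids: indices of the subject tokens in the text prompt
    """
    latents = latents.detach().requires_grad_(True)
    attn = unet_attn_fn(latents)
    # Peak spatial activation per subject token: a low peak suggests that
    # subject is being neglected in the emerging image.
    peaks = torch.stack([attn[:, t].max() for t in subject_token_ids])
    # Focus the update on the most neglected subject.
    loss = (1.0 - peaks).max()
    # Shift z_t so the neglected token's attention is strengthened (excited),
    # then continue the usual denoising from the shifted latent.
    grad = torch.autograd.grad(loss, latents)[0]
    return (latents - step_size * grad).detach()
```

In this sketch the intervention happens purely at inference time: no model weights change, only the latent is nudged along the gradient of an attention-based loss before the sampler proceeds.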