Recent advances in large-scale text-to-image models have opened new possibilities for guiding image creation through natural language. However, while prior work has primarily focused on generating individual images, these models must also maintain coherency across a sequence of images to meet the demands of real-world applications such as storytelling. To address this, we present a novel neural pipeline for generating a coherent storybook from the plain text of a story. Specifically, we combine a pre-trained Large Language Model with a text-guided Latent Diffusion Model to generate coherent images. Whereas previous story synthesis frameworks typically require a large-scale text-to-image model trained on expensive image-caption pairs to maintain coherency, we employ a simple textual inversion technique together with detector-based semantic image editing, enabling zero-shot generation of a coherent storybook. Experimental results show that our proposed method outperforms state-of-the-art image editing baselines.
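To make the character-consistency idea concrete, below is a minimal sketch of the textual-inversion step, assuming the Hugging Face diffusers library. The checkpoint name, the embedding path ./char_embedding, and the placeholder token <char> are illustrative assumptions, not part of the paper's released pipeline.

```python
# Minimal sketch (not the authors' code): reuse a learned textual-inversion
# embedding across scene prompts so the same character appears in every image.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Textual inversion: load a small learned embedding that binds the
# placeholder token "<char>" to the protagonist's appearance.
# "./char_embedding" is a hypothetical path to such an embedding.
pipe.load_textual_inversion("./char_embedding", token="<char>")

# Reusing the same token across scene prompts keeps the character coherent
# without retraining or fine-tuning the text-to-image model itself.
scenes = [
    "<char> walking through a snowy forest, storybook illustration",
    "<char> reading by candlelight in a wooden cabin, storybook illustration",
]
for i, prompt in enumerate(scenes):
    image = pipe(prompt, num_inference_steps=30).images[0]
    image.save(f"scene_{i}.png")
```

Because the learned embedding touches only the text-encoder vocabulary rather than the diffusion model's weights, this style of conditioning is consistent with the zero-shot claim above.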