Recent progress in diffusion models has revolutionized the popular technology of text-to-image generation. While existing approaches can produce photorealistic, high-resolution images conditioned on text, several open problems remain that limit further improvement of image fidelity and text relevancy. In this paper, we propose ERNIE-ViLG 2.0, a large-scale Chinese text-to-image diffusion model, to progressively upgrade the quality of generated images by: (1) incorporating fine-grained textual and visual knowledge of key elements in the scene, and (2) utilizing different denoising experts at different denoising stages. With the proposed mechanisms, ERNIE-ViLG 2.0 not only achieves a new state-of-the-art on MS-COCO with a zero-shot FID score of 6.75, but also significantly outperforms recent models in image fidelity and image-text alignment in side-by-side human evaluation on the bilingual prompt set ViLG-300.
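To make the second mechanism concrete, the sketch below shows one way to route each denoising step to a stage-specific expert by splitting the timestep range uniformly. This is a minimal illustration under stated assumptions, not the paper's implementation: `UNetExpert`, `MixtureOfDenoisingExperts`, and all dimensions are hypothetical placeholders standing in for full denoising U-Nets.

```python
import torch
import torch.nn as nn


class UNetExpert(nn.Module):
    """Placeholder for a denoising network; a real expert would be a U-Net."""

    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.SiLU(), nn.Linear(dim, dim))

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class MixtureOfDenoisingExperts(nn.Module):
    """Dispatches each denoising step to one expert based on the timestep.

    Early (noisier) and late (cleaner) steps are handled by different
    experts, each specializing in its own stage of the denoising process.
    """

    def __init__(self, dim: int, num_experts: int, num_timesteps: int = 1000):
        super().__init__()
        self.experts = nn.ModuleList(UNetExpert(dim) for _ in range(num_experts))
        # Evenly partition the timestep range [0, num_timesteps) into stages.
        self.steps_per_expert = num_timesteps // num_experts

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Assumes all samples in the batch share one timestep, as in
        # standard step-by-step sampling.
        idx = min(int(t[0].item()) // self.steps_per_expert, len(self.experts) - 1)
        return self.experts[idx](x, t)


# Usage: each sampling call is dispatched to the expert responsible
# for the current denoising stage.
model = MixtureOfDenoisingExperts(dim=64, num_experts=10)
x = torch.randn(2, 64)
t = torch.full((2,), 987, dtype=torch.long)  # a noisy, early denoising step
eps_pred = model(x, t)
```

The design choice here is hard routing by timestep: since the timestep is known at every sampling step, no learned gating is needed, and only one expert's parameters are active per step.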