The development of transformer-based text-to-image models is impeded by their slow generation and the complexity of high-resolution images. In this work, we put forward a solution based on hierarchical transformers and local parallel auto-regressive generation. We pretrain a 6B-parameter transformer with a simple and flexible self-supervised task, Cross-Modal General Language Model (CogLM), and finetune it for fast super-resolution. The new text-to-image system, CogView2, shows very competitive generation compared to the concurrent state-of-the-art DALL-E 2, and naturally supports interactive text-guided editing on images.