In this paper, we introduce GENIE, a novel diffusion language model pre-training framework for text generation. GENIE is a large-scale pre-trained diffusion language model consisting of an encoder and a diffusion-based decoder, which generates text by gradually transforming a random noise sequence into a coherent text sequence. To pre-train GENIE on a large-scale language corpus, we design a new continuous paragraph denoise objective, which encourages the diffusion decoder to reconstruct a clean text paragraph from a corrupted version while preserving semantic and syntactic coherence. We evaluate GENIE on four downstream text generation benchmarks: XSum, CNN/DailyMail, Gigaword, and CommonGen. Our experimental results show that GENIE achieves performance comparable to state-of-the-art autoregressive models on these benchmarks and generates more diverse text samples. The code and models of GENIE are available at https://github.com/microsoft/ProphetNet/tree/master/GENIE.
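The decoding mechanism described above, iteratively denoising a random noise sequence into a token sequence, can be illustrated with a minimal toy sketch. This is not GENIE's actual architecture: the learned diffusion decoder is replaced here by a stand-in rule that pulls each latent vector toward its nearest vocabulary embedding, and all shapes, step counts, and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not from the paper): a tiny vocabulary
# embedding table for 10 token types with 8-dimensional embeddings.
vocab_emb = rng.normal(size=(10, 8))
seq_len, dim, steps = 5, 8, 50

def denoise_step(x, t):
    # Placeholder for the learned diffusion decoder: nudge each latent
    # toward the closest token embedding, more strongly as t -> 0.
    dists = ((x[:, None, :] - vocab_emb[None, :, :]) ** 2).sum(-1)
    target = vocab_emb[dists.argmin(axis=1)]
    alpha = 1.0 - t / steps
    return x + alpha * 0.2 * (target - x)

# Reverse diffusion: start from pure Gaussian noise over the sequence
# and apply the denoiser step by step.
x = rng.normal(size=(seq_len, dim))
for t in reversed(range(steps)):
    x = denoise_step(x, t)

# Round each final latent to its nearest token id to read out a
# discrete text sequence.
dists = ((x[:, None, :] - vocab_emb[None, :, :]) ** 2).sum(-1)
token_ids = dists.argmin(axis=1)
print(token_ids.shape)  # (5,)
```

In the real model, `denoise_step` would be a Transformer conditioned on the encoder's representation of the source text, and the noise schedule would follow a trained diffusion process rather than the fixed linear `alpha` used here.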