Recent pre-trained language models have shown promising capabilities in generating fluent and realistic natural language text. However, generating multi-sentence text with global content planning has been a long-standing research question. Current approaches for controlled text generation can hardly address this issue, as they usually condition on a single known control attribute. In this study, we propose a low-cost yet effective framework that explicitly models the global content plan of the generated text. Specifically, it optimizes the joint distribution of the natural language sequence and the global content plan in a plug-and-play manner. We conduct extensive experiments on the well-established Recipe1M+ benchmark. Both automatic and human evaluations verify that our model achieves state-of-the-art performance on the task of recipe generation.
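To make the plug-and-play factorization concrete, here is a minimal sketch of one plausible decomposition, assuming x denotes the generated recipe text and c the global content plan (both symbols are our illustration, not notation taken from the paper):

% Hedged sketch of the joint distribution over text x and plan c.
% Bayes' rule lets a frozen language model p(x) be combined with a
% separately trained plan predictor p(c | x) without retraining the LM,
% which is the usual sense of "plug-and-play" generation.
\begin{align*}
  p(x, c) &= p(x)\, p(c \mid x), \\
  p(x \mid c) &= \frac{p(c \mid x)\, p(x)}{p(c)} \;\propto\; p(c \mid x)\, p(x).
\end{align*}

Under this view, decoding can score candidate continuations with the base language model and reweight them by how well they match the predicted content plan, so neither component needs to be retrained jointly.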