Open-ended text generation tasks, such as dialogue generation and story completion, require models to generate a coherent continuation given limited preceding context. The open-ended nature of these tasks poses new challenges to today's neural auto-regressive text generators. Although these neural models are good at producing human-like text, they struggle to arrange the causalities and relations between given facts and possible ensuing events. To bridge this gap, we propose a novel two-stage method that explicitly arranges the ensuing events in open-ended text generation. Our approach can be understood as a specially trained coarse-to-fine algorithm, in which an event transition planner first provides a "coarse" plot skeleton, and a text generator in the second stage refines the skeleton into text. Experiments on two open-ended text generation tasks demonstrate that our proposed method effectively improves the quality of the generated text, especially its coherence and diversity. The code is available at: \url{https://github.com/qtli/EventPlanforTextGen}.
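The two-stage, coarse-to-fine pipeline described above can be sketched as follows. This is an illustrative toy, not the authors' implementation: in the paper both stages are trained neural models, whereas here the planner is a hypothetical hand-written transition table and the generator is a simple template, purely to show how a planned event skeleton conditions the final text.

```python
def plan_events(context: str) -> list[str]:
    """Stage 1 (event transition planner): map the preceding context to a
    "coarse" plot skeleton, i.e., an ordered list of abstract events.
    A real planner would be a trained sequence model; this toy version
    uses a fixed, hypothetical transition table keyed on a trigger word."""
    transitions = {
        "storm": ["seek shelter", "wait it out", "assess the damage"],
        "exam": ["study hard", "take the test", "receive a grade"],
    }
    for trigger, skeleton in transitions.items():
        if trigger in context:
            return skeleton
    return ["continue"]  # fallback when no event transition is known


def refine(context: str, skeleton: list[str]) -> str:
    """Stage 2 (text generator): refine the skeleton into surface text,
    conditioned on both the context and the planned events. A real
    generator would be a neural model decoding with the skeleton as input."""
    sentences = [f"Then they {event}." for event in skeleton]
    return context + " " + " ".join(sentences)


context = "A storm hit the town."
story = refine(context, plan_events(context))
```

Separating planning from realization means the "what happens next" decisions are made explicitly in stage 1, so the stage-2 generator only has to produce fluent text for an already-coherent event sequence.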