Despite advances in generating fluent text, existing pretrained models tend to attach incoherent event sequences to the entities involved when generating narratives such as stories and news. We conjecture that such issues stem from representing entities as static embeddings of their surface words, while neglecting to model their ever-changing states, i.e., the information they carry, as the text unfolds. We therefore extend the Transformer model to dynamically perform entity state updates and sentence realization during narrative generation. We propose a contrastive framework to learn the state representations in a discrete space, and insert additional attention layers into the decoder to better exploit these states. Experiments on two narrative datasets show that, guided by meaningful entity states, our model generates more coherent and diverse narratives than strong baselines.
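To make the two mechanisms named above concrete, the sketch below (not the authors' released code; all module and parameter names are hypothetical) illustrates (a) an extra attention layer in a Transformer decoder block that lets generated tokens attend to per-entity state vectors, and (b) an InfoNCE-style contrastive loss that pulls a predicted entity state toward its matching code in a discrete state codebook while pushing it away from the other codes.

```python
# Hedged sketch of the two ideas in the abstract; assumes PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EntityStateDecoderLayer(nn.Module):
    """Transformer decoder layer with an added attention over entity states."""

    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Extra attention layer: decoder tokens attend to entity state vectors.
        self.state_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                 nn.Linear(4 * d_model, d_model))
        self.norm1, self.norm2, self.norm3 = (nn.LayerNorm(d_model) for _ in range(3))

    def forward(self, x, entity_states, causal_mask=None):
        # Standard masked self-attention over the generated prefix.
        h, _ = self.self_attn(x, x, x, attn_mask=causal_mask)
        x = self.norm1(x + h)
        # Added layer: condition each token on the current entity states.
        h, _ = self.state_attn(x, entity_states, entity_states)
        x = self.norm2(x + h)
        return self.norm3(x + self.ffn(x))


def contrastive_state_loss(pred_states, codebook, target_ids, temperature=0.1):
    """InfoNCE-style loss: each predicted state should match its discrete code
    in the codebook (positive) and be far from all other codes (negatives)."""
    pred = F.normalize(pred_states, dim=-1)   # (batch, d_model)
    codes = F.normalize(codebook, dim=-1)     # (num_codes, d_model)
    logits = pred @ codes.t() / temperature   # similarity to every code
    return F.cross_entropy(logits, target_ids)


if __name__ == "__main__":
    layer = EntityStateDecoderLayer()
    tokens = torch.randn(2, 16, 512)     # decoder hidden states (batch, seq, d)
    states = torch.randn(2, 4, 512)      # 4 tracked entities per example
    out = layer(tokens, states)
    codebook = torch.randn(128, 512)     # 128 discrete entity-state codes
    loss = contrastive_state_loss(states.reshape(-1, 512), codebook,
                                  torch.randint(0, 128, (8,)))
    print(out.shape, loss.item())
```

How the states themselves are updated as the text unfolds, and how the discrete codebook is constructed, are design choices of the full model that the abstract does not specify; the sketch only shows where the extra attention and the contrastive objective would plug in.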