Automated storytelling has long captured the attention of researchers because of the ubiquity of narratives in everyday life. However, it is challenging to maintain coherence and stay on-topic toward a specific ending when generating narratives with neural language models. In this paper, we introduce Story generation with Reader Models (StoRM), a framework in which a reader model is used to reason about how the story should progress. A reader model infers what a human reader believes about the concepts, entities, and relations of the fictional story world. We show how an explicit reader model represented as a knowledge graph affords story coherence and provides controllability in the form of achieving a given story world state goal. Experiments show that our model produces significantly more coherent and on-topic stories, outperforming baselines on dimensions including plot plausibility and staying on topic. Our system also outperforms outline-guided story generation baselines in composing stories from given concepts without requiring an ordering.