Transformer-based language models currently provide state-of-the-art results for automated story generation. However, they still suffer from plot incoherence when generating narratives over time, and they critically lack basic commonsense reasoning. Furthermore, existing methods generally focus on single-character stories or fail to track characters at all. To improve the coherence of generated narratives and to expand the scope of character-centric narrative generation, we introduce Commonsense-inference Augmented neural StoryTelling (CAST), a framework that introduces commonsense reasoning into the generation process while modeling the interactions between multiple characters. We find that CAST produces significantly more coherent and on-topic two-character stories, outperforming baselines on dimensions including plot plausibility and topic adherence. We also show how CAST can be used to further train language models that generate more coherent stories while reducing computation cost.