Stories are diverse and highly personalized, resulting in a large possible output space for story generation. Existing end-to-end approaches produce monotonous stories because they are limited to the vocabulary and knowledge in a single training dataset. This paper introduces KG-Story, a three-stage framework that allows the story generation model to take advantage of external knowledge graphs to produce interesting stories. KG-Story distills a set of representative words from the input prompts, enriches the word set using external knowledge graphs, and finally generates stories based on the enriched word set. This distill-enrich-generate framework allows the use of external resources not only for the enrichment phase, but also for the distillation and generation phases. In this paper, we show the superiority of KG-Story for visual storytelling, where the input prompt is a sequence of five photos and the output is a short story. Per the human ranking evaluation, stories generated by KG-Story are on average ranked better than those of state-of-the-art systems. Our code and output stories are available at https://github.com/zychen423/KE-VIST.
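The distill-enrich-generate pipeline described above can be sketched in miniature. This is a hypothetical illustration only: the function names, the pair-based knowledge-graph lookup, and the toy data are our assumptions, not the authors' actual models or API.

```python
# Hypothetical sketch of the three-stage KG-Story pipeline.
# All names and data shapes are illustrative, not the paper's implementation.

def distill(prompts):
    """Stage 1: extract one representative word per input prompt.
    In the paper, a learned model predicts a term for each of the five
    photos; here we simply take the first word as a stand-in."""
    return [p.split()[0] for p in prompts]

def enrich(words, knowledge_graph):
    """Stage 2: insert bridging concepts from an external knowledge graph
    between consecutive distilled words."""
    enriched = []
    for a, b in zip(words, words[1:]):
        enriched.append(a)
        bridge = knowledge_graph.get((a, b))  # toy pairwise relation lookup
        if bridge:
            enriched.append(bridge)
    enriched.append(words[-1])
    return enriched

def generate(enriched_words):
    """Stage 3: realize the enriched word set as a story (stubbed join;
    the paper uses a Transformer-based language generator)."""
    return " ... ".join(enriched_words)

# Toy knowledge graph mapping a word pair to a bridging concept.
kg = {("beach", "dinner"): "sunset"}
story = generate(enrich(distill(["beach day", "dinner party"]), kg))
print(story)  # beach ... sunset ... dinner
```

The key design point the sketch mirrors is that each stage is decoupled, so external resources (a better term extractor, a larger knowledge graph, a stronger language model) can be swapped into any stage independently.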