Story Ending Generation (SEG) is a challenging task in natural language generation. Recently, methods based on Pre-trained Language Models (PLMs) have made great progress and can produce fluent and coherent story endings. However, the pre-training objective of PLM-based methods cannot model the consistency between the story context and the ending. The goal of this paper is to adopt contrastive learning to generate endings that are more consistent with the story context, which raises two main challenges for contrastive learning in SEG. The first is sampling negative examples, i.e., wrong endings that are inconsistent with the story context. The second is adapting contrastive learning to SEG. To address these two issues, we propose a novel Contrastive Learning framework for Story Ending Generation (CLSEG) with two steps: multi-aspect sampling and story-specific contrastive learning. For the first issue, we use novel multi-aspect sampling mechanisms to obtain wrong endings that violate consistency of order, causality, and sentiment. For the second issue, we carefully design a story-specific contrastive training strategy adapted for SEG. Experiments show that CLSEG outperforms baselines and produces story endings with stronger consistency and rationality.
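To make the idea concrete, the sketch below shows one plausible form such a contrastive objective could take. The paper's exact loss is not specified here, so this is a minimal, hypothetical illustration assuming an InfoNCE-style objective over PLM sequence likelihoods, with GPT-2 standing in for the backbone; the function names, the temperature `tau`, and the example negatives (causality- and sentiment-perturbed endings) are all illustrative assumptions, not CLSEG's actual implementation.

```python
# Hypothetical sketch of a contrastive training step for SEG.
# Assumption: an InfoNCE-style loss over sequence log-likelihoods,
# pushing the gold ending above multi-aspect negative endings.
import torch
import torch.nn.functional as F
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

def ending_log_likelihood(context: str, ending: str) -> torch.Tensor:
    """Sum of token log-probs of `ending` conditioned on `context`."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    end_ids = tokenizer(" " + ending, return_tensors="pt").input_ids
    input_ids = torch.cat([ctx_ids, end_ids], dim=1)
    logits = model(input_ids).logits
    # Position t predicts token t+1, so drop the last logit and keep
    # only the positions whose targets are the ending tokens.
    log_probs = F.log_softmax(logits[:, :-1], dim=-1)
    ending_positions = log_probs[:, ctx_ids.size(1) - 1:]
    token_lp = ending_positions.gather(-1, end_ids.unsqueeze(-1)).squeeze(-1)
    return token_lp.sum()

def contrastive_loss(context, gold_ending, negative_endings, tau=1.0):
    """InfoNCE over sequence scores: index 0 holds the gold ending."""
    scores = torch.stack(
        [ending_log_likelihood(context, gold_ending)]
        + [ending_log_likelihood(context, neg) for neg in negative_endings]
    )
    return -F.log_softmax(scores / tau, dim=0)[0]

# Toy example with multi-aspect negatives (hypothetical data).
context = "Tom studied hard all week for his exam."
gold = "He passed with the highest score in class."
negatives = [
    "He failed because he never opened a book.",  # causality-inconsistent
    "He was heartbroken by his great result.",    # sentiment-inconsistent
]
loss = contrastive_loss(context, gold, negatives)
loss.backward()  # gradients flow into the PLM for one update
```

Under this reading, the multi-aspect sampling step supplies the `negatives` list (order-shuffled, causality-reversed, or sentiment-flipped endings), and the story-specific training strategy would govern how this contrastive term is combined with the standard generation loss.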