Neural abstractive summarization models are flexible and can produce coherent summaries, but they are sometimes unfaithful and can be difficult to control. While previous studies attempt to provide different types of guidance to control the output and increase faithfulness, it is not clear how these strategies compare and contrast with each other. In this paper, we propose a general and extensible guided summarization framework (GSum) that can effectively take different kinds of external guidance as input, and we perform experiments across several different varieties of guidance. Experiments demonstrate that this model is effective, achieving state-of-the-art performance according to ROUGE on four popular summarization datasets when using highlighted sentences as guidance. In addition, we show that our guided model can generate more faithful summaries and demonstrate how different types of guidance generate qualitatively different summaries, lending a degree of controllability to the learned models.
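The abstract only describes the framework at a high level; as a rough illustration of what "taking external guidance as input" can look like, below is a minimal PyTorch sketch of a seq2seq model whose decoder cross-attends to an encoded guidance signal (e.g., highlighted source sentences) before attending to the source document. This is not the authors' implementation: all class and parameter names (GuidedSummarizer, GuidedDecoderLayer, etc.) are hypothetical, and details such as positional encodings, encoder-layer sharing between the source and guidance encoders, and pretrained initialization are omitted.

```python
# A minimal sketch (not the GSum authors' code) of guided summarization:
# the decoder conditions on both the source document and an external
# guidance signal. Positional encodings are omitted for brevity.
import torch
import torch.nn as nn


class GuidedDecoderLayer(nn.Module):
    """Decoder layer with two cross-attention blocks: one over the
    guidance encoding, one over the source encoding."""

    def __init__(self, d_model: int, nhead: int):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.guide_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.src_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                                nn.Linear(4 * d_model, d_model))
        self.norms = nn.ModuleList(nn.LayerNorm(d_model) for _ in range(4))

    def forward(self, tgt, src_mem, guide_mem, tgt_mask=None):
        # Causal self-attention over the partial summary.
        x = self.norms[0](tgt + self.self_attn(tgt, tgt, tgt,
                                               attn_mask=tgt_mask)[0])
        # Attend to the guidance first, then to the full source document.
        x = self.norms[1](x + self.guide_attn(x, guide_mem, guide_mem)[0])
        x = self.norms[2](x + self.src_attn(x, src_mem, src_mem)[0])
        return self.norms[3](x + self.ff(x))


class GuidedSummarizer(nn.Module):
    def __init__(self, vocab_size=32000, d_model=512, nhead=8, layers=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)  # shared embedding
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        # Two encoders, one for the source and one for the guidance signal
        # (nn.TransformerEncoder deep-copies the layer, so weights are
        # independent). The paper shares lower layers; omitted here.
        self.src_enc = nn.TransformerEncoder(enc_layer, layers)
        self.guide_enc = nn.TransformerEncoder(enc_layer, layers)
        self.dec_layers = nn.ModuleList(
            GuidedDecoderLayer(d_model, nhead) for _ in range(layers))
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, src_ids, guide_ids, tgt_ids):
        src_mem = self.src_enc(self.embed(src_ids))
        guide_mem = self.guide_enc(self.embed(guide_ids))
        x = self.embed(tgt_ids)
        causal = nn.Transformer.generate_square_subsequent_mask(
            tgt_ids.size(1)).to(src_ids.device)
        for layer in self.dec_layers:
            x = layer(x, src_mem, guide_mem, tgt_mask=causal)
        return self.out(x)  # per-position logits over the vocabulary


# Toy usage: batch of 2, source length 40, guidance length 8, target length 10.
model = GuidedSummarizer()
src = torch.randint(0, 32000, (2, 40))
guide = torch.randint(0, 32000, (2, 8))
tgt = torch.randint(0, 32000, (2, 10))
logits = model(src, guide, tgt)  # shape: (2, 10, 32000)
```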