One of the most challenging aspects of current single-document news summarization is that the summary often contains 'extrinsic hallucinations', i.e., facts that are not present in the source document and are often derived via world knowledge. This causes summarization systems to act more like open-ended language models that tend to hallucinate erroneous facts. In this paper, we mitigate this problem with the help of multiple supplementary resource documents assisting the task. We present a new dataset, MiRANews, and benchmark existing summarization models on it. In contrast to multi-document summarization, which addresses multiple events from several source documents, we still aim at generating a summary for a single document. We show via data analysis that it is not only the models that are to blame: more than 27% of facts mentioned in the gold summaries of MiRANews are better grounded in the assisting documents than in the main source articles. An error analysis of summaries generated by pretrained models fine-tuned on MiRANews reveals an even bigger effect on the models: assisted summarization reduces hallucinations by 55% compared to single-document summarization models trained on the main article only. Our code and data are available at https://github.com/XinnuoXu/MiRANews.