Text summarization is recognised as one of the core NLP downstream tasks and has been investigated extensively in recent years. It helps people rapidly grasp information from the Internet, including news articles, social posts, videos, etc. Most existing research attempts to develop summarization models that produce better outputs. However, most existing models suffer from inherent limitations, including unfaithfulness and factual errors. In this paper, we propose a novel model, named Knowledge-aware Abstractive Text Summarization, which leverages the advantages offered by a Knowledge Graph to enhance the standard Seq2Seq model. Specifically, Knowledge Graph triplets are extracted from the source text and utilised to provide keywords with relational information, producing coherent and factually correct summaries. We conduct extensive experiments on real-world datasets. The results reveal that the proposed framework can effectively utilise the information from the Knowledge Graph and significantly reduce the factual errors in the generated summaries.
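To make the described pipeline concrete, the snippet below is a minimal sketch of the general idea: extracting (subject, relation, object) triplets from the source text and conditioning a generic Seq2Seq summarizer on them. It is not the authors' implementation; it assumes a dependency-parse-based extractor (the `extract_triplets` helper is hypothetical and deliberately crude) and a pretrained BART model from Hugging Face `transformers` as a stand-in for the paper's summarization backbone.

```python
# Minimal sketch: extract rough (subject, relation, object) triplets from the
# source text and prepend them as a linearised "knowledge prefix" so that the
# encoder sees entities and relations explicitly. Illustration only; the
# paper's extraction and fusion components may differ.
import spacy
from transformers import BartForConditionalGeneration, BartTokenizer

nlp = spacy.load("en_core_web_sm")


def extract_triplets(text):
    """Very rough subject-verb-object extraction via dependency parsing."""
    triplets = []
    for sent in nlp(text).sents:
        for token in sent:
            if token.pos_ == "VERB":
                subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
                objects = [c for c in token.children if c.dep_ in ("dobj", "attr", "pobj")]
                if subjects and objects:
                    triplets.append((subjects[0].text, token.lemma_, objects[0].text))
    return triplets


def summarize_with_kg(source_text):
    tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
    model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

    # Linearise the triplets into a textual prefix placed before the document.
    prefix = " ".join(f"<{s}; {r}; {o}>" for s, r, o in extract_triplets(source_text))
    inputs = tokenizer(prefix + " " + source_text,
                       return_tensors="pt", truncation=True, max_length=1024)
    summary_ids = model.generate(**inputs, num_beams=4, max_length=128)
    return tokenizer.decode(summary_ids[0], skip_special_tokens=True)
```

The design choice illustrated here is only one possible way to inject the triplets (as an input prefix); a model could instead fuse them through a separate graph encoder or attention mechanism.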