Generative commonsense reasoning, which aims to empower machines to generate sentences with the capacity of reasoning over a set of concepts, is a critical bottleneck for text generation. Even state-of-the-art pre-trained language generation models struggle at this task and often produce implausible and anomalous sentences. One reason is that they rarely consider incorporating knowledge graphs, which can provide rich relational information among the commonsense concepts. To promote the ability of commonsense reasoning for text generation, we propose a novel knowledge-graph-augmented pre-trained language generation model, KG-BART, which encompasses the complex relations of concepts through the knowledge graph and produces more logical and natural sentences as output. Moreover, KG-BART can leverage graph attention to aggregate the rich concept semantics, which enhances the model's generalization on unseen concept sets. Experiments on the benchmark CommonGen dataset verify the effectiveness of our proposed approach in comparison with several strong pre-trained language generation models; in particular, KG-BART outperforms BART by 15.98% and 17.49% in terms of BLEU-3 and BLEU-4, respectively. We also show that the context generated by our model can serve as background scenarios to benefit downstream commonsense QA tasks.
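To make the graph-attention aggregation mentioned above concrete, the following is a minimal illustrative sketch (not the authors' implementation) of a single graph-attention layer that pools knowledge-graph neighbor information into concept embeddings; the module name, tensor shapes, and scoring function are assumptions for illustration only.

```python
# Illustrative sketch: one graph-attention layer over concept embeddings,
# aggregating relational neighbors defined by a knowledge-graph adjacency mask.
# All names and dimensions are hypothetical, not taken from KG-BART.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConceptGraphAttention(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.w = nn.Linear(dim, dim, bias=False)    # projects concept embeddings
        self.a = nn.Linear(2 * dim, 1, bias=False)  # scores concept pairs

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h:   (num_concepts, dim) concept embeddings
        # adj: (num_concepts, num_concepts) 0/1 mask from the knowledge graph
        z = self.w(h)
        n = z.size(0)
        # Pairwise attention logits e_ij = score([z_i ; z_j])
        zi = z.unsqueeze(1).expand(n, n, -1)
        zj = z.unsqueeze(0).expand(n, n, -1)
        e = torch.tanh(self.a(torch.cat([zi, zj], dim=-1))).squeeze(-1)
        # Mask out non-edges, then normalize over each concept's KG neighbors.
        e = e.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(e, dim=-1)
        alpha = torch.nan_to_num(alpha)  # concepts with no neighbors get zero weights
        # Aggregate neighbor semantics into each concept representation.
        return F.relu(alpha @ z)

# Usage: 5 concepts, 64-d embeddings, adjacency from ConceptNet-style relations.
h = torch.randn(5, 64)
adj = (torch.rand(5, 5) > 0.5).float()
out = ConceptGraphAttention(64)(h, adj)  # (5, 64) relation-aware concept vectors
```

The resulting relation-aware concept vectors could then condition the encoder-decoder generator, which is the intuition the abstract conveys; the exact integration in KG-BART is described in the paper body rather than here.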