Generative commonsense reasoning, which aims to empower machines to generate sentences by reasoning over a set of concepts, is a critical bottleneck for text generation. Even state-of-the-art pre-trained language generation models struggle at this task and often produce implausible or anomalous sentences. One reason is that they rarely consider incorporating a knowledge graph, which can provide rich relational information among the commonsense concepts. To promote the ability of commonsense reasoning for text generation, we propose a novel knowledge graph augmented pre-trained language generation model, KG-BART, which encompasses the complex relations of concepts through the knowledge graph and produces more logical and natural sentences as output. Moreover, KG-BART can leverage graph attention to aggregate the rich concept semantics, which enhances model generalization on unseen concept sets. Experiments on the benchmark CommonGen dataset verify the effectiveness of our proposed approach compared with several strong pre-trained language generation models; in particular, KG-BART outperforms BART by 5.80 and 4.60 in terms of BLEU-3 and BLEU-4, respectively. Moreover, we show that the context generated by our model can serve as background scenarios to benefit downstream commonsense QA tasks.
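To make the graph-attention aggregation mentioned above concrete, the following is a minimal, illustrative sketch of attention-weighted aggregation over concept embeddings constrained by a knowledge-graph adjacency matrix. It is not the paper's exact formulation; the module name `ConceptGraphAttention`, the single-head design, and the toy adjacency are assumptions for illustration only.

```python
# A minimal single-head graph-attention sketch over concept embeddings.
# Hypothetical class name and shapes; KG-BART's actual encoder/decoder
# integration with BART is more involved than shown here.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConceptGraphAttention(nn.Module):
    """Aggregate neighbor concept embeddings with learned attention weights."""
    def __init__(self, dim):
        super().__init__()
        self.w = nn.Linear(dim, dim, bias=False)        # shared node projection
        self.attn = nn.Linear(2 * dim, 1, bias=False)   # scores concatenated node pairs

    def forward(self, h, adj):
        # h:   (num_concepts, dim) concept embeddings
        # adj: (num_concepts, num_concepts) boolean adjacency from the knowledge graph
        z = self.w(h)
        n = z.size(0)
        pairs = torch.cat(
            [z.unsqueeze(1).expand(n, n, -1),            # entry [i, j] holds z_i
             z.unsqueeze(0).expand(n, n, -1)], dim=-1)   # entry [i, j] holds z_j
        scores = F.leaky_relu(self.attn(pairs)).squeeze(-1)   # (n, n) raw scores
        scores = scores.masked_fill(~adj, float("-inf"))      # restrict to graph edges
        alpha = torch.softmax(scores, dim=-1)                 # normalize over neighbors
        return alpha @ z                                      # attention-weighted aggregation

# Usage: 4 concepts with a toy adjacency (self-loops keep every row non-empty).
h = torch.randn(4, 16)
adj = (torch.eye(4) +
       torch.tensor([[0, 1, 1, 0],
                     [1, 0, 0, 1],
                     [1, 0, 0, 1],
                     [0, 1, 1, 0]], dtype=torch.float)) > 0
out = ConceptGraphAttention(16)(h, adj)
print(out.shape)  # torch.Size([4, 16])
```

The aggregated representations would then condition the generator so that relationally linked concepts (e.g., those connected in the knowledge graph) inform each other during decoding.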