Augmenting pre-trained language models with knowledge graphs (KGs) has achieved success on various commonsense reasoning tasks. Although some works have attempted to explain the behavior of such KG-augmented models by indicating which KG inputs are salient (i.e., important for the model's prediction), it is not always clear how these explanations should be used to make the model better. In this paper, we explore whether KG explanations can be used as supervision for teaching these KG-augmented models how to filter out unhelpful KG information. To this end, we propose SalKG, a simple framework for learning from KG explanations of both coarse (Is the KG salient?) and fine (Which parts of the KG are salient?) granularity. Given the explanations generated from a task's training set, SalKG trains KG-augmented models to solve the task by focusing on KG information highlighted by the explanations as salient. Across two popular commonsense QA benchmarks and three KG-augmented models, we find that SalKG's training process can consistently improve model performance.
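The abstract describes SalKG's training objective only at a high level. As a rough illustration, the sketch below shows one plausible way to combine a task loss with coarse (is the KG salient?) and fine (which KG units are salient?) saliency supervision in PyTorch. The function name `salkg_loss`, the loss weights `lam_coarse` / `lam_fine`, and the specific loss choices (binary cross-entropy for the coarse signal; KL divergence between the model's KG attention and a saliency-derived target for the fine signal) are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def salkg_loss(task_logits, labels,
               coarse_logit, coarse_target,
               fine_attn, fine_target,
               lam_coarse=1.0, lam_fine=1.0):
    """Hypothetical combined objective (assumed form, for illustration).

    task_logits:   (batch, num_answers) answer scores from the KG-augmented model
    labels:        (batch,) gold answer indices
    coarse_logit:  (batch,) predicted logit for "the KG is salient for this example"
    coarse_target: (batch,) 0/1 coarse saliency labels from the explanations
    fine_attn:     (batch, num_kg_units) model attention logits over KG units
    fine_target:   (batch, num_kg_units) saliency-derived target distribution
    """
    # Standard task loss (e.g., multiple-choice answer classification).
    task_loss = F.cross_entropy(task_logits, labels)
    # Coarse supervision: should the model rely on the KG at all?
    coarse_loss = F.binary_cross_entropy_with_logits(coarse_logit, coarse_target)
    # Fine supervision: align attention over KG units with the
    # distribution highlighted as salient by the explanations.
    fine_loss = F.kl_div(fine_attn.log_softmax(-1), fine_target,
                         reduction="batchmean")
    return task_loss + lam_coarse * coarse_loss + lam_fine * fine_loss
```

Under this reading, the saliency explanations generated from the training set act purely as extra supervision targets; at test time the model is used as usual, with no explanation required.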