Knowledge graphs (KGs) have helped neural models improve performance on various knowledge-intensive tasks, like question answering and item recommendation. By using attention over the KG, such KG-augmented models can also "explain" which KG information was most relevant for making a given prediction. In this paper, we question whether these models are really behaving as we expect. We show that, through a reinforcement learning policy (or even simple heuristics), one can produce deceptively perturbed KGs, which maintain the downstream performance of the original KG while significantly deviating from the original KG's semantics and structure. Our findings raise doubts about KG-augmented models' ability to reason about KG information and give sensible explanations.
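To make the perturbation idea concrete, here is a minimal sketch of one plausible "simple heuristic": replacing relations on a subset of edges while leaving the graph's connectivity intact. This is an illustrative assumption, not the paper's exact procedure; the triples, the `perturb_relations` helper, and its parameters are ours for exposition.

```python
import random

# Toy KG as (head, relation, tail) triples. Entities and relations are
# illustrative placeholders, not the paper's actual benchmark KG.
KG = [
    ("aspirin", "treats", "headache"),
    ("aspirin", "is_a", "drug"),
    ("headache", "symptom_of", "flu"),
]

def perturb_relations(triples, ratio=0.5, seed=0):
    """Heuristic perturbation (assumed for illustration): replace the
    relation on a random subset of edges with a different relation from
    the KG's own vocabulary. Which entities are linked is preserved,
    but the semantics of those links change."""
    rng = random.Random(seed)
    relations = sorted({r for _, r, _ in triples})
    perturbed = []
    for h, r, t in triples:
        if rng.random() < ratio and len(relations) > 1:
            # Swap in a semantically wrong relation for this edge.
            r = rng.choice([x for x in relations if x != r])
        perturbed.append((h, r, t))
    return perturbed

print(perturb_relations(KG))
```

If a KG-augmented model's accuracy is unchanged when fed such semantically corrupted triples, that is the kind of evidence the paper uses to question whether its attention-based explanations reflect genuine reasoning over the KG.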