Knowledge graphs (KGs) have helped neural-symbolic models improve performance on various knowledge-intensive tasks, such as question answering and item recommendation. By using attention over the KG, such models can also "explain" which KG information was most relevant for making a given prediction. In this paper, we question whether these models are really behaving as we expect. We demonstrate that, through a reinforcement learning policy (or even simple heuristics), one can produce deceptively perturbed KGs that maintain the downstream performance of the original KG while significantly deviating from the original semantics and structure. Our findings raise doubts about KG-augmented models' ability to leverage KG information and provide plausible explanations.
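To make the "simple heuristics" claim concrete, here is a minimal sketch of one such heuristic perturbation: swapping relation labels between randomly chosen triples. This is an illustrative example, not the paper's actual perturbation method; the function name, the triple format, and the toy KG are all invented for demonstration. The swap preserves the graph's nodes, edge count, and degree distribution while drifting away from the original semantics.

```python
import random

def relation_swap_perturbation(triples, num_swaps, seed=0):
    """Heuristically perturb a KG by exchanging the relation labels of
    randomly chosen pairs of triples. The perturbed graph keeps the same
    entities, edge count, and degree distribution as the original, but
    its semantics change with every swap.

    triples: list of (head, relation, tail) tuples.
    """
    rng = random.Random(seed)
    perturbed = list(triples)
    for _ in range(num_swaps):
        # Pick two distinct triples and swap their relations.
        i, j = rng.sample(range(len(perturbed)), 2)
        h_i, r_i, t_i = perturbed[i]
        h_j, r_j, t_j = perturbed[j]
        perturbed[i] = (h_i, r_j, t_i)
        perturbed[j] = (h_j, r_i, t_j)
    return perturbed

# Toy usage on a hypothetical commonsense KG fragment.
kg = [
    ("cat", "is_a", "animal"),
    ("cat", "capable_of", "sleep"),
    ("animal", "desires", "food"),
]
print(relation_swap_perturbation(kg, num_swaps=2))
```

Because the perturbed KG is structurally indistinguishable from the original at the degree level, a model whose downstream accuracy survives such swaps is, by the paper's argument, unlikely to be relying on the KG's semantics in the way its attention-based "explanations" suggest.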