Explainability is crucial for probing graph neural networks (GNNs), answering questions like "Why does the GNN model make a certain prediction?". Feature attribution is a prevalent technique for highlighting an explanatory subgraph in the input graph that plausibly leads the GNN model to its prediction. Various attribution methods exploit gradient-like or attention scores as edge attributions, and then select the edges with the top attribution scores as the explanation. However, most of these works make an untenable assumption: that the selected edges are linearly independent. This leaves the dependencies among edges largely unexplored, especially their coalition effect. We demonstrate clear drawbacks of this assumption: it makes the explanatory subgraph unfaithful and verbose. To address this challenge, we propose a reinforcement learning agent, Reinforced Causal Explainer (RC-Explainer). It frames the explanation task as a sequential decision process, in which an explanatory subgraph is constructed step by step by adding one salient edge at a time to the previously selected subgraph. Technically, its policy network predicts the action of edge addition and receives a reward that quantifies the action's causal effect on the prediction. This reward accounts for the dependency between the newly added edge and the previously added edges, reflecting whether they collaborate and form a coalition that yields a better explanation. As such, RC-Explainer is able to generate faithful and concise explanations, and generalizes better to unseen graphs. When explaining different GNNs on three graph classification datasets, RC-Explainer achieves performance better than or comparable to state-of-the-art approaches w.r.t. predictive accuracy and contrastivity, and safely passes sanity checks and visual inspections. Code is available at https://github.com/xiangwang1223/reinforced_causal_explainer.
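To make the sequential decision process concrete, here is a minimal sketch of the edge-by-edge construction and causal-effect reward described above; it is an illustration under stated assumptions, not the authors' implementation. Both `gnn` (a pretrained model mapping an edge-masked graph to class probabilities) and `policy` (a scorer for candidate edge additions, conditioned on the edges selected so far) are hypothetical stand-ins, as is the `graph.edge_index` attribute.

```python
# Hypothetical sketch of the sequential subgraph construction. `gnn`,
# `policy`, and `graph.edge_index` are assumed interfaces, not the
# RC-Explainer codebase.
import torch

def explain_sequentially(gnn, policy, graph, target_class, budget=5):
    """Greedily grow an explanatory subgraph one edge at a time.

    At each step the policy scores every remaining edge conditioned on
    the previously selected subgraph, one edge is added, and the reward
    is the causal effect of that addition: the change in the model's
    probability for the target class before vs. after adding the edge.
    """
    num_edges = graph.edge_index.size(1)
    selected = torch.zeros(num_edges, dtype=torch.bool)  # current subgraph
    rewards = []
    for _ in range(budget):
        # Score candidates given the edges chosen so far, so that
        # dependencies (coalition effects) among edges are captured.
        scores = policy(graph, selected)                      # [num_edges]
        scores = scores.masked_fill(selected, float('-inf'))  # exclude chosen
        action = torch.argmax(scores)  # or sample, for exploration in training

        prob_before = gnn(graph, edge_mask=selected)[target_class]
        selected[action] = True
        prob_after = gnn(graph, edge_mask=selected)[target_class]

        # Reward = causal effect of the new edge on the prediction,
        # conditioned on the coalition of previously added edges.
        rewards.append(prob_after - prob_before)
    return selected, rewards
```

In training, the per-step rewards would drive policy-gradient updates of the policy network, so that edges forming a strong coalition are preferred over edges that only look salient in isolation.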