Explainability of Graph Neural Networks (GNNs) is critical to various GNN applications but remains an open challenge. A convincing explanation should be both necessary and sufficient. However, existing GNN explanation approaches focus on only one of the two aspects, necessity or sufficiency, or a trade-off between them. To search for the most necessary and sufficient explanation, the Probability of Necessity and Sufficiency (PNS) can be applied, since it mathematically quantifies both the necessity and the sufficiency of an explanation. Nevertheless, the difficulty of obtaining PNS, due to non-monotonicity and the challenge of counterfactual estimation, limits its wide use. To address the non-identifiability of PNS, we resort to a lower bound of PNS that can be optimized via counterfactual estimation, and propose Necessary and Sufficient Explanation for GNN (NSEG), which optimizes that lower bound. Specifically, we employ nearest-neighbor matching, rather than random perturbation, to generate counterfactual samples for node features. In particular, NSEG combines edges and node features to generate an explanation, where the common edge-only explanation is a special case of the combined explanation. An empirical study shows that NSEG outperforms a series of state-of-the-art methods in generating the most necessary and sufficient explanations.