Graph Neural Networks (GNNs) have achieved state-of-the-art performance in various high-stakes prediction tasks, but multiple layers of aggregation over irregular graph structures make GNNs hard to interpret. Prior methods either use simpler subgraphs to simulate the full model or use counterfactuals to identify the causes of a prediction. These two families of approaches pursue two distinct objectives, "simulatability" and "counterfactual relevance", but it is unclear how the objectives jointly influence human understanding of an explanation. We design a user study to investigate such joint effects and use the findings to design a multi-objective optimization (MOO) algorithm that finds Pareto-optimal explanations well balanced between simulatability and counterfactual relevance. Since the target model can be any GNN variant and may not be accessible due to privacy concerns, we design a search algorithm that uses only zeroth-order information, without access to the architecture or parameters of the target model. Quantitative experiments on nine graphs from four applications demonstrate that the Pareto-efficient explanations dominate single-objective baselines that use first-order continuous optimization or discrete combinatorial search. The explanations are further evaluated for robustness and sensitivity, showing that they reveal convincing causes while remaining cautious about possible confounders. The diverse dominating counterfactuals can certify the feasibility of algorithmic recourse, which can potentially promote algorithmic fairness when humans participate in decision-making with GNNs.
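To make the black-box setting concrete, the sketch below shows a minimal zeroth-order search over binary edge masks that maintains a Pareto front of two objectives, simulatability and counterfactual relevance. All names here (`model_predict`, `objectives`, the flip probability, the toy loss definitions) are illustrative assumptions, not the paper's actual algorithm: the target model is queried only through its predictions, with no gradients or parameters.

```python
import random

def pareto_front(points):
    """Indices of non-dominated points; both objectives are minimized."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and q != p
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front

def objectives(model_predict, edges, mask, x):
    """Two illustrative losses for an explanation subgraph given by `mask`.

    - simulatability loss: the kept subgraph should reproduce the
      full-graph prediction;
    - counterfactual loss: removing the explanation should change
      the prediction as much as possible.
    Only black-box prediction queries are used (zeroth-order access).
    """
    keep = [e for e, m in zip(edges, mask) if m]
    drop = [e for e, m in zip(edges, mask) if not m]
    full = model_predict(edges, x)
    f_sim = abs(full - model_predict(keep, x))
    f_cf = 1.0 - abs(full - model_predict(drop, x))
    return (f_sim, f_cf)

def zeroth_order_pareto(model_predict, edges, x, iters=200, seed=0):
    """Random local search over edge masks; returns the Pareto archive."""
    rng = random.Random(seed)
    archive = []
    mask = [1] * len(edges)
    for _ in range(iters):
        # Flip each edge with small probability to propose a neighbor.
        cand = [m ^ (rng.random() < 0.2) for m in mask]
        archive.append((cand, objectives(model_predict, edges, cand, x)))
        if rng.random() < 0.5:  # randomized acceptance keeps the walk exploring
            mask = cand
    pts = [obj for _, obj in archive]
    return [archive[i] for i in pareto_front(pts)]
```

A real instantiation would replace the random flips with a smarter zeroth-order estimator and `model_predict` with queries to the deployed GNN; the structure (query-only evaluation, an archive, non-dominated filtering) is the part that carries over.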