As Graph Neural Networks (GNNs) are widely adopted in digital pathology, there is increasing attention to developing explanation models (explainers) of GNNs for improved transparency in clinical decisions. Existing explainers discover an explanatory subgraph relevant to the prediction. However, such a subgraph is insufficient to reveal all the critical biological substructures for the prediction, because the prediction will remain unchanged after that subgraph is removed. Hence, an explanatory subgraph should be not only necessary for the prediction, but also sufficient to uncover the most predictive regions for the explanation. Such an explanation requires measuring the information transferred from different input subgraphs to the predictive output, which we define as information flow. In this work, we address these key challenges and propose IFEXPLAINER, which generates a necessary and sufficient explanation for GNNs. To evaluate the information flow within the GNN's prediction, we first propose a novel notion of predictiveness, named $f$-information, which is directional and incorporates the realistic capacity of the GNN model. Based on this notion, IFEXPLAINER generates the explanatory subgraph with maximal information flow to the prediction. Meanwhile, it minimizes the information flow from the input to the predictive result after the explanation is removed. Thus, the produced explanation is necessarily important to the prediction and sufficient to reveal the most crucial substructures. We evaluate IFEXPLAINER by interpreting the GNN's predictions for breast cancer subtyping. Experimental results on the BRACS dataset show the superior performance of the proposed method.
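As a rough illustration of this objective (a schematic sketch, not the paper's exact formulation; the symbols $G$, $G_s$, $\hat{Y}$, $I_f$, and $\lambda$ are placeholders introduced here), the explanation can be thought of as the subgraph that carries maximal information flow to the prediction while the remaining graph carries minimal residual flow:

$$ G_s^{\star} \;=\; \arg\max_{G_s \subseteq G}\; \underbrace{I_f\big(G_s \rightarrow \hat{Y}\big)}_{\text{flow from explanation}} \;-\; \lambda\, \underbrace{I_f\big(G \setminus G_s \rightarrow \hat{Y}\big)}_{\text{residual flow after removal}} $$

Here $I_f(\cdot \rightarrow \hat{Y})$ denotes the directional $f$-information flow to the GNN's prediction $\hat{Y}$, and $\lambda$ is a trade-off weight balancing the two terms.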