Graph neural networks (GNNs) have achieved significant gains in prediction performance on graph data. At the same time, the predictions made by these models are often hard to interpret. Many efforts have therefore been made to explain the prediction mechanisms of these models, through methods such as GNNExplainer, XGNN, and PGExplainer. Although such works present systematic frameworks for interpreting GNNs, a holistic review of explainable GNNs is still lacking. In this survey, we present a comprehensive review of explainability techniques developed for GNNs. We focus on explainable graph neural networks and categorize them by the explanation methods they employ. We further describe common performance metrics for GNN explanations and point out several future research directions.