With the increasing use of Graph Neural Networks (GNNs) in critical real-world applications, several post hoc explanation methods have been proposed to understand their predictions. However, no prior work generates explanations on the fly during model training and uses them to improve the expressive power of the underlying GNN models. In this work, we introduce EXPASS (EXplainable message PASSing), a novel explanation-directed neural message passing framework for GNNs, which aggregates only embeddings from nodes and edges identified as important by a GNN explanation method. EXPASS can be used with any existing GNN architecture and subgraph-optimizing explainer to learn accurate graph embeddings. We theoretically show that EXPASS alleviates the oversmoothing problem in GNNs by slowing the layer-wise loss of Dirichlet energy, and that the difference between the embeddings learned by vanilla message passing and by EXPASS can be upper bounded by the difference of their respective model weights. Our empirical results show that graph embeddings learned using EXPASS improve predictive performance and alleviate the oversmoothing problem of GNNs, opening up new frontiers in graph machine learning for developing explanation-based training frameworks.
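To make the core idea of explanation-directed message passing concrete, the sketch below shows a minimal PyTorch layer that weights each incoming message by a per-edge importance score supplied by an explainer. This is an illustrative assumption about the mechanism described in the abstract, not the authors' implementation; the class name, the `edge_importance` input, and the sum aggregation are all hypothetical choices.

```python
import torch
import torch.nn as nn


class ExplanationDirectedLayer(nn.Module):
    """Minimal sketch of a message-passing layer that scales incoming
    messages by per-edge importance scores from an explainer (hypothetical)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, edge_index, edge_importance):
        # x:               [num_nodes, in_dim] node features
        # edge_index:      [2, num_edges] (source, target) node index pairs
        # edge_importance: [num_edges] scores in [0, 1] from an explainer
        src, dst = edge_index
        # Weight each message by its explanation importance score
        messages = x[src] * edge_importance.unsqueeze(-1)
        # Sum-aggregate the weighted messages at their target nodes
        aggregated = torch.zeros_like(x)
        aggregated.index_add_(0, dst, messages)
        return torch.relu(self.linear(aggregated))


# Toy usage: 3 nodes, 3 directed edges; importance scores would come from
# a subgraph-optimizing explainer in an actual training loop.
x = torch.randn(3, 8)
edge_index = torch.tensor([[0, 1, 2], [1, 2, 0]])
edge_importance = torch.tensor([0.9, 0.1, 0.7])
layer = ExplanationDirectedLayer(8, 8)
out = layer(x, edge_index, edge_importance)
```

In this sketch, edges with low importance contribute little to the aggregated embedding, which is one simple way to realize "aggregating only embeddings from nodes and edges identified as important."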