Training deep graph neural networks (GNNs) is challenging, as the performance of GNNs may degrade as the number of hidden message-passing layers increases. The literature has focused on over-smoothing and under-reaching to explain the performance deterioration of deep GNNs. In this paper, we propose a new explanation for this phenomenon, mis-simplification, that is, mistakenly simplifying graphs by preventing self-loops and forcing edges to be unweighted. We show that such simplification can reduce the potential of message-passing layers to capture the structural information of graphs. In view of this, we propose a new framework, the edge enhanced graph neural network (EEGNN). EEGNN uses the structural information extracted from the proposed Dirichlet mixture Poisson graph model, a Bayesian nonparametric model for graphs, to improve the performance of various deep message-passing GNNs. Experiments over different datasets show that our method achieves considerable performance improvements over baselines.
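To make the mis-simplification idea concrete, below is a minimal illustrative sketch, not the paper's method: it contrasts one mean-aggregation message-passing step on a weighted graph with self-loops against the same step on a "simplified" version where self-loops are dropped and edges are binarized. The adjacency matrices, the `propagate` helper, and the toy feature vector are all hypothetical, chosen only to show that the simplified graph yields different (information-poorer) aggregates.

```python
import numpy as np

# Hypothetical toy graph: weighted adjacency with self-loops
# (e.g., edge multiplicities collapsed into integer weights).
A_full = np.array([
    [2.0, 3.0, 0.0],   # node 0 has a self-loop of weight 2
    [3.0, 0.0, 1.0],
    [0.0, 1.0, 1.0],   # node 2 has a self-loop of weight 1
])

# "Mis-simplified" graph: edges forced to be unweighted and
# self-loops removed, as in common preprocessing pipelines.
A_simple = (A_full > 0).astype(float)
np.fill_diagonal(A_simple, 0.0)

def propagate(A, X):
    """One mean-aggregation message-passing step: row-normalize A, then A @ X."""
    deg = A.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0  # guard against isolated nodes
    return (A / deg) @ X

X = np.array([[1.0], [0.0], [0.5]])    # toy node features
print(propagate(A_full, X).ravel())    # weights and self-loops shape the output
print(propagate(A_simple, X).ravel())  # structural detail is lost
```

In this sketch the two propagation steps produce different node representations, because binarizing edges and deleting self-loops erases exactly the structural signal (edge weights and self-connections) that mis-simplification refers to.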