Graph neural networks (GNNs) integrate deep architectures and topological structure modeling in an effective way. However, the performance of existing GNNs degrades significantly as more layers are stacked, due to the over-smoothing issue: node embeddings tend to converge to similar vectors when GNNs recursively aggregate the representations of neighbors. Several methods have recently been explored to enable deep GNNs, but they are adapted either from techniques in convolutional neural networks or from heuristic strategies; there is no generalizable theoretical principle to guide the design of deep GNNs. To this end, we analyze the bottleneck of deep GNNs by leveraging the Dirichlet energy of node embeddings, and propose a generalizable principle to guide the training of deep GNNs. Based on this principle, we design a novel deep GNN framework, EGNN, which enforces lower and upper bounds on the Dirichlet energy at each layer to avoid over-smoothing. Experimental results demonstrate that EGNN achieves state-of-the-art performance using deep layers.
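To make the central quantity concrete, below is a minimal sketch (not the paper's implementation) of the Dirichlet energy of node embeddings, computed as tr(XᵀLX) with a symmetrically normalized graph Laplacian. It assumes NumPy, a symmetric adjacency matrix without self-loops, and a dense representation; the exact normalization used by EGNN may differ.

```python
import numpy as np

def dirichlet_energy(X, A):
    """Dirichlet energy E(X) = tr(X^T L X) of node embeddings X
    under a symmetrically normalized graph Laplacian L.

    X: (n, d) node embedding matrix
    A: (n, n) symmetric adjacency matrix without self-loops
    """
    d = A.sum(axis=1)                                   # node degrees
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))    # guard isolated nodes
    L = np.eye(A.shape[0]) - (d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :])
    return float(np.trace(X.T @ L @ X))
```

As the embeddings of neighboring nodes converge, this quantity approaches zero, which is the signature of over-smoothing that EGNN's layer-wise lower and upper bounds are designed to prevent.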