Graph Neural Networks (GNNs) have achieved significant success in learning better representations by performing feature propagation and transformation iteratively to leverage neighborhood information. Nevertheless, iterative propagation forces the information of higher-layer neighborhoods to be transported through, and fused with, that of lower-layer neighborhoods, which unavoidably results in feature smoothing between neighborhoods in different layers and can thus compromise performance, especially on heterophily networks. Furthermore, most deep GNNs only recognize the importance of higher-layer neighborhoods but have yet to fully explore the importance of multi-hop dependency within the context of different-layer neighborhoods in learning better representations. In this work, we first theoretically analyze the feature smoothing between neighborhoods in different layers and empirically demonstrate the variance of the homophily level across neighborhoods at different layers. Motivated by these analyses, we further propose a tree decomposition method to disentangle neighborhoods in different layers and thereby alleviate feature smoothing among these layers. Moreover, we characterize the multi-hop dependency via graph diffusion within our tree decomposition formulation to construct the Tree Decomposed Graph Neural Network (TDGNN), which can flexibly incorporate information from large receptive fields and aggregate this information using the multi-hop dependency. Comprehensive experiments demonstrate the superior performance of TDGNN on both homophily and heterophily networks under a variety of node classification settings. Extensive parameter analysis highlights the ability of TDGNN to prevent over-smoothing and to incorporate features from shallow layers with deeper multi-hop dependencies, which provides new insights towards deeper graph neural networks. Code of TDGNN: http://github.com/YuWVandy/TDGNN
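To make the propagation-and-aggregation idea concrete, the following is a minimal PyTorch sketch of the scheme the abstract describes: features are propagated separately over disentangled k-hop neighborhoods (so different layers' neighborhoods are not mixed) and then aggregated with multi-hop dependency weights. This is an illustration only, not the released implementation; the module name `TDGNNSketch`, the argument `hop_adjs`, and the weights `theta` are assumptions introduced here for clarity.

import torch
import torch.nn as nn


class TDGNNSketch(nn.Module):
    def __init__(self, in_dim, num_classes, num_hops):
        super().__init__()
        # Per-hop dependency weights, analogous to graph-diffusion coefficients.
        self.theta = nn.Parameter(torch.ones(num_hops + 1) / (num_hops + 1))
        # Feature transformation applied after aggregation (propagation and
        # transformation are decoupled, as in many deep GNN variants).
        self.mlp = nn.Linear(in_dim, num_classes)

    def forward(self, x, hop_adjs):
        # `hop_adjs[k-1]` is assumed to be a normalized sparse adjacency matrix
        # connecting each node to its disentangled k-hop neighborhood, so that
        # neighborhoods in different layers are kept separate during propagation.
        outs = [self.theta[0] * x]  # hop 0: the node's own features
        for k, adj_k in enumerate(hop_adjs, start=1):
            outs.append(self.theta[k] * torch.sparse.mm(adj_k, x))
        # Aggregate information from all hops using the multi-hop dependency weights.
        h = torch.stack(outs, dim=0).sum(dim=0)
        return self.mlp(h)

In this sketch, fixing `theta` to uniform values corresponds to a simple summation over hops, while learning `theta` lets the model weight shallow and deep neighborhoods differently, which is the role the abstract attributes to the graph-diffusion-based multi-hop dependency.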