Existing Graph Neural Networks (GNNs) follow the message-passing mechanism that conducts information interaction among nodes iteratively. While considerable progress has been made, such node interaction paradigms still have the following limitations. First, the scalability limitation precludes the broad application of GNNs in large-scale industrial settings, since node interaction among rapidly expanding neighbors incurs high computation and memory costs. Second, the over-smoothing problem restricts the discrimination ability of nodes, i.e., node representations of different classes become indistinguishable after repeated node interactions. In this work, we propose a novel hop interaction paradigm to address these limitations simultaneously. The core idea is to convert the interaction target from nodes to pre-processed multi-hop features inside each node. We design a simple yet effective HopGNN framework that can easily utilize existing GNNs to achieve hop interaction. Furthermore, we propose a multi-task learning strategy with a self-supervised learning objective to enhance HopGNN. We conduct extensive experiments on 12 benchmark datasets covering a wide range of graph domains, scales, and smoothness levels. Experimental results show that our methods achieve superior performance while maintaining high scalability and efficiency. The code is available at https://github.com/JC-202/HopGNN.
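To make the hop-interaction idea concrete, the sketch below shows one plausible reading under stated assumptions: multi-hop features are pre-computed once via repeated propagation with a normalized adjacency (as in SGC-style pre-processing), and a small interaction module then mixes the K+1 hop features inside each node, so no neighbor fetching is needed at training time. The function and class names, and the choice of self-attention as the interaction module, are illustrative assumptions rather than the authors' exact architecture; see the official repository for the real implementation.

```python
# Minimal sketch of the hop-interaction paradigm (assumed details, not the authors' exact code).
# Step 1: pre-process multi-hop features once, outside training.
# Step 2: interact among the K+1 hop features *inside* each node, then classify.
import torch
import torch.nn as nn

def precompute_hops(adj_norm: torch.Tensor, x: torch.Tensor, num_hops: int) -> torch.Tensor:
    """Return [X, AX, A^2 X, ...] stacked as (N, K+1, d); adj_norm is the normalized adjacency."""
    hops = [x]
    for _ in range(num_hops):
        hops.append(adj_norm @ hops[-1])
    return torch.stack(hops, dim=1)              # (N, K+1, d)

class HopInteraction(nn.Module):
    """Toy interaction module: self-attention over a node's own hop features."""
    def __init__(self, dim: int, num_classes: int, heads: int = 2):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, hop_feats: torch.Tensor) -> torch.Tensor:  # (N, K+1, d)
        mixed, _ = self.attn(hop_feats, hop_feats, hop_feats)    # hops attend to each other
        fused = mixed.mean(dim=1)                                 # fuse hops per node
        return self.classifier(fused)

# Usage: hop features are fixed after pre-processing, so mini-batching needs no graph sampling.
N, d, K, C = 1000, 64, 3, 7
adj_norm = torch.eye(N)                          # stand-in for D^-1/2 (A + I) D^-1/2
x = torch.randn(N, d)
hop_feats = precompute_hops(adj_norm, x, K)
logits = HopInteraction(d, C)(hop_feats)         # (N, C)
```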