Existing Graph Neural Networks (GNNs) follow the message-passing mechanism, which conducts information interaction among nodes iteratively. While considerable progress has been made, this node-interaction paradigm still has the following limitations. First, the scalability limitation precludes the broad application of GNNs in large-scale industrial settings, since node interaction among rapidly expanding neighborhoods incurs high computation and memory costs. Second, the over-smoothing problem restricts the discrimination ability of nodes, i.e., node representations of different classes become indistinguishable after repeated node interactions. In this work, we propose a novel hop-interaction paradigm to address these limitations simultaneously. The core idea is to convert the interaction target from neighboring nodes to pre-processed multi-hop features inside each node. We design a simple yet effective HopGNN framework that can easily utilize existing GNNs to achieve hop interaction. Furthermore, we propose a multi-task learning strategy with a self-supervised learning objective to enhance HopGNN. We conduct extensive experiments on 12 benchmark datasets covering a wide range of domains, scales, and graph smoothness. Experimental results show that our methods achieve superior performance while maintaining high scalability and efficiency. The code is at https://github.com/JC-202/HopGNN.
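The two-stage idea above — precompute multi-hop features offline, then let a small network interact among a node's own hops instead of among its neighbors — can be sketched as follows. This is a minimal illustration in numpy, not the actual HopGNN implementation: the dense adjacency, the function names, and the toy self-attention interaction module are all assumptions for clarity.

```python
import numpy as np

def precompute_hop_features(adj, x, num_hops):
    """Pre-process multi-hop features: hop k is (D^-1 A)^k X.
    Done once, offline -- no neighbor fetching during training,
    which is what removes the scalability bottleneck."""
    # Row-normalize the (dense, illustrative) adjacency matrix.
    deg = adj.sum(axis=1, keepdims=True)
    a_norm = adj / np.maximum(deg, 1.0)
    hops = [x]
    for _ in range(num_hops):
        hops.append(a_norm @ hops[-1])
    # Shape: (num_nodes, num_hops + 1, feat_dim) -- a sequence of
    # hop features carried inside each node.
    return np.stack(hops, axis=1)

def hop_interaction(hop_feats, w_q, w_k):
    """Toy self-attention among a node's own hop features, standing in
    for the GNN-based interaction module of the paper."""
    q = hop_feats @ w_q                                   # queries per hop
    k = hop_feats @ w_k                                   # keys per hop
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)          # stable softmax
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    mixed = attn @ hop_feats                              # hops interact, not neighbors
    return mixed.mean(axis=1)                             # fuse hops into one embedding
```

Because the interaction happens over a fixed-length hop sequence per node, each node is processed independently after pre-processing, and deeper receptive fields do not force repeated neighbor aggregation.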