Graph neural networks (GNNs) have shown great advantages in many graph-based learning tasks, but they often fail to predict accurately for tasks defined on sets of nodes, such as link and motif prediction. Many recent works address this problem with random node features or node distance features; however, these approaches suffer from slow convergence, inaccurate prediction, or high complexity. In this work, we revisit GNNs that use positional features of nodes given by positional encoding (PE) techniques such as Laplacian Eigenmap and DeepWalk. GNNs with PE are often criticized because they are neither generalizable to unseen graphs (the inductive setting) nor stable. Here, we study these issues in a principled way and propose a provable solution: a class of GNN layers, termed PEG, backed by rigorous mathematical analysis. PEG uses separate channels to update the original node features and the positional features. It simultaneously imposes permutation equivariance with respect to the original node features and rotation equivariance with respect to the positional features. Extensive link prediction experiments over 8 real-world networks demonstrate the advantages of PEG in generalization and scalability.
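To make the separate-channel idea concrete, here is a minimal NumPy sketch (an illustration under simplifying assumptions, not the paper's exact parameterization): node features X are updated by message passing whose edge weights are gated by pairwise distances between positional features Z, while Z itself is passed through untouched. Because the gate depends only on distances, rotating Z by any orthogonal matrix leaves the output unchanged, which is the rotation-equivariance property the abstract describes. The gate function `exp(-dist)` and the single weight matrix `W` are illustrative choices.

```python
import numpy as np

def peg_layer(A, X, Z, W):
    """One PEG-style layer (illustrative sketch, not the exact published layer).

    A: (n, n) adjacency matrix
    X: (n, d) original node features   -> updated by message passing
    Z: (n, p) positional features      -> passed through unchanged
    W: (d, d_out) weight matrix (hypothetical parameterization)
    """
    # Edge weights depend only on pairwise distances between positional
    # features, so any rotation Q of Z (i.e. Z @ Q) leaves them unchanged.
    diff = Z[:, None, :] - Z[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)          # (n, n) pairwise distances
    gate = np.exp(-dist)                          # distance-based gate (assumed form)
    X_new = np.maximum((A * gate) @ X @ W, 0.0)   # ReLU message passing
    return X_new, Z                               # Z is never mixed into X

# Check rotation equivariance: rotating Z does not change the output.
n, d, p = 5, 4, 2
rng = np.random.default_rng(0)
A = (rng.random((n, n)) < 0.5).astype(float)
A = np.maximum(A, A.T)                            # symmetrize adjacency
X = rng.standard_normal((n, d))
Z = rng.standard_normal((n, p))
W = rng.standard_normal((d, d))

theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # 2-D rotation matrix

out1, _ = peg_layer(A, X, Z, W)
out2, _ = peg_layer(A, X, Z @ Q, W)
```

Since `Q` is orthogonal, `||(Z_u - Z_v) @ Q|| = ||Z_u - Z_v||`, so `out1` and `out2` coincide; this is what makes the layer stable under the sign/basis ambiguity of eigenvector-based positional encodings.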