Several recent works use positional encodings to extend the receptive fields of graph neural network (GNN) layers equipped with attention mechanisms. These techniques, however, extend receptive fields to the complete graph, at substantial computational cost and at the risk of changing the inductive biases of conventional GNNs, or they require complex architectural adjustments. As a conservative alternative, we use positional encodings to expand receptive fields to $r$-hop neighborhoods. More specifically, our method augments the input graph with additional nodes/edges and uses positional encodings as node and/or edge features. We thus modify graphs before feeding them to a downstream GNN model, instead of modifying the model itself. This makes our method model-agnostic, i.e., compatible with any existing GNN architecture. We also provide examples of positional encodings that are lossless, with a one-to-one map between the original and the modified graphs. We demonstrate that extending receptive fields via positional encodings and a virtual fully-connected node significantly improves GNN performance and alleviates over-squashing using small $r$. We obtain improvements on a variety of models and datasets, and reach state-of-the-art performance using traditional GNNs or graph Transformers.
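To make the graph-modification step concrete, the following is a minimal sketch in Python using networkx. It adds shortcut edges between all node pairs within $r$ hops, records each pair's hop distance as an edge-level positional encoding, and attaches a virtual fully-connected node. The function name `expand_receptive_field`, the `hop` edge attribute, the sentinel values, and the networkx representation are illustrative assumptions, not the paper's actual implementation.

```python
import networkx as nx

def expand_receptive_field(G: nx.Graph, r: int) -> nx.Graph:
    """Augment G with edges between all node pairs within r hops,
    storing hop distance as a positional encoding on each edge, and
    add a virtual node connected to every original node.

    A minimal sketch of the abstract's graph-rewiring idea; names
    and attribute conventions are assumptions for illustration.
    """
    H = G.copy()
    # Original edges get hop distance 1.
    nx.set_edge_attributes(H, 1, "hop")
    for u in G.nodes:
        # BFS up to depth r yields the r-hop neighborhood of u.
        lengths = nx.single_source_shortest_path_length(G, u, cutoff=r)
        for v, d in lengths.items():
            if d >= 2:  # skip u itself (d=0) and existing 1-hop edges
                # The hop distance d acts as a lossless positional
                # encoding: it distinguishes original edges (hop=1)
                # from added shortcuts (hop>1), so the original graph
                # can be recovered from the modified one.
                H.add_edge(u, v, hop=d)
    # Virtual fully-connected node, marked with a sentinel hop of 0.
    virtual = "virtual"
    H.add_node(virtual)
    for v in G.nodes:
        H.add_edge(virtual, v, hop=0)
    return H
```

Because the augmentation happens before the GNN sees the graph, the output `H` can be converted to the input format of any existing architecture, which is what makes the approach model-agnostic.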