Graph neural networks (GNNs) have become the standard learning architectures for graphs. GNNs have been applied to numerous domains ranging from quantum chemistry and recommender systems to knowledge graphs and natural language processing. A major issue with arbitrary graphs is the absence of canonical positional information for nodes, which decreases the representation power of GNNs to distinguish, e.g., isomorphic nodes and other graph symmetries. An approach to tackle this issue is to introduce a Positional Encoding (PE) of nodes and inject it into the input layer, as in Transformers. Possible graph PE are the Laplacian eigenvectors. In this work, we propose to decouple structural and positional representations to make it easy for the network to learn these two essential properties. We introduce a novel generic architecture which we call LSPE (Learnable Structural and Positional Encodings). We investigate several sparse and fully-connected (Transformer-like) GNNs, and observe a performance increase on molecular datasets, from 2.87% up to 64.14%, when considering learnable PE for both GNN classes.
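As a concrete illustration of the Laplacian-eigenvector PE mentioned above, the following is a minimal sketch (not code from the paper; the helper name `laplacian_pe` and the use of the symmetrically normalized Laplacian are assumptions) that extracts the k smallest non-trivial eigenvectors of a graph's Laplacian as node positional encodings.

```python
import numpy as np

def laplacian_pe(adj: np.ndarray, k: int) -> np.ndarray:
    """Return k-dimensional Laplacian-eigenvector positional encodings.

    adj: dense (n, n) adjacency matrix of an undirected graph.
    k:   number of non-trivial eigenvectors to keep per node.
    """
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    # Symmetrically normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    lap = np.eye(adj.shape[0]) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    # eigh returns eigenvalues in ascending order for symmetric matrices
    eigvals, eigvecs = np.linalg.eigh(lap)
    # Drop the trivial constant eigenvector (eigenvalue ~ 0), keep the next k.
    # Note: eigenvector signs are arbitrary, a known ambiguity of this PE.
    return eigvecs[:, 1:k + 1]

# Usage example: 2-dimensional PE for a 4-cycle graph
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
pe = laplacian_pe(adj, k=2)
print(pe.shape)  # (4, 2)
```

In practice such precomputed eigenvectors would be concatenated with (or, in the decoupled LSPE setting, kept separate from) the node features fed to the GNN; the sketch only shows how the PE itself can be obtained.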