Graph Neural Networks~(GNNs) are effective tools for graph representation learning. Most GNNs rely on a recursive neighborhood aggregation scheme known as message passing, so their theoretical expressive power is bounded by the first-order Weisfeiler-Lehman test (1-WL). Motivated by the success of retrieval-based models and off-the-shelf high-performance retrieval systems, we propose a non-parametric, model-agnostic scheme called GraphRetrieval to boost existing GNN models. In GraphRetrieval, training graphs similar to the input graph are retrieved together with their ground-truth labels and jointly used with the input graph representation to complete various graph property prediction tasks. In particular, to effectively "absorb" useful information from the retrieved graphs and "ignore" possible noise, we introduce a self-attention-based adapter that explicitly learns the interaction between an input graph and its retrieved similar graphs. Experiments with three classic GNN models on 12 different datasets demonstrate that GraphRetrieval brings substantial improvements to existing GNN models without compromising model size or prediction efficiency. Our work is also the first to validate the feasibility and effectiveness of retrieval-enhanced graph neural networks.
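The retrieval-then-fuse idea described above can be illustrated with a minimal sketch. The function below is a hypothetical, simplified stand-in for the paper's adapter (the names `retrieval_adapter` and `softmax` and all shapes are our assumptions, not the authors' API): it attends from the input graph's embedding over the embeddings of retrieved training graphs and returns an attention-weighted combination of their ground-truth labels, which could then be fused with the GNN's own prediction.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def retrieval_adapter(h_query, retrieved_embs, retrieved_labels, temperature=1.0):
    """Hypothetical sketch of a retrieval-enhancement adapter.

    h_query:          (d,)  embedding of the input graph
    retrieved_embs:   (k, d) embeddings of the k retrieved training graphs
    retrieved_labels: (k,)  ground-truth labels of the retrieved graphs
    Returns an attention-weighted label estimate; similar graphs get
    higher weight, dissimilar (noisy) retrievals are down-weighted.
    """
    # scaled dot-product similarity, as in standard self-attention
    scores = retrieved_embs @ h_query / (np.sqrt(h_query.size) * temperature)
    weights = softmax(scores)
    return float(weights @ retrieved_labels)
```

In a full system the query and retrieved embeddings would come from the backbone GNN, and the adapter's output would be combined with the model's own prediction through learned parameters; this sketch only shows the attention mechanism that "absorbs" useful retrieved information.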