Graph Neural Networks (GNNs) are effective tools for graph representation learning. Most GNNs rely on a recursive neighborhood aggregation scheme known as message passing. In this paper, motivated by the success of retrieval-based models, we propose a non-parametric scheme called GraphRetrieval, in which training graphs similar to the input graph are retrieved, together with their ground-truth labels, and jointly utilized with the input graph representation to complete various graph-based predictive tasks. In particular, we take a well-trained model with its parameters fixed and add an adapter based on self-attention, with only a few trainable parameters per task, to explicitly learn the interaction between an input graph and its retrieved similar graphs. Our experiments on 12 datasets covering both classification and regression tasks show that GraphRetrieval achieves substantial improvements on all twelve datasets over three strong GNN baseline models. Our work demonstrates that GraphRetrieval is a promising augmentation to message passing.
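The retrieve-and-fuse idea in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: all names (`retrieve_top_k`, `SelfAttentionAdapter`) and design details (cosine similarity for retrieval, a single scaled dot-product attention layer whose three projection matrices are the only trainable parameters) are assumptions made for the sake of the example. A frozen encoder is presumed to have already produced fixed embeddings for the training graphs and the input graph.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def retrieve_top_k(query, bank, k=3):
    """Indices of the k stored training-graph embeddings most similar
    to `query` under cosine similarity (retrieval metric is assumed)."""
    sims = bank @ query / (np.linalg.norm(bank, axis=1) * np.linalg.norm(query) + 1e-8)
    return np.argsort(-sims)[:k]

class SelfAttentionAdapter:
    """Hypothetical adapter: one self-attention layer over the sequence
    [input graph; retrieved graphs]. The three projection matrices are
    the only trainable parameters; the backbone encoder stays frozen."""
    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.Wq = rng.standard_normal((dim, dim)) / np.sqrt(dim)
        self.Wk = rng.standard_normal((dim, dim)) / np.sqrt(dim)
        self.Wv = rng.standard_normal((dim, dim)) / np.sqrt(dim)

    def __call__(self, x, retrieved):
        seq = np.vstack([x[None, :], retrieved])           # (k+1, dim)
        q, k_, v = seq @ self.Wq, seq @ self.Wk, seq @ self.Wv
        attn = softmax(q @ k_.T / np.sqrt(seq.shape[1]))   # (k+1, k+1)
        return (attn @ v)[0]                               # fused input representation

# Toy usage: 5 stored training-graph embeddings, one query graph.
rng = np.random.default_rng(1)
bank = rng.standard_normal((5, 8))
query = bank[2].copy()                    # query identical to stored graph 2
idx = retrieve_top_k(query, bank, k=3)    # graph 2 is retrieved first
fused = SelfAttentionAdapter(dim=8)(query, bank[idx])
```

In a full pipeline, the retrieved graphs' ground-truth labels would also be fed to the adapter (e.g. embedded and concatenated with `bank[idx]`), and `fused` would go to a task-specific prediction head; those steps are omitted here for brevity.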