Graph Neural Networks (GNNs) have become the state-of-the-art method for many applications on graph-structured data. GNNs are a framework for graph representation learning, where a model learns to generate low-dimensional node embeddings that encapsulate structural and feature-related information. GNNs are usually trained in an end-to-end fashion, leading to highly specialized node embeddings. While this approach achieves great results in the single-task setting, generating node embeddings that can be used to perform multiple tasks (with performance comparable to single-task models) is still an open problem. We propose a novel training strategy for graph representation learning, based on meta-learning, which allows the training of a GNN model capable of producing multi-task node embeddings. Our method avoids the difficulties arising when learning to perform multiple tasks concurrently by, instead, learning to quickly (i.e., with a few steps of gradient descent) adapt to each task individually. We show that the embeddings produced by a model trained with our method can be used to perform multiple tasks with comparable or, surprisingly, even higher performance than both single-task and multi-task end-to-end models.
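The core idea of learning an initialization that adapts to each task in a few gradient steps can be illustrated with a minimal first-order (Reptile-style) meta-learning loop on a toy regression family. Everything below is an illustrative sketch, not the paper's actual GNN setup: the scalar model, the task distribution, and all hyperparameters are assumptions chosen only to make the mechanism visible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical task family: 1-D linear regression y = a * x,
# where each "task" has its own slope a.
def task_batch(a, n=20):
    x = rng.uniform(-1, 1, n)
    return x, a * x

def adapt(theta, x, y, inner_lr=0.5, steps=10):
    # A few steps of gradient descent on squared error for a single task:
    # this mirrors the "quick adaptation" phase of the meta-learned model.
    for _ in range(steps):
        grad = 2 * np.mean((theta * x - y) * x)
        theta = theta - inner_lr * grad
    return theta

def meta_train(meta_lr=0.05, meta_steps=200):
    theta = 0.0
    for _ in range(meta_steps):
        a = rng.uniform(-2, 2)        # sample a task
        x, y = task_batch(a)
        adapted = adapt(theta, x, y)
        # First-order meta-update: move the initialization toward the
        # adapted parameters instead of fitting all tasks concurrently.
        theta = theta + meta_lr * (adapted - theta)
    return theta

theta0 = meta_train()

# After meta-training, adapting to an unseen task takes only a few steps.
x, y = task_batch(1.5)
adapted = adapt(theta0, x, y)
```

The meta-trained `theta0` is not specialized to any single task; instead, a handful of inner gradient steps suffice to reach good task-specific parameters, which is the property the abstract claims for the multi-task node embeddings.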