Graph neural networks (GNNs) are widely used to learn powerful representations of graph-structured data. Recent work demonstrates that transferring knowledge from self-supervised tasks to downstream tasks can further improve graph representations. However, there is an inherent gap between self-supervised tasks and downstream tasks in terms of optimization objectives and training data. Conventional pre-training methods may not transfer knowledge effectively because they make no adaptation to the downstream task. To address this problem, we propose a new transfer learning paradigm for GNNs that effectively leverages self-supervised tasks as auxiliary tasks to help the target task. Our method adaptively selects and combines different auxiliary tasks with the target task in the fine-tuning stage. We design an adaptive auxiliary loss weighting model that learns the weights of auxiliary tasks by quantifying the consistency between the auxiliary tasks and the target task, and we train this weighting model through meta-learning. Our method can be applied to various transfer learning approaches: it performs well not only in multi-task learning but also in pre-training and fine-tuning. Comprehensive experiments on multiple downstream tasks demonstrate that the proposed method effectively combines auxiliary tasks with the target task and significantly improves performance compared to state-of-the-art methods.
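To make the fine-tuning objective concrete, the sketch below shows one way the adaptive weighting and its meta-learning update could look in PyTorch (assuming version 2.x for torch.func.functional_call). The encoder, the two toy auxiliary losses, the weighting network architecture, and the learning rates are illustrative placeholders chosen for this example, not the paper's implementation: the total loss is the target-task loss plus a weighted sum of auxiliary losses, and the weighting model is updated so that a virtual gradient step on the combined loss reduces the target loss on a held-out batch.

```python
import torch
import torch.nn as nn
from torch.func import functional_call  # requires PyTorch >= 2.0

torch.manual_seed(0)

encoder = nn.Linear(8, 8)      # stand-in for a GNN encoder
target_head = nn.Linear(8, 2)  # target-task classifier head
num_aux = 2                    # number of auxiliary self-supervised tasks

# Small weighting model: maps auxiliary loss values to non-negative task weights.
weighting_model = nn.Sequential(
    nn.Linear(num_aux, 16), nn.ReLU(),
    nn.Linear(16, num_aux), nn.Softplus(),
)

enc_opt = torch.optim.SGD(
    list(encoder.parameters()) + list(target_head.parameters()), lr=1e-2)
meta_opt = torch.optim.Adam(weighting_model.parameters(), lr=1e-3)

def target_loss(enc_params, x, y):
    h = functional_call(encoder, enc_params, (x,))
    return nn.functional.cross_entropy(target_head(h), y)

def aux_losses(enc_params, x):
    h = functional_call(encoder, enc_params, (x,))
    # Two toy self-supervised objectives standing in for tasks such as
    # attribute masking or edge prediction.
    recon = nn.functional.mse_loss(h, x)
    stat = nn.functional.mse_loss(h.mean(dim=1), x.mean(dim=1))
    return torch.stack([recon, stat])

x_train, y_train = torch.randn(32, 8), torch.randint(0, 2, (32,))
x_val, y_val = torch.randn(32, 8), torch.randint(0, 2, (32,))
params = dict(encoder.named_parameters())

# Meta step: compute weights, take a virtual SGD step on the combined loss,
# then update the weighting model so the virtually updated encoder does
# better on the held-out target-task batch.
aux = aux_losses(params, x_train)
w = weighting_model(aux.detach())
combined = target_loss(params, x_train, y_train) + (w * aux).sum()
grads = torch.autograd.grad(combined, list(params.values()), create_graph=True)
virtual = {k: p - 1e-2 * g for (k, p), g in zip(params.items(), grads)}
val_loss = target_loss(virtual, x_val, y_val)
meta_opt.zero_grad()
val_loss.backward()
meta_opt.step()

# Ordinary fine-tuning step with the updated weights held fixed.
aux = aux_losses(params, x_train)
w = weighting_model(aux.detach()).detach()
loss = target_loss(params, x_train, y_train) + (w * aux).sum()
enc_opt.zero_grad()
loss.backward()
enc_opt.step()
```

The key design choice in this sketch is the virtual update built with create_graph=True: it makes the validation target loss differentiable with respect to the task weights, which is the standard bi-level trick used when a loss-weighting model is trained by meta-learning.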