In recent years, prompt tuning has sparked a surge of research on adapting pre-trained models. In this paper, we propose Graph Prompt as an efficient and effective alternative to full fine-tuning for adapting pre-trained GNN models to downstream tasks. To the best of our knowledge, we are the first to explore the effectiveness of prompt tuning on existing pre-trained GNN models. Specifically, without tuning any parameters of the pre-trained GNN model, we train a task-specific graph prompt that applies graph-level transformations to the downstream graphs during the adaptation stage. We then introduce a concrete implementation of the graph prompt, called GP-Feature (GPF), which adds learnable perturbations to the feature space of the downstream graph. GPF is highly expressive: it can implicitly modify both the node features and the graph structure. Accordingly, we demonstrate that GPF can approximately achieve the effect of any graph-level transformation under most existing pre-trained GNN models. We validate the effectiveness of GPF on numerous pre-trained GNN models, and the experimental results show that with a small number of tunable parameters (about 0.1% of those used in fine-tuning), GPF achieves performance comparable to fine-tuning, and even obtains significant performance gains in some cases.
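To make the mechanism concrete, below is a minimal sketch of the GPF idea in PyTorch: a learnable perturbation vector is added to every node's features, and only that vector plus a small task head are trained while the pre-trained GNN stays frozen. The `PretrainedGNN` class here is an illustrative stand-in (a single mean-aggregation layer with mean-pooling readout), and names such as `GPF`, `feat_dim`, and `head` are assumptions for this sketch, not the paper's actual API.

```python
import torch
import torch.nn as nn

class PretrainedGNN(nn.Module):
    """Stand-in for a frozen pre-trained GNN encoder (not the paper's model)."""
    def __init__(self, feat_dim: int, hid_dim: int):
        super().__init__()
        self.lin = nn.Linear(feat_dim, hid_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Mean-aggregate neighbor features, project, then mean-pool
        # over nodes to get a graph-level embedding.
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1)
        h = torch.relu(self.lin(adj @ x / deg))
        return h.mean(dim=0)

class GPF(nn.Module):
    """GP-Feature: a learnable perturbation added to every node feature."""
    def __init__(self, feat_dim: int):
        super().__init__()
        self.p = nn.Parameter(torch.zeros(feat_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.p  # broadcasts the same perturbation over all nodes

feat_dim, hid_dim, num_classes = 16, 32, 2
gnn = PretrainedGNN(feat_dim, hid_dim)
for param in gnn.parameters():          # freeze the pre-trained model
    param.requires_grad_(False)

prompt = GPF(feat_dim)
head = nn.Linear(hid_dim, num_classes)  # task-specific classifier head
optimizer = torch.optim.Adam(
    list(prompt.parameters()) + list(head.parameters()), lr=1e-3
)

# One training step on a toy graph (5 nodes, random features and edges).
x = torch.randn(5, feat_dim)
adj = (torch.rand(5, 5) > 0.5).float()
label = torch.tensor([1])

logits = head(gnn(prompt(x), adj)).unsqueeze(0)
loss = nn.functional.cross_entropy(logits, label)
loss.backward()
optimizer.step()
```

Note that the tunable parameters are only the `feat_dim`-sized prompt vector and the classifier head, which is consistent with the abstract's claim that the prompt needs roughly 0.1% of the parameters tuned in full fine-tuning.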