Graphs can model complex relationships between objects, enabling a myriad of Web applications such as online page/article classification and social recommendation. While graph neural networks (GNNs) have emerged as a powerful tool for graph representation learning, their performance in an end-to-end supervised setting relies heavily on a large amount of task-specific supervision. To reduce labeling requirements, the "pre-train, fine-tune" and "pre-train, prompt" paradigms have become increasingly common. In particular, prompting is a popular alternative to fine-tuning in natural language processing, designed to narrow the gap between pre-training and downstream objectives in a task-specific manner. However, existing studies of prompting on graphs remain limited, lacking a universal treatment that appeals to different downstream tasks. In this paper, we propose GraphPrompt, a novel pre-training and prompting framework on graphs. GraphPrompt not only unifies pre-training and downstream tasks into a common task template, but also employs a learnable prompt to help a downstream task locate the most relevant knowledge from the pre-trained model in a task-specific manner. Finally, we conduct extensive experiments on five public datasets to evaluate and analyze GraphPrompt.
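One way to picture the learnable prompt described above is as a task-specific vector that re-weights node embeddings before they are aggregated into a (sub)graph representation. The sketch below illustrates this idea only; the function name, the toy embeddings, and the prompt values are hypothetical and not taken from the paper.

```python
import numpy as np

def prompted_readout(node_embeddings, prompt):
    """Aggregate node embeddings into a (sub)graph embedding,
    re-weighting each embedding dimension by a prompt vector.

    node_embeddings: (num_nodes, dim) array from a pre-trained GNN
    prompt: (dim,) array, learned per downstream task
    """
    # Element-wise re-weighting lets each downstream task emphasize
    # the embedding dimensions most relevant to it, while the
    # pre-trained GNN parameters stay frozen.
    return (node_embeddings * prompt).sum(axis=0)

# Toy example: 3 nodes with 4-dimensional embeddings.
h = np.array([[1.0, 0.0, 2.0, 1.0],
              [0.0, 1.0, 1.0, 1.0],
              [1.0, 1.0, 0.0, 1.0]])
prompt = np.array([0.5, 0.5, 1.0, 0.0])  # hypothetical learned values
g = prompted_readout(h, prompt)
# g == [1.0, 1.0, 3.0, 0.0]
```

Because only the small prompt vector is tuned for each task, far fewer labeled examples are needed than when fine-tuning the full model.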