Distilling high-accuracy Graph Neural Networks~(GNNs) into low-latency multilayer perceptrons~(MLPs) on graph tasks has become an active research topic. However, MLPs rely exclusively on node features and fail to capture graph structural information. Previous methods address this issue by processing graph edges into extra inputs for MLPs, but such graph structures may be unavailable in many scenarios. To this end, we propose a Prototype-Guided Knowledge Distillation~(PGKD) method, which requires no graph edges~(edge-free) yet learns structure-aware MLPs. Specifically, we analyze the graph structural information captured by GNN teachers and distill it from GNNs to MLPs via prototypes in an edge-free setting. Experimental results on popular graph benchmarks demonstrate the effectiveness and robustness of the proposed PGKD.
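To make the edge-free idea concrete, the following is a minimal sketch of one plausible reading of prototype-guided distillation: class prototypes are taken to be the mean teacher embedding per class, and the student MLP is pulled toward its class prototype without touching any edges. The function names and the exact loss form are illustrative assumptions, not the paper's definitive formulation.

```python
import numpy as np

def class_prototypes(teacher_emb, labels, num_classes):
    # Assumed construction: a class prototype is the mean teacher
    # embedding over all nodes of that class (no edges needed).
    return np.stack([teacher_emb[labels == c].mean(axis=0)
                     for c in range(num_classes)])

def prototype_alignment_loss(student_emb, labels, prototypes):
    # Edge-free structural signal: pull each student node embedding
    # toward its class prototype (mean squared distance).
    diffs = student_emb - prototypes[labels]
    return float((diffs ** 2).mean())
```

Because prototypes are computed once from the teacher's embeddings, the student MLP needs only node features at both training and inference time, which is what makes the setting edge-free.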