Distilling high-accuracy Graph Neural Networks~(GNNs) into low-latency multilayer perceptrons~(MLPs) on graph tasks has become an active research topic. However, MLPs rely exclusively on node features and fail to capture graph structural information. Previous methods address this issue by processing graph edges into extra inputs for MLPs, but such graph structures may be unavailable in many scenarios. To this end, we propose a Prototype-Guided Knowledge Distillation~(PGKD) method, which requires no graph edges~(edge-free) yet learns structure-aware MLPs. Specifically, we analyze the graph structural information captured by GNN teachers, and distill such information from GNNs to MLPs via prototypes in an edge-free setting. Experimental results on popular graph benchmarks demonstrate the effectiveness and robustness of the proposed PGKD.
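The core idea above, distilling structural knowledge through class prototypes rather than explicit edges, can be illustrated with a minimal sketch. This is one plausible reading, not the paper's exact method: we assume a prototype is the class-wise mean of teacher embeddings, and the student MLP is regularized to align its embeddings with the prototype of their class. All names (`compute_prototypes`, `prototype_alignment_loss`) are illustrative.

```python
# Hypothetical sketch of prototype-guided distillation in an edge-free
# setting: no graph edges are used, only embeddings and labels.
from collections import defaultdict
import math


def compute_prototypes(teacher_embs, labels):
    """Class prototype = mean of the GNN teacher's embeddings per class.

    The teacher's embeddings already encode graph structure, so the
    prototypes summarize that structure without needing edges.
    """
    sums, counts = {}, defaultdict(int)
    for emb, y in zip(teacher_embs, labels):
        if y not in sums:
            sums[y] = list(emb)
        else:
            sums[y] = [a + b for a, b in zip(sums[y], emb)]
        counts[y] += 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}


def _cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)


def prototype_alignment_loss(student_embs, labels, prototypes):
    """Penalize student (MLP) embeddings that drift from their class
    prototype; added to the usual task and distillation losses."""
    return sum(1.0 - _cosine(e, prototypes[y])
               for e, y in zip(student_embs, labels)) / len(labels)
```

In practice this term would be combined with a standard cross-entropy loss on labels and a soft-label distillation loss from the teacher; the sketch isolates only the prototype component.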