Semi-supervised learning on graphs is an important problem in machine learning. In recent years, state-of-the-art classification methods based on graph neural networks (GNNs) have shown their superiority over traditional ones such as label propagation. However, the sophisticated architectures of these neural models lead to a complex prediction mechanism, which cannot make full use of valuable prior knowledge lying in the data, e.g., that structurally correlated nodes tend to have the same class. In this paper, we propose a framework based on knowledge distillation to address the above issues. Our framework extracts the knowledge of an arbitrary learned GNN model (teacher model) and injects it into a well-designed student model. The student model is built with two simple prediction mechanisms, i.e., label propagation and feature transformation, which naturally preserve structure-based and feature-based prior knowledge, respectively. Specifically, we design the student model as a trainable combination of parameterized label propagation and feature transformation modules. As a result, the learned student can benefit from both the prior knowledge and the knowledge in GNN teachers for more effective predictions. Moreover, the learned student model has a more interpretable prediction process than GNNs. We conduct experiments on five public benchmark datasets and employ seven GNN models, including GCN, GAT, APPNP, SAGE, SGC, GCNII and GLP, as the teacher models. Experimental results show that the learned student model can consistently outperform its corresponding teacher model by 1.4%-4.7% on average. Code and data are available at https://github.com/BUPT-GAMMA/CPF
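To make the student design concrete, the following PyTorch code is a minimal sketch, under our own assumptions, of a student that combines a parameterized label propagation (PLP) module with a feature transformation (FT) MLP and is trained to match a GNN teacher's soft predictions. Module names, the hop count, and the sigmoid-gated per-node balance weight are illustrative assumptions, not the authors' released implementation (see the repository above for that).

```python
# Illustrative sketch of a PLP + FT student distilled from a GNN teacher.
# All names and hyperparameters here are assumptions for exposition only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class StudentSketch(nn.Module):
    def __init__(self, num_nodes, feat_dim, num_classes, k_hops=5, hidden=64):
        super().__init__()
        self.k_hops = k_hops
        # FT module: a simple MLP mapping raw node features to class scores.
        self.ft = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, num_classes)
        )
        # Learnable per-node balance between propagated labels and FT output.
        self.balance = nn.Parameter(torch.zeros(num_nodes, 1))
        # Learnable per-node confidence used to weight outgoing messages.
        self.confidence = nn.Parameter(torch.ones(num_nodes, 1))

    def forward(self, adj, feats, soft_labels):
        # adj: dense row-normalized adjacency (n x n);
        # soft_labels: teacher soft predictions (n x c) used as initial labels.
        ft_pred = F.softmax(self.ft(feats), dim=1)
        h = soft_labels
        alpha = torch.sigmoid(self.balance)
        for _ in range(self.k_hops):
            # Scale each source node's contribution by its learned confidence,
            # propagate one hop, and renormalize rows to keep distributions.
            msg = adj * self.confidence.t()
            h = msg @ h
            h = h / h.sum(dim=1, keepdim=True).clamp(min=1e-9)
            # Mix the propagated labels with the feature-based prediction.
            h = alpha * h + (1 - alpha) * ft_pred
        return h


def distill_loss(student_out, teacher_out):
    # Soft cross-entropy between student output and teacher soft labels.
    return -(teacher_out * torch.log(student_out.clamp(min=1e-9))).sum(dim=1).mean()
```

Because the student's prediction at every node is an explicit, per-node weighted mixture of propagated neighbor labels and a feature-based score, its decisions can be inspected directly, which is the interpretability benefit claimed above.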