Graph Neural Networks (GNNs) have shown satisfactory performance on various graph learning tasks. To achieve stronger fitting capability, most GNNs contain a large number of parameters, which makes them computationally expensive. Consequently, it is difficult to deploy them on edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution for compressing GNNs, in which a lightweight model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model). Nevertheless, most existing GNN-based KD methods lack fairness considerations. As a consequence, the student model usually inherits and even exaggerates the bias of the teacher GNN. To address this problem, we take initial steps toward fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. We then propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher and student model structures, and thus can be easily adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets, which corroborate that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
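For context on the teacher-student setup referenced above, the following is a minimal sketch of a standard distillation objective in the Hinton-style formulation; it is given only as a generic illustration and is not RELIANT's specific loss, whose fairness terms are introduced later in the paper. The symbols $z_s$, $z_t$, $T$, and $\alpha$ are assumptions of this sketch.

$$\mathcal{L}_{\text{KD}} = (1-\alpha)\,\mathcal{L}_{\text{CE}}\big(y,\ \sigma(z_s)\big) + \alpha\, T^{2}\,\mathrm{KL}\Big(\sigma\big(z_t/T\big)\,\Big\|\,\sigma\big(z_s/T\big)\Big),$$

where $z_s$ and $z_t$ denote the student and teacher logits for a node, $\sigma$ is the softmax, $T$ is a temperature that softens the teacher's output distribution, and $\alpha$ balances the supervised cross-entropy term against the imitation term. Because the student is trained to match the teacher's soft predictions, any bias encoded in those predictions is transferred to (and can be amplified in) the student, which motivates the fairness-aware distillation studied in this work.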