Knowledge distillation (KD) transfers knowledge from a teacher network to a student by forcing the student to mimic the outputs of the pretrained teacher on the training data. However, those data samples are often inaccessible in practice due to large data sizes, privacy, or confidentiality. Many efforts have addressed this problem for convolutional neural networks (CNNs), whose inputs lie on a grid in a continuous space (e.g., images and videos), but they largely overlook graph neural networks (GNNs), which handle non-grid data with varying topology structures in a discrete space. The inherent differences between these input domains make CNN-based approaches inapplicable to GNNs. In this paper, we propose, to the best of our knowledge, the first dedicated approach to distilling knowledge from a GNN without graph data. The proposed graph-free KD (GFKD) learns graph topology structures for knowledge transfer by modeling them with a multinomial distribution. We then introduce a gradient estimator to optimize this framework. Essentially, the gradients w.r.t. graph structures are obtained using only GNN forward propagation, without back-propagation, which makes GFKD compatible with modern GNN libraries such as DGL and Geometric. Moreover, we provide strategies for handling different types of prior knowledge in the graph data or the GNNs. Extensive experiments demonstrate that GFKD achieves state-of-the-art performance in distilling knowledge from GNNs without training data.
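To illustrate the forward-only gradient idea, here is a minimal sketch that estimates gradients w.r.t. a learned graph structure using a score-function (REINFORCE-style) estimator over independent Bernoulli edges. This is a simplification for illustration only: the paper models structures with a multinomial distribution and its own estimator, and the toy one-layer GCN, the confidence-based objective, and all parameter names below are assumptions, not GFKD's actual implementation. Note that the GNN is only ever run forward; no gradients flow through it.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, C = 6, 4, 3  # nodes, feature dim, classes (toy sizes)

# Stand-in for a pretrained teacher: fixed random weights and node features.
W = rng.normal(size=(D, C))
X = rng.normal(size=(N, D))

def gnn_forward(adj, X, W):
    """One GCN-style propagation step: softmax((A + I) X W). Forward only."""
    h = (adj + np.eye(N)) @ X @ W
    e = np.exp(h - h.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def loss_fn(probs):
    # Illustrative data-free objective: prefer structures on which the
    # teacher is confident (high max class probability).
    return -np.mean(np.max(probs, axis=1))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Edge logits parameterize the edge distribution we optimize.
theta = np.zeros((N, N))
S = 8  # Monte Carlo samples per step

for step in range(200):
    p = sigmoid(theta)
    grad = np.zeros_like(theta)
    for _ in range(S):
        A = (rng.random((N, N)) < p).astype(float)  # sample a graph
        L = loss_fn(gnn_forward(A, X, W))           # forward pass only
        # Score-function estimator: grad log Bernoulli(A; p) = A - p,
        # so no back-propagation through the GNN is required.
        grad += L * (A - p)
    theta -= 0.5 * grad / S

p = sigmoid(theta)  # learned edge probabilities
```

The key property mirrored here is that the estimator needs only loss values from GNN forward passes, so it works with any black-box GNN implementation regardless of library internals.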