Recently, there has been increasing interest in the challenge of how to discriminatively vectorize graphs. To address this, we propose Iterative Graph Self-Distillation (IGSD), a method that learns graph-level representations in an unsupervised manner through instance discrimination with a self-supervised contrastive learning approach. IGSD involves a teacher-student distillation process that uses graph diffusion augmentations and constructs the teacher model as an exponential moving average of the student model. The intuition behind IGSD is to predict the teacher network's representations of graph pairs under different augmented views. As a natural extension, we also apply IGSD to semi-supervised scenarios by jointly regularizing the network with both supervised and self-supervised contrastive losses. Finally, we show that fine-tuning IGSD-trained models with self-training can further improve graph representation power. Empirically, we achieve significant and consistent performance gains on various graph datasets in both unsupervised and semi-supervised settings, which validates the superiority of IGSD.
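While the abstract does not spell out the training procedure, a minimal sketch of the teacher-student self-distillation loop it describes might look like the following. The class name `IGSDSketch`, the encoder argument, and the simple cosine-prediction objective are illustrative assumptions, not the authors' implementation; the key mechanisms shown are the exponential-moving-average teacher update and predicting the teacher's representation of a differently augmented view.

```python
import copy
import torch
import torch.nn.functional as F


class IGSDSketch(torch.nn.Module):
    """Illustrative teacher-student self-distillation skeleton (not the paper's code)."""

    def __init__(self, student_encoder: torch.nn.Module, ema_decay: float = 0.99):
        super().__init__()
        self.student = student_encoder
        # The teacher starts as a frozen copy of the student and is only
        # updated via an exponential moving average of the student's weights.
        self.teacher = copy.deepcopy(student_encoder)
        for p in self.teacher.parameters():
            p.requires_grad_(False)
        self.ema_decay = ema_decay

    @torch.no_grad()
    def update_teacher(self) -> None:
        # teacher <- decay * teacher + (1 - decay) * student
        for t, s in zip(self.teacher.parameters(), self.student.parameters()):
            t.data.mul_(self.ema_decay).add_(s.data, alpha=1.0 - self.ema_decay)

    def forward(self, view_a: torch.Tensor, view_b: torch.Tensor) -> torch.Tensor:
        # The student embeds one augmented view of a graph (e.g. a graph
        # diffusion augmentation); the teacher embeds the other view.
        z_student = F.normalize(self.student(view_a), dim=-1)
        with torch.no_grad():
            z_teacher = F.normalize(self.teacher(view_b), dim=-1)
        # Predict the teacher's representation of the paired view; a cosine
        # prediction loss stands in here for the paper's contrastive objective.
        return (2.0 - 2.0 * (z_student * z_teacher).sum(dim=-1)).mean()
```

In a training step one would compute the loss on two augmented views of each graph, backpropagate through the student only, and then call `update_teacher()` so the teacher tracks a smoothed version of the student.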