Inspired by the impressive success of contrastive learning (CL), a variety of graph augmentation strategies have been employed to learn node representations in a self-supervised manner. Existing methods construct contrastive samples by adding perturbations to the graph structure or node attributes. Although impressive results are achieved, these methods are largely blind to the wealth of prior information implied by the augmentation process: as the degree of perturbation applied to the original graph increases, 1) the similarity between the original graph and the generated augmented graph gradually decreases; 2) the discrimination among nodes within each augmented view gradually increases. In this paper, we argue that both kinds of prior information can be incorporated (differently) into the contrastive learning paradigm under our general ranking framework. In particular, we first interpret CL as a special case of learning to rank (L2R), which inspires us to exploit the ranking order among positive augmented views. Meanwhile, we introduce a self-ranking paradigm to ensure that the discriminative information among different nodes is preserved and remains less sensitive to perturbations of different degrees. Experimental results on various benchmark datasets verify the effectiveness of our algorithm compared with supervised and unsupervised models.
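The abstract does not specify an objective, but the idea of ranking positive augmented views by perturbation degree admits a simple pairwise formulation. Below is a minimal, hypothetical PyTorch sketch (the function name `view_ranking_loss` and the margin formulation are our assumptions, not the paper's stated method): views are ordered by increasing perturbation, and the anchor is encouraged to be more similar to mildly perturbed views than to heavily perturbed ones.

```python
import torch
import torch.nn.functional as F

def view_ranking_loss(anchor, views, margin=0.1):
    """Hypothetical pairwise ranking loss over augmented views.

    anchor: (N, d) node embeddings from the original graph.
    views:  list of (N, d) embeddings, ordered by increasing
            perturbation degree applied to the graph.
    Enforces sim(anchor, view_i) > sim(anchor, view_{i+1}) + margin.
    """
    sims = [F.cosine_similarity(anchor, v, dim=-1) for v in views]
    loss = anchor.new_zeros(())
    for i in range(len(sims) - 1):
        # Penalize violations of the expected similarity ordering.
        loss = loss + F.relu(margin - (sims[i] - sims[i + 1])).mean()
    return loss

# Usage with random stand-in embeddings:
anchor = torch.randn(32, 64)
views = [torch.randn(32, 64) for _ in range(3)]  # mild -> heavy perturbation
print(view_ranking_loss(anchor, views))
```

A margin-based pairwise loss is only one way to realize the ranking order; listwise L2R objectives would fit the same framework.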