Recent studies show that graph convolutional networks (GCNs) often perform worse for low-degree nodes, exhibiting so-called structural unfairness on graphs with the long-tailed degree distributions prevalent in the real world. Graph contrastive learning (GCL), which marries the power of GCNs and contrastive learning, has emerged as a promising self-supervised approach for learning node representations. How does GCL behave in terms of structural fairness? Surprisingly, we find that representations obtained by GCL methods are already fairer with respect to degree bias than those learned by GCN. We theoretically show that this fairness stems from the intra-community concentration and inter-community scatter properties of GCL, which yield a much clearer community structure that drives low-degree nodes away from the community boundary. Based on our theoretical analysis, we further devise a novel graph augmentation method, called GRAph contrastive learning for DEgree bias (GRADE), which applies different strategies to low- and high-degree nodes. Extensive experiments on various benchmarks and evaluation protocols validate the effectiveness of the proposed method.