We study self-supervised learning on graphs using contrastive methods. Prior methods generally optimize two-view representations of input graphs. In many studies, a single graph-level representation is computed as one of the contrastive terms, capturing only limited characteristics of graphs. We argue that contrasting graphs in multiple subspaces enables graph encoders to capture richer characteristics. To this end, we propose a group contrastive learning framework. Our framework embeds a given graph into multiple subspaces, each of which is prompted to encode specific characteristics of the graph. To learn diverse and informative representations, we develop principled objectives that capture the relations among group representations both within and across subspaces. Under the proposed framework, we further develop an attention-based representor function that computes representations capturing different substructures of a given graph. Building on our framework, we extend two existing methods into GroupCL and GroupIG, each equipped with the proposed objective. Comprehensive experimental results show that our framework achieves a promising performance boost on a variety of datasets. In addition, our qualitative results show that features generated by our representor successfully capture various specific characteristics of graphs.
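To make the idea concrete, below is a minimal sketch of group contrastive learning, assuming a simple formulation: an attention-based representor pools node features into one embedding per group (subspace), and an NT-Xent-style loss treats matching groups across two augmented views as positives and non-matching groups as negatives. All function names, the per-group attention parameters `W`, and the temperature `tau` are hypothetical illustrations, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def group_representations(X, W):
    """Attention-based representor: one pooled embedding per group.

    X: (n_nodes, d) node features; W: (k, d) per-group attention
    queries (hypothetical parameters). Returns (k, d) group embeddings,
    each attending to a different weighting of the nodes."""
    scores = softmax(X @ W.T, axis=0)   # (n, k) attention over nodes, per group
    return scores.T @ X                 # (k, d) group embeddings

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

def group_contrastive_loss(G1, G2, tau=0.5):
    """Intra-space term: the same group across the two views is a positive
    pair; inter-space term: other groups serve as negatives, encouraging
    the subspaces to stay diverse."""
    k = G1.shape[0]
    sim = np.array([[cosine(G1[i], G2[j]) for j in range(k)]
                    for i in range(k)]) / tau
    log_p = sim.diagonal() - np.log(np.exp(sim).sum(axis=1))
    return -log_p.mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))   # toy graph: 6 nodes, 4-dim features
W = rng.normal(size=(3, 4))   # 3 groups / subspaces
G1 = group_representations(X, W)
# Second view: a lightly perturbed copy stands in for a graph augmentation.
G2 = group_representations(X + 0.05 * rng.normal(size=X.shape), W)
loss = group_contrastive_loss(G1, G2)
print(G1.shape, float(loss))
```

In a real encoder the node features `X` would come from a GNN and `W` would be learned jointly with it; this toy version only illustrates how grouped pooling and the within/across-subspace contrast fit together.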