Generalizable, transferable, and robust representation learning on graph-structured data remains a challenge for current graph neural networks (GNNs). Unlike their counterparts developed for convolutional neural networks (CNNs) on image data, self-supervised learning and pre-training are less explored for GNNs. In this paper, we propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data. We first design four types of graph augmentations to incorporate various priors. We then systematically study the impact of various combinations of graph augmentations on multiple datasets, in four different settings: semi-supervised learning, unsupervised learning, and transfer learning, as well as adversarial attacks. The results show that, even without tuning augmentation extents or using sophisticated GNN architectures, our GraphCL framework can produce graph representations of similar or better generalizability, transferability, and robustness compared to state-of-the-art methods. We also investigate the impact of parameterized graph augmentation extents and patterns, and observe further performance gains in preliminary experiments. Our code is available at https://github.com/Shen-Lab/GraphCL.
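To make the two core ingredients concrete, below is a minimal PyTorch sketch of one of the four augmentation types (node dropping) together with a simplified NT-Xent contrastive loss over two augmented views of a batch of graphs. The function names `drop_nodes` and `nt_xent`, the COO edge-list layout, and the use of only cross-view negatives are illustrative assumptions, not the official implementation in the linked repository; the paper's full loss also symmetrizes the two views.

```python
import torch
import torch.nn.functional as F

def drop_nodes(x, edge_index, drop_ratio=0.2):
    """Node-dropping augmentation: randomly discard a fraction of nodes
    and remove all edges incident to them.
    x: [N, F] node features; edge_index: [2, E] COO edge list."""
    num_nodes = x.size(0)
    keep_mask = torch.rand(num_nodes) >= drop_ratio
    keep_idx = keep_mask.nonzero(as_tuple=True)[0]
    # Build a re-indexing map for the surviving nodes.
    remap = -torch.ones(num_nodes, dtype=torch.long)
    remap[keep_idx] = torch.arange(keep_idx.numel())
    # Keep only edges whose endpoints both survive, then re-index them.
    edge_mask = keep_mask[edge_index[0]] & keep_mask[edge_index[1]]
    new_edge_index = remap[edge_index[:, edge_mask]]
    return x[keep_idx], new_edge_index

def nt_xent(z1, z2, temperature=0.5):
    """Simplified NT-Xent loss: two views of the same graph are positives
    (the diagonal); other graphs in the batch serve as negatives.
    z1, z2: [B, D] graph-level embeddings of the two augmented views."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = torch.mm(z1, z2.t()) / temperature  # [B, B] cosine similarities
    labels = torch.arange(z1.size(0))         # positives on the diagonal
    return F.cross_entropy(sim, labels)
```

In a training step, each graph in a batch would be augmented twice (e.g., with `drop_nodes` and one of the other three augmentations), encoded by a shared GNN into `z1` and `z2`, and optimized with `nt_xent`.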