Contrastive learning is an effective unsupervised method in graph representation learning, and its key component lies in the construction of positive and negative samples. Previous methods usually take the proximity of nodes in the graph as the guiding principle. Recently, data-augmentation-based contrastive learning has shown great power in the visual domain, and some works have extended this method from images to graphs. However, unlike data augmentation on images, data augmentation on graphs is far less intuitive and much harder to use for producing high-quality contrastive samples, which leaves much room for improvement. In this work, by introducing an adversarial graph view for data augmentation, we propose a simple but effective method, Adversarial Graph Contrastive Learning (ARIEL), to extract informative contrastive samples within reasonable constraints. We develop a new technique called information regularization for stable training and use subgraph sampling for scalability. We generalize our method from node-level contrastive learning to the graph level by treating each graph instance as a supernode. ARIEL consistently outperforms current graph contrastive learning methods on both node-level and graph-level classification tasks on real-world datasets. We further demonstrate that ARIEL is more robust in the face of adversarial attacks.
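The idea of an adversarial view can be illustrated with a minimal sketch. This is not ARIEL itself (which perturbs both edges and features by gradient ascent on the contrastive loss and adds information regularization); it is a toy, assumption-laden illustration: a one-layer GCN-style encoder, an InfoNCE-style loss where node i in one view is the positive for node i in the other, and an FGSM-style feature perturbation whose gradient is approximated by finite differences. All function names and hyperparameters here are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(z):
    # L2-normalize each row (node embedding).
    return z / (np.linalg.norm(z, axis=1, keepdims=True) + 1e-12)

def info_nce(z1, z2, tau=0.5):
    # InfoNCE-style loss: node i in view 1 and node i in view 2 form the
    # positive pair; all other nodes in view 2 act as negatives.
    z1n, z2n = normalize(z1), normalize(z2)
    sim = z1n @ z2n.T / tau
    sim = sim - sim.max(axis=1, keepdims=True)          # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

def encode(X, A, W):
    # One-layer GCN-style encoder: row-normalized propagation, linear map.
    deg = A.sum(axis=1, keepdims=True)                  # self-loops keep deg > 0
    return (A / deg) @ X @ W

def adversarial_view(X, A, W, z_anchor, eps=0.1, delta=1e-4):
    # FGSM-style feature perturbation: step in the sign of the gradient that
    # INCREASES the contrastive loss, bounded by an eps-ball, to get a harder
    # positive view. Gradient is finite-difference here purely for simplicity;
    # a real implementation would backpropagate.
    grad = np.zeros_like(X)
    base = info_nce(z_anchor, encode(X, A, W))
    for idx in np.ndindex(*X.shape):
        Xp = X.copy()
        Xp[idx] += delta
        grad[idx] = (info_nce(z_anchor, encode(Xp, A, W)) - base) / delta
    return X + eps * np.sign(grad)

# Tiny random graph with self-loops (illustrative data only).
n, d, h = 8, 5, 4
X = rng.normal(size=(n, d))
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 1.0)
W = rng.normal(size=(d, h))

z1 = encode(X, A, W)
clean_loss = info_nce(z1, encode(X, A, W))
adv_loss = info_nce(z1, encode(adversarial_view(X, A, W, z1), A, W))
print(f"clean view loss: {clean_loss:.4f}, adversarial view loss: {adv_loss:.4f}")
```

The adversarial view yields a higher contrastive loss than the unperturbed view, i.e. a harder contrastive sample, which is the intuition behind using adversarial perturbation as graph augmentation.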