Recent work explores learning graph representations in a self-supervised manner. In graph contrastive learning, benchmark methods apply various graph augmentation approaches. However, most of these augmentations are non-learnable and can therefore produce unbeneficial augmented graphs, degrading the representation ability of graph contrastive learning methods. Motivated by this, we generate augmented graphs with a learnable graph augmenter, called MEta Graph Augmentation (MEGA). We then argue that a "good" graph augmentation must exhibit uniformity at the instance level and informativeness at the feature level. To this end, we propose a novel approach to learning a graph augmenter that generates augmentations with both properties. The augmenter's objective is to drive the feature extraction network toward more discriminative feature representations, which motivates a meta-learning paradigm. Empirically, experiments across multiple benchmark datasets demonstrate that MEGA outperforms state-of-the-art methods on graph self-supervised learning tasks. Further experimental studies verify the effectiveness of MEGA's individual terms.
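The bi-level structure described above (a learnable augmenter trained so that the encoder it feeds improves) can be illustrated with a toy sketch. This is not the authors' implementation: the "augmenter" is reduced to a single learnable noise-scale parameter, the encoder is a linear map, gradients are taken by finite differences in place of autograd, and the contrastive objective is an alignment-plus-uniformity proxy for InfoNCE; all names and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy node features (8 nodes, 4 dims); graph structure is omitted for brevity.
X = rng.normal(size=(8, 4))

def normalize(Z):
    return Z / (np.linalg.norm(Z, axis=1, keepdims=True) + 1e-8)

def contrastive_loss(W, X, eps, noise_a, noise_b):
    """Alignment + uniformity proxy for an InfoNCE-style contrastive loss."""
    Za = normalize((X + eps * noise_a) @ W)  # embedding of augmented view 1
    Zb = normalize((X + eps * noise_b) @ W)  # embedding of augmented view 2
    align = np.mean(np.sum((Za - Zb) ** 2, axis=1))  # pull positive pairs together
    # Instance-level uniformity: spread embeddings apart on the hypersphere.
    uniform = np.log(np.mean(np.exp(-2 * ((Za[:, None] - Za[None]) ** 2).sum(-1))))
    return align + uniform

def num_grad(f, P, h=1e-4):
    """Finite-difference gradient; stands in for autograd in this sketch."""
    G = np.zeros_like(P)
    for idx in np.ndindex(P.shape):
        Pp, Pm = P.copy(), P.copy()
        Pp[idx] += h
        Pm[idx] -= h
        G[idx] = (f(Pp) - f(Pm)) / (2 * h)
    return G

W = rng.normal(size=(4, 2)) * 0.5   # toy linear encoder
eps = 0.3                           # learnable augmentation strength (the "augmenter")
lr_inner, lr_meta = 0.1, 0.05

for step in range(20):
    na, nb = rng.normal(size=X.shape), rng.normal(size=X.shape)
    # Inner step: update the encoder under the current augmentation.
    gW = num_grad(lambda P: contrastive_loss(P, X, eps, na, nb), W)
    W = W - lr_inner * gW
    # Meta step: update the augmenter so the *updated* encoder does better,
    # mirroring the bi-level (meta-learning) objective described in the text.
    meta = lambda e: contrastive_loss(W, X, e, na, nb)
    g_eps = (meta(eps + 1e-4) - meta(eps - 1e-4)) / 2e-4
    eps = float(np.clip(eps - lr_meta * g_eps, 0.01, 1.0))
```

The meta step adjusts the augmentation only through its effect on the encoder's post-update loss, which is the key difference from fixed, non-learnable augmentations.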