Self-supervised learning aims to eliminate the need for expensive annotation in graph representation learning, where graph contrastive learning (GCL) is trained with self-supervision signals consisting of data-data pairs. These pairs are generated by augmentations that apply stochastic functions to the original graph. We argue that, depending on the downstream task, some features can be more critical than others, and that applying a stochastic function uniformly damages these influential features, leading to diminished accuracy. To address this issue, we introduce Feature-Based Adaptive Augmentation (FebAA), which identifies and preserves potentially influential features and corrupts the remaining ones. We implement FebAA as a plug-and-play layer and use it with two state-of-the-art methods, Deep Graph Contrastive Learning (GRACE) and Bootstrapped Graph Latents (BGRL). FebAA successfully improves the accuracy of GRACE and BGRL on eight graph representation learning benchmark datasets.
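To make the core idea concrete, the sketch below is our own minimal illustration (not the authors' released FebAA layer): it scores feature columns by a hypothetical importance measure (mean absolute magnitude), always preserves the top-scoring fraction, and applies the usual stochastic masking only to the remaining columns.

```python
import torch

def feature_adaptive_mask(x, keep_ratio=0.3, drop_prob=0.3):
    """Illustrative sketch of feature-based adaptive augmentation.

    x: node feature matrix of shape [num_nodes, num_features].
    keep_ratio: fraction of feature columns treated as influential
        and never masked (hypothetical parameter).
    drop_prob: masking probability for the remaining columns.
    """
    num_feats = x.size(1)
    # Hypothetical importance score: mean |value| of each feature column.
    importance = x.abs().mean(dim=0)
    k = max(1, int(keep_ratio * num_feats))
    keep_idx = importance.topk(k).indices

    # Stochastic Bernoulli mask over all feature columns ...
    drop_mask = torch.rand(num_feats) < drop_prob
    # ... but influential columns are exempt from corruption.
    drop_mask[keep_idx] = False

    x_aug = x.clone()
    x_aug[:, drop_mask] = 0.0  # corrupt only non-influential features
    return x_aug
```

In a GRACE- or BGRL-style pipeline, such a function would replace the uniform feature-masking step used to produce each augmented view, leaving the graph-topology augmentations unchanged.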