Contrastive learning has become a key component of self-supervised learning approaches for graph-structured data. Despite their success, existing graph contrastive learning methods are incapable of quantifying uncertainty for node representations or their downstream tasks, limiting their application in high-stakes domains. In this paper, we propose a novel Bayesian perspective on graph contrastive learning methods, showing that random augmentations lead to stochastic encoders. As a result, our proposed method represents each node by a distribution in the latent space, in contrast to existing techniques which embed each node as a deterministic vector. By learning distributional representations, we provide uncertainty estimates in downstream graph analytics tasks and increase the expressive power of the predictive model. In addition, we propose a Bayesian framework to infer the probability of perturbations in each view of the contrastive model, eliminating the need for a computationally expensive hyperparameter search. We empirically show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
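To make the central idea concrete, the sketch below illustrates a stochastic encoder that maps each node to a Gaussian distribution in the latent space instead of a deterministic vector, with the learned variance serving as a per-node uncertainty estimate. This is a minimal illustration under assumed choices, not the paper's implementation: the class name StochasticNodeEncoder, the single GCN-style propagation step, and the Gaussian parameterization with the reparameterization trick are all hypothetical.

```python
# Minimal sketch: a stochastic node encoder producing a distribution
# per node. All names and architecture choices here are illustrative
# assumptions, not the method proposed in the paper.
import torch
import torch.nn as nn

class StochasticNodeEncoder(nn.Module):
    def __init__(self, in_dim: int, latent_dim: int):
        super().__init__()
        self.mu_head = nn.Linear(in_dim, latent_dim)      # per-node mean
        self.logvar_head = nn.Linear(in_dim, latent_dim)  # per-node log-variance

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor):
        # One GCN-style propagation step over a pre-normalized adjacency matrix.
        h = adj_norm @ x
        mu = self.mu_head(h)
        logvar = self.logvar_head(h)
        # Reparameterization trick: draw z ~ N(mu, sigma^2) differentiably,
        # so the contrastive objective can be backpropagated through the sample.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return z, mu, logvar  # exp(logvar) gives a per-node uncertainty estimate

# Usage: n nodes with d input features, embedded into a k-dimensional latent space.
n, d, k = 5, 16, 8
x = torch.randn(n, d)
adj_norm = torch.eye(n)  # placeholder; use D^{-1/2}(A+I)D^{-1/2} in practice
encoder = StochasticNodeEncoder(d, k)
z, mu, logvar = encoder(x, adj_norm)
```

Downstream tasks can then either consume a sample z or propagate (mu, logvar), and the spread of samples for a node reflects the uncertainty induced by the random augmentations.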