Contrastive learning has become a key component of self-supervised learning approaches for graph-structured data. Despite their success, however, existing graph contrastive learning methods cannot quantify uncertainty in node representations or in downstream tasks, which limits their application in high-stakes domains. In this paper, we propose a novel Bayesian perspective on graph contrastive learning, showing that random augmentations lead to stochastic encoders. As a result, our method represents each node by a distribution in the latent space, in contrast to existing techniques that embed each node as a deterministic vector. By learning distributional representations, we provide uncertainty estimates in downstream graph analytics tasks and increase the expressive power of the predictive model. In addition, we propose a Bayesian framework to infer the probability of perturbations in each view of the contrastive model, eliminating the need for a computationally expensive hyperparameter search. We empirically show considerable improvements over existing state-of-the-art methods on several benchmark datasets.
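To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) of a stochastic node encoder that maps each node to a Gaussian in the latent space rather than a point embedding; the layer choices, dimensions, and names are illustrative assumptions, with a plain linear layer plus neighbor averaging standing in for a full GNN.

```python
# Minimal sketch, assuming PyTorch. Each node gets a per-node mean and
# log-variance, and a latent sample is drawn via the reparameterization
# trick; the log-variance provides a per-node uncertainty estimate.
import torch
import torch.nn as nn


class StochasticEncoder(nn.Module):
    def __init__(self, in_dim, hid_dim, lat_dim):
        super().__init__()
        self.shared = nn.Linear(in_dim, hid_dim)   # stand-in for a GNN layer
        self.mu = nn.Linear(hid_dim, lat_dim)      # per-node mean
        self.logvar = nn.Linear(hid_dim, lat_dim)  # per-node log-variance

    def forward(self, x, adj):
        # Illustrative message passing: aggregate neighbor features via the
        # (normalized) adjacency matrix, then a shared nonlinear transform.
        h = torch.relu(self.shared(adj @ x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization: sample z ~ N(mu, diag(exp(logvar))).
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return z, mu, logvar
```

Under this view, each random augmentation yields a different sample z for the same node, so the encoder output is itself a distribution; downstream tasks can propagate the per-node variance to obtain the uncertainty estimates described above.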