Quantifying the predictive uncertainty of neural networks has recently attracted increasing attention. In this work, we focus on measuring the uncertainty of graph neural networks (GNNs) for the task of node classification. Most existing GNNs model message passing among nodes, and the messages are typically deterministic. Natural questions arise: Is there uncertainty in the messages? How can such uncertainty be propagated over a graph together with the messages? To address these questions, we propose a Bayesian uncertainty propagation (BUP) method, which embeds GNNs in a Bayesian modeling framework and models the predictive uncertainty of node classification with the Bayesian confidence of the predictive probability and the uncertainty of the messages. Our method introduces a novel uncertainty propagation mechanism inspired by Gaussian models. Moreover, we present an uncertainty-oriented loss for node classification that allows GNNs to explicitly integrate predictive uncertainty into the learning procedure; consequently, training examples with large predictive uncertainty are penalized. We evaluate BUP with respect to prediction reliability and out-of-distribution (OOD) prediction. The learned uncertainty is also analyzed in depth: the relation between uncertainty and graph topology, as well as predictive uncertainty in the OOD setting, is investigated through extensive experiments. Empirical results on popular benchmark datasets demonstrate the superior performance of the proposed method.
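The abstract does not give the exact propagation rule or loss, so the following is only a minimal PyTorch sketch of what a Gaussian-inspired mean/variance message-passing layer and an uncertainty-penalized classification loss could look like. The names GaussianMessagePassing, uncertainty_weighted_loss, and the weight lam are hypothetical illustrations, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GaussianMessagePassing(nn.Module):
    """Hypothetical propagation layer that carries a mean and a variance per node.

    Assumption: messages are treated as independent Gaussians, so means are
    propagated with the usual linear transform and variances with squared
    weights, following Var(a * x) = a^2 * Var(x).
    """

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, adj_norm: torch.Tensor, mu: torch.Tensor, var: torch.Tensor):
        # adj_norm: (N, N) normalized adjacency; mu, var: (N, in_dim)
        mu_out = adj_norm @ self.lin(mu)
        # Variances propagate with squared propagation weights (independence assumption);
        # the result is nonnegative by construction.
        var_out = (adj_norm ** 2) @ (var @ (self.lin.weight.t() ** 2))
        return mu_out, var_out


def uncertainty_weighted_loss(logits, var, labels, lam: float = 0.1):
    """Hypothetical uncertainty-oriented loss: cross-entropy plus a penalty
    that grows with the predictive variance of each training node."""
    ce = F.cross_entropy(logits, labels, reduction="none")
    penalty = var.mean(dim=-1)  # scalar uncertainty per node
    return (ce + lam * penalty).mean()
```

Under these assumptions, training nodes whose propagated variance is large contribute a larger loss, which matches the abstract's statement that examples with large predictive uncertainty are penalized.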