Although Graph Neural Networks (GNNs) have achieved remarkable accuracy, whether their results are trustworthy remains largely unexplored. Previous studies suggest that many modern neural networks are over-confident in their predictions; surprisingly, however, we discover that GNNs exhibit the opposite behavior, i.e., GNNs are under-confident. Confidence calibration for GNNs is therefore highly desirable. In this paper, we propose a novel trustworthy GNN model by designing a topology-aware post-hoc calibration function. Specifically, we first verify that the confidence distribution in a graph has the homophily property, and this finding inspires us to design a calibration GNN model (CaGCN) to learn the calibration function. CaGCN learns a unique transformation from the logits of a GNN to the calibrated confidence for each node; meanwhile, this transformation preserves the order between classes, satisfying the accuracy-preserving property. Moreover, we apply CaGCN to a self-training framework, showing that more trustworthy pseudo labels can be obtained with the calibrated confidence, which further improves performance. Extensive experiments demonstrate the effectiveness of our proposed model in terms of both calibration and accuracy.
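The accuracy-preserving property can be illustrated with a minimal sketch, not the paper's actual CaGCN implementation: if each node's logits are divided by its own positive temperature before the softmax, the class ranking for that node is unchanged, so predictions (and hence accuracy) are preserved while confidence values shift. The function name and example values below are hypothetical.

```python
import numpy as np

def node_wise_temperature_scale(logits, temperatures):
    """Sketch of per-node temperature scaling (assumption, not CaGCN itself).

    Dividing all logits of a node by the same positive scalar is a
    strictly monotone transformation, so the argmax class is preserved
    (the accuracy-preserving property), while the softmax confidence
    becomes sharper (t < 1) or flatter (t > 1).
    """
    t = np.asarray(temperatures, dtype=float).reshape(-1, 1)  # one t per node
    scaled = logits / t
    # numerically stable softmax per node
    e = np.exp(scaled - scaled.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# two nodes, three classes (illustrative values)
logits = np.array([[2.0, 1.0, 0.5],
                   [0.2, 0.1, 3.0]])
probs = node_wise_temperature_scale(logits, [2.0, 0.5])

# the predicted class of every node is unchanged by the scaling
assert (probs.argmax(axis=1) == logits.argmax(axis=1)).all()
```

In CaGCN the per-node temperatures are themselves produced by a GNN over the graph topology, which is what makes the calibration topology-aware; the sketch above only demonstrates why such a scaling cannot change accuracy.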