Graph Neural Networks (GNNs) are widely used for graph representation learning. Despite their prevalence, GNNs suffer from two drawbacks in the graph classification task: the neglect of graph-level relationships and the generalization issue. Each graph is treated separately in GNN message passing and graph pooling, and existing methods that address overfitting operate on each graph individually. This makes the learned graph representations less effective in downstream classification. In this paper, we propose a Class-Aware Representation rEfinement (CARE) framework for the task of graph classification. CARE computes simple yet powerful class representations and injects them to steer the learning of graph representations towards better class separability. CARE is a plug-and-play framework that is highly flexible and able to incorporate arbitrary GNN backbones without significantly increasing the computational cost. We also theoretically prove that CARE has a better generalization upper bound than its GNN backbone through Vapnik-Chervonenkis (VC) dimension analysis. Our extensive experiments with 10 well-known GNN backbones on 9 benchmark datasets validate the superiority and effectiveness of CARE over its GNN counterparts.
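The abstract does not spell out how class representations are computed or injected; the following is a minimal sketch under two assumptions not stated in the source: that a class representation is the mean of the graph-level embeddings belonging to that class, and that "injection" is a convex combination of each graph embedding with its class representation. The function name `refine_with_class_representations` and the mixing weight `alpha` are hypothetical illustration choices, not the authors' implementation.

```python
import torch

def refine_with_class_representations(graph_reprs, labels, num_classes, alpha=0.5):
    """Hypothetical sketch of class-aware refinement.

    graph_reprs: (N, d) graph-level embeddings from any GNN backbone.
    labels:      (N,)   class labels, available for training graphs.
    Returns (N, d) embeddings pulled toward their class representation,
    which nudges graphs of the same class closer together (better separability).
    """
    d = graph_reprs.size(1)
    class_reprs = torch.zeros(num_classes, d, device=graph_reprs.device)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            # Assumed form: class representation = mean embedding of its graphs.
            class_reprs[c] = graph_reprs[mask].mean(dim=0)
    # Assumed injection: convex combination of graph and class embedding.
    refined = (1 - alpha) * graph_reprs + alpha * class_reprs[labels]
    return refined

# Usage: refine backbone outputs before the classification head.
embeddings = torch.randn(32, 64)              # e.g., a batch of 32 graphs, d = 64
labels = torch.randint(0, 3, (32,))           # 3 classes
refined = refine_with_class_representations(embeddings, labels, num_classes=3)
```

Because the refinement operates only on the (N, d) batch of graph embeddings, it can wrap any GNN backbone without modifying its message passing or pooling, consistent with the plug-and-play claim.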