Augmented graphs play a vital role in regularizing Graph Neural Networks (GNNs), which learn by exchanging information along graph edges in the form of message passing. Due to their effectiveness, simple edge and node manipulations (e.g., addition and deletion) have been widely used for graph augmentation. Nevertheless, such common augmentation techniques can dramatically change the semantics of the original graph, causing over-aggressive augmentation and thus under-fitting during GNN training. To address this problem arising from dropping or adding graph edges and nodes, we propose SoftEdge, which assigns random weights to a portion of the edges of a given graph for augmentation. The synthetic graph generated by SoftEdge retains the same nodes and connectivity as the original graph, thus mitigating semantic changes to the original graph. We empirically show that this simple method obtains superior accuracy to popular node and edge manipulation approaches, along with notable resilience to the accuracy degradation that accompanies increasing GNN depth.
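The core augmentation step described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function name `soft_edge_weights`, the use of a uniform distribution for the random weights, and the per-edge Bernoulli selection are all assumptions made here for concreteness.

```python
import random


def soft_edge_weights(num_edges, p=0.5, rng=None):
    """Sketch of SoftEdge-style augmentation (illustrative, not the paper's code).

    Each edge is independently selected with probability ``p``; selected edges
    receive a random weight drawn uniformly from [0, 1] (assumed distribution),
    while the rest keep weight 1.0. No edge is added or removed, so the graph's
    node set and connectivity are unchanged.
    """
    rng = rng or random.Random()
    return [rng.uniform(0.0, 1.0) if rng.random() < p else 1.0
            for _ in range(num_edges)]
```

In practice, the returned weights would be attached to the graph's edges (e.g., as `edge_weight` in a message-passing layer that supports weighted aggregation), producing a softly perturbed view of the same graph at each training epoch.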