The robustness of the widely used Graph Convolutional Networks (GCNs) to perturbations of their input is becoming a topic of increasing importance. In this paper, the random GCN is introduced, for which a random matrix theory analysis is possible. This analysis suggests that if the graph is sufficiently perturbed, or in the extreme case random, then the GCN fails to benefit from the node features. It is furthermore observed that enhancing the message passing step in GCNs by adding the node feature kernel to the adjacency matrix of the graph structure solves this problem. An empirical study of a GCN utilised for node classification on six real datasets further confirms the theoretical findings and demonstrates that perturbations of the graph structure can result in GCNs performing significantly worse than Multi-Layer Perceptrons run on the node features alone. In practice, adding a node feature kernel to the message passing of perturbed graphs results in a significant improvement of the GCN's performance, thereby rendering it more robust to graph perturbations. Our code is publicly available at: https://github.com/ChangminWu/RobustGCN.
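The kernel-augmented message passing described above can be sketched as follows. This is a minimal illustration, not the paper's exact implementation: the choice of a non-negative linear kernel, the row normalization, and the mixing weight `alpha` are all illustrative assumptions.

```python
import numpy as np

def gcn_layer(A, X, W):
    # Standard GCN message passing: symmetrically normalized adjacency
    # (with self-loops) applied to the node features, followed by ReLU.
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

def feature_augmented_gcn_layer(A, X, W, alpha=0.5):
    # Augment the adjacency with a node-feature kernel before message
    # passing. A clipped linear kernel X X^T is used here so the
    # augmented matrix stays non-negative; `alpha` is an illustrative
    # mixing weight, not a value prescribed by the paper.
    K = np.maximum(X @ X.T, 0.0)
    K = K / np.maximum(K.sum(axis=1, keepdims=True), 1e-12)  # row-normalize
    return gcn_layer(A + alpha * K, X, W)
```

The intuition is that when edges of `A` are perturbed or random, the kernel term still routes messages between nodes with similar features, so the layer retains a useful signal.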