Graph Neural Networks (GNNs) rely on the graph structure to define an aggregation strategy in which each node updates its representation by combining information from its neighbours. A known limitation of GNNs is that, as the number of layers increases, information gets over-smoothed and squashed, and node embeddings become indistinguishable, negatively affecting performance. Therefore, practical GNN models employ only a few layers and leverage the graph structure solely in terms of limited, small neighbourhoods around each node. Inevitably, practical GNNs fail to capture information that depends on the global structure of the graph. While several works have studied the limitations and expressivity of GNNs, the question of whether practical applications on graph-structured data require global structural knowledge remains unanswered. In this work, we address this question empirically by giving several GNN models access to global information and observing the impact this has on downstream performance. Our results show that global information can in fact provide significant benefits for common graph-related tasks. We further identify a novel regularization strategy that leads to an average accuracy improvement of more than 5% across all considered tasks.
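To make the locality limitation described above concrete, the following is a minimal sketch of one neighbourhood-aggregation layer, assuming mean aggregation as a representative scheme (the function name `mean_aggregation_layer` and the toy graph are illustrative, not the paper's models). Stacking k such layers only exposes each node to its k-hop neighbourhood, which is why shallow GNNs cannot see global structure.

```python
import numpy as np

def mean_aggregation_layer(adj, x, weight):
    """One illustrative message-passing layer: each node averages its
    neighbours' features (including its own, via self-loops) and applies
    a shared linear transform followed by a ReLU non-linearity."""
    n = adj.shape[0]
    adj_hat = adj + np.eye(n)                 # add self-loops
    deg = adj_hat.sum(axis=1, keepdims=True)  # node degrees (with self-loop)
    h = (adj_hat @ x) / deg                   # mean over the 1-hop neighbourhood
    return np.maximum(h @ weight, 0.0)        # linear transform + ReLU

# Toy usage: a 4-node path graph with 3-dimensional node features.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = np.random.randn(4, 3)
w = np.random.randn(3, 3)
h1 = mean_aggregation_layer(adj, x, w)   # each node sees its 1-hop neighbours
h2 = mean_aggregation_layer(adj, h1, w)  # after 2 layers: 2-hop receptive field
```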