Graph neural networks (GNNs) have achieved state-of-the-art accuracy for graph node classification. However, GNNs are difficult to scale to large graphs, often running out of memory even on moderately sized graphs. Recent works have sought to address this problem using a two-stage approach, which first aggregates data along graph edges, then trains a classifier without using additional graph information. These methods can run on much larger graphs and are orders of magnitude faster than GNNs, but achieve lower classification accuracy. We propose a novel two-stage algorithm based on a simple but effective observation: we should first train a classifier, then aggregate, rather than the other way around. We show that our algorithm is faster and can handle larger graphs than existing two-stage algorithms, while achieving accuracy comparable to or higher than popular GNNs. We also present a theoretical basis for our algorithm's improved accuracy: we construct a synthetic nonlinear dataset on which performing aggregation before classification actually decreases accuracy relative to classification alone, whereas our classify-then-aggregate approach substantially improves accuracy relative to classification alone.
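The classify-then-aggregate idea can be illustrated with a minimal sketch. The abstract does not specify the aggregation operator, so the example below assumes a common choice: row-normalized propagation of class probabilities over the graph's adjacency matrix (with self-loops). The graph, the probability values, and the `aggregate` helper are all hypothetical, chosen only to show the pipeline shape.

```python
import numpy as np

# Hypothetical toy graph: four nodes in a path 0-1-2-3, two classes.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Hypothetical per-node class probabilities produced by a classifier
# trained on node features alone, with no graph information (stage 1).
P = np.array([[0.9, 0.1],
              [0.6, 0.4],
              [0.4, 0.6],
              [0.1, 0.9]])

def aggregate(P, A, hops=2):
    """Stage 2: smooth predictions along edges. Each node repeatedly
    averages its prediction with its neighbors' predictions."""
    M = A + np.eye(A.shape[0])          # add self-loops
    M = M / M.sum(axis=1, keepdims=True)  # row-normalize
    for _ in range(hops):
        P = M @ P
    return P

smoothed = aggregate(P, A)
labels = smoothed.argmax(axis=1)  # final node labels: [0, 0, 1, 1]
```

Because aggregation here is just repeated sparse matrix-vector products over fixed predictions, it requires no backpropagation through the graph, which is one plausible source of the memory and speed advantages the abstract claims over end-to-end GNN training.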