Graph neural networks (GNNs) have demonstrated excellent performance in a wide range of applications. However, the enormous size of large-scale graphs hinders their use in real-time inference scenarios. Although existing scalable GNNs leverage linear propagation to preprocess the features and accelerate both training and inference, these methods still suffer from scalability issues when making inferences on unseen nodes, as the feature preprocessing requires the graph to be known and fixed. To speed up inference in the inductive setting, we propose a novel adaptive propagation order approach that generates a personalized propagation order for each node based on its topological information. This successfully avoids redundant computation during feature propagation. Moreover, the trade-off between accuracy and inference latency can be flexibly controlled through simple hyper-parameters to match the latency constraints of different application scenarios. To compensate for the potential loss in inference accuracy, we further propose Inception Distillation, which exploits multi-scale receptive field information to improve inference performance. Extensive experiments are conducted on four public datasets with different scales and characteristics, and the results show that our proposed inference acceleration framework outperforms state-of-the-art graph inference acceleration baselines in terms of both accuracy and efficiency. In particular, the advantage of our method is more pronounced on larger-scale datasets, and our framework achieves a $75\times$ inference speedup on the largest Ogbn-products dataset.
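The core idea of a per-node adaptive propagation order can be illustrated with a minimal sketch. The snippet below is not the paper's implementation; it assumes a simple mean-aggregation propagation matrix and a purely illustrative rule (derived from node degree) for choosing each node's personalized order. The function name `adaptive_propagate` and the degree-based rule are hypothetical.

```python
import numpy as np

def adaptive_propagate(adj, feats, max_order=3):
    """Propagate features, but freeze each node once it reaches its
    personalized propagation order (chosen here from its degree,
    an illustrative topological proxy, not the paper's exact rule)."""
    deg = adj.sum(axis=1)
    # Row-normalized propagation matrix (simple mean aggregation).
    P = adj / np.maximum(deg, 1)[:, None]
    # Hypothetical rule: well-connected nodes stop after fewer hops.
    orders = np.clip(max_order - np.log2(np.maximum(deg, 1)).astype(int),
                     1, max_order)
    out = feats.copy()
    cur = feats
    for k in range(1, max_order + 1):
        cur = P @ cur
        active = orders >= k        # nodes still propagating at hop k
        out[active] = cur[active]   # only they take the deeper features
    return out

# Toy 4-node undirected graph with one-hot features.
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 1],
                [1, 1, 0, 0],
                [0, 1, 0, 0]], dtype=float)
feats = np.eye(4)
smoothed = adaptive_propagate(adj, feats)
print(smoothed.shape)  # (4, 4)
```

Because low-order nodes drop out of the loop early, the redundant deeper matrix-vector products for those nodes are skipped, which is the source of the inference speedup the abstract describes.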