Graph Neural Networks (GNNs) have demonstrated great potential in a variety of graph-based applications, such as recommender systems, drug discovery, and object recognition. Nevertheless, resource-efficient GNN learning is a rarely explored topic despite its many benefits for edge computing and Internet of Things (IoT) applications. To improve this state of affairs, this work proposes efficient subgraph-level training via resource-aware graph partitioning (SUGAR). SUGAR first partitions the initial graph into a set of disjoint subgraphs and then performs local training at the subgraph level. We provide a theoretical analysis and conduct extensive experiments on five graph benchmarks to verify its efficacy in practice. Our results show that SUGAR achieves up to a 33x runtime speedup and a 3.8x memory reduction on large-scale graphs. We believe SUGAR opens a new research direction toward developing GNN methods that are resource-efficient and hence suitable for IoT deployment.
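To make the partition-then-train-locally idea concrete, the following is a minimal sketch in plain PyTorch. It is not SUGAR's actual method: the abstract does not specify the resource-aware partitioner or the GNN architecture, so the random `partition_nodes` splitter and the one-layer `SimpleGCN` below are illustrative placeholders standing in for those components. The sketch only shows the training structure: each optimization step operates on one disjoint induced subgraph, so peak memory scales with the part size rather than the full graph size.

```python
# Minimal sketch of subgraph-level training on disjoint partitions.
# Assumptions (not from the paper): random partitioning instead of
# SUGAR's resource-aware partitioner, and a toy one-layer GCN.
import torch
import torch.nn as nn
import torch.nn.functional as F

def partition_nodes(num_nodes: int, num_parts: int) -> list:
    """Split node ids into disjoint parts (random here; SUGAR would
    use its resource-aware partitioner instead)."""
    perm = torch.randperm(num_nodes)
    return list(torch.chunk(perm, num_parts))

class SimpleGCN(nn.Module):
    """One-layer GCN with mean aggregation over self-looped neighbors."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, adj: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        adj = adj + torch.eye(adj.size(0))           # add self-loops
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        return self.lin((adj / deg) @ x)             # row-normalized aggregation

# Toy graph: dense random adjacency, node features, node labels.
N, F_IN, C, PARTS = 200, 16, 4, 4
adj = (torch.rand(N, N) < 0.05).float()
adj = ((adj + adj.t()) > 0).float()                  # symmetrize
x, y = torch.randn(N, F_IN), torch.randint(0, C, (N,))

model = SimpleGCN(F_IN, C)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# Subgraph-level training: each step touches only one part's induced
# subgraph (edges crossing partition boundaries are dropped).
for epoch in range(5):
    for part in partition_nodes(N, PARTS):
        sub_adj = adj[part][:, part]                 # induced subgraph
        logits = model(sub_adj, x[part])
        loss = F.cross_entropy(logits, y[part])
        opt.zero_grad()
        loss.backward()
        opt.step()
```

Note the design trade-off this structure implies: restricting each step to an induced subgraph discards cross-partition edges during training, which is precisely why the quality of the partitioning matters and why the paper pairs local training with a resource-aware partitioner.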