As real-world graphs expand in size, larger GNN models with billions of parameters are being deployed. The high parameter count of such models makes training and inference on graphs expensive and challenging. To reduce the computational and memory costs of GNNs, optimization methods such as pruning redundant nodes and edges in input graphs have been commonly adopted. However, model compression, which directly targets the sparsification of model layers, has been mostly limited to traditional Deep Neural Networks (DNNs) used for tasks such as image classification and object detection. In this paper, we apply two state-of-the-art model compression methods, (1) train and prune and (2) sparse training, to the sparsification of weight layers in GNNs. We evaluate and compare the efficiency of both methods in terms of accuracy, training sparsity, and training FLOPs on real-world graphs. Our experimental results show that on the ia-email, wiki-talk, and stackoverflow datasets for link prediction, sparse training achieves accuracy comparable to the train-and-prune method at much lower training FLOPs. On the brain dataset for node classification, sparse training uses fewer FLOPs (less than 1/7 of the FLOPs of the train-and-prune method) and preserves substantially better accuracy under extreme model sparsity.
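To make the two compression strategies concrete, the following is a minimal sketch (not the paper's implementation) contrasting magnitude-based train-and-prune with training under a fixed sparsity mask, applied to a single GNN weight matrix. It assumes PyTorch; the GCN-style layer, the 90% sparsity target, and all function names are illustrative assumptions.

```python
# Sketch only: contrasts post-training magnitude pruning ("train and prune")
# with a fixed-mask "sparse training" setup on one GNN weight matrix.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """Single graph-convolution layer: H' = A_hat @ H @ W (illustrative)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(in_dim, out_dim) * 0.01)

    def forward(self, a_hat, h):
        return a_hat @ h @ self.weight

def magnitude_prune(weight, sparsity=0.9):
    """Train-and-prune step: after dense training, zero the smallest-magnitude weights."""
    k = int(sparsity * weight.numel())
    threshold = weight.abs().flatten().kthvalue(k).values
    mask = (weight.abs() > threshold).float()
    return weight * mask, mask

def sparse_training_mask(weight, sparsity=0.9):
    """Sparse training: fix a sparse mask up front; only surviving weights are ever trained."""
    return (torch.rand_like(weight) > sparsity).float()

# Usage sketch: in sparse training the mask is applied to the weights (and gradients)
# at every step, so the dense matrix is never trained -- the source of the FLOP savings.
layer = GCNLayer(16, 16)
mask = sparse_training_mask(layer.weight, sparsity=0.9)
with torch.no_grad():
    layer.weight.mul_(mask)  # enforce the mask before (and after each) update
```

In the train-and-prune setting, by contrast, `magnitude_prune` would be applied only after full dense training, which is why its training FLOPs remain those of the dense model.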