Message-Passing Neural Networks (MPNNs), the most prominent Graph Neural Network (GNN) framework, have achieved considerable success in the analysis of graph-structured data. Concurrently, the sparsification of neural network models attracts a great amount of academic and industrial interest. In this paper, we conduct a structured empirical study of the effect of sparsification on the trainable part of MPNNs, known as the Update step. To this end, we design a series of models that successively sparsify the linear transform in the Update step. Specifically, we propose the ExpanderGNN model, with a tuneable sparsification rate, and the Activation-Only GNN, which has no linear transform in the Update step at all. In agreement with a growing trend in the literature, we change the sparsification paradigm by initialising sparse neural network architectures rather than expensively sparsifying already-trained architectures. Our novel benchmark models enable a better understanding of the influence of the Update step on model performance and outperform existing simplified benchmark models such as the Simple Graph Convolution. The ExpanderGNNs, and in some cases the Activation-Only models, achieve performance on par with their vanilla counterparts on several downstream tasks, while containing significantly fewer trainable parameters. Our code is publicly available at: https://github.com/ChangminWu/ExpanderGNN.
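To make the two proposed sparsification extremes concrete, below is a minimal PyTorch sketch, assuming a fixed binary mask applied to the Update step's weight matrix. The class name `ExpanderLinear`, the `density` parameter, and the uniform-random mask are illustrative stand-ins for the paper's expander-graph construction, not the released implementation.

```python
import torch
import torch.nn as nn


class ExpanderLinear(nn.Module):
    """Linear layer whose weight matrix is masked by a fixed sparse pattern
    chosen at initialisation. Because the mask multiplies the weight in the
    forward pass, masked-out entries receive zero gradient and are never
    trained; `density` plays the role of the tuneable sparsification rate."""

    def __init__(self, in_features: int, out_features: int, density: float = 0.1):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        nn.init.xavier_uniform_(self.weight)
        # Illustrative uniform-random mask; a genuine expander construction
        # would additionally guarantee good connectivity between neurons.
        mask = (torch.rand(out_features, in_features) < density).float()
        self.register_buffer("mask", mask)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ (self.weight * self.mask).t()


def activation_only_update(h: torch.Tensor) -> torch.Tensor:
    """Activation-Only Update step: the linear transform is removed
    entirely and only a pointwise non-linearity is applied."""
    return torch.relu(h)
```

At `density=1.0` the sketch reduces to a standard dense Update step, while `activation_only_update` corresponds to the other end of the spectrum with no trainable parameters in the Update step at all.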