The graph neural network (GNN) has demonstrated superior performance in various applications. The working mechanism behind it, however, remains mysterious. GNN models are designed to learn effective representations for graph-structured data, which intrinsically coincides with the principle of graph signal denoising (GSD). Algorithm unrolling, a "learning to optimize" technique, has gained increasing attention due to its prospects in building efficient and interpretable neural network architectures. In this paper, we introduce a class of unrolled networks built upon truncated optimization algorithms (e.g., gradient descent and proximal gradient descent) for GSD problems. They are shown to be tightly connected to many popular GNN models, in that the forward propagation in these GNNs is in fact an unrolled network solving a specific GSD problem. Moreover, the training process of a GNN model can be seen as solving a bilevel optimization problem with a GSD problem at the lower level. Such a connection brings a fresh view of GNNs: we can try to understand their practical capabilities through their GSD counterparts, and it also motivates the design of new GNN models. Based on the algorithm unrolling perspective, an expressive model named UGDGNN, i.e., unrolled gradient descent GNN, is further proposed, which inherits appealing theoretical properties. Extensive numerical simulations on seven benchmark datasets demonstrate that UGDGNN achieves superior or competitive performance over state-of-the-art models.
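To make the unrolling idea concrete, the sketch below (illustrative only; the function name, objective, and step-size choice are assumptions, not the paper's exact formulation) unrolls a fixed number of gradient-descent steps on a standard GSD objective, f(x) = ||x - y||^2 + lam * x^T L x, where y is the noisy graph signal and L a graph Laplacian. Each unrolled step has the same "mix node features with their neighbors" form as one GNN propagation layer.

```python
import numpy as np

def unrolled_gsd(y, L, lam=1.0, step=None, K=10):
    """Unroll K gradient-descent steps on the graph signal denoising
    objective  f(x) = ||x - y||^2 + lam * x^T L x.

    y    : noisy graph signal, shape (n,) or (n, d)
    L    : graph Laplacian, shape (n, n)
    lam  : smoothness regularization weight (assumed hyperparameter)
    step : gradient step size; if None, a conservative default is used
    K    : number of unrolled iterations (i.e., "layers")
    """
    if step is None:
        # Gradient is 2(x - y) + 2*lam*L x; for a normalized Laplacian
        # (eigenvalues in [0, 2]) the Lipschitz constant is at most
        # 2*(1 + 2*lam), so this step size guarantees convergence.
        step = 1.0 / (2.0 * (1.0 + 2.0 * lam))
    x = y.copy()
    for _ in range(K):
        grad = 2.0 * (x - y) + 2.0 * lam * (L @ x)
        # One unrolled step: a convex-style mix of the observation y
        # and a neighborhood aggregation L @ x -- a GNN-layer-like update.
        x = x - step * grad
    return x
```

In an unrolled network, quantities such as `lam`, `step`, or per-layer weight matrices become learnable parameters trained end-to-end, which is exactly the bilevel view described above: the lower level is the GSD iteration, the upper level fits those parameters to the downstream task.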