Recently, graph-based models designed for downstream tasks have significantly advanced research on graph neural networks (GNNs). However, GNN baselines built on neural message-passing mechanisms, such as GCN and GAT, perform worse as the network deepens. Numerous GNN variants, including many deep GNNs, have therefore been proposed to tackle this performance-degradation problem. Yet a unified framework that connects these existing models and explains their effectiveness at a high level is still lacking. In this work, we focus on deep GNNs and propose a novel view for understanding them. We establish a theoretical framework via inference on a probabilistic graphical model: given the fixed point equation (FPE) derived from variational inference on Markov random fields, deep GNNs such as JKNet, GCNII, and DGCN, as well as classical GNNs such as GCN, GAT, and APPNP, can be regarded as different approximations of the FPE. Moreover, this framework yields more accurate approximations of the FPE, guiding us to design a more powerful GNN: the coupling graph neural network (CoGNet). Extensive experiments on citation networks and natural language processing downstream tasks demonstrate that CoGNet outperforms state-of-the-art models.
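As an illustrative sketch (the abstract does not state the paper's exact FPE, so the form below is an assumption based on standard mean-field variational inference for a pairwise Markov random field), the fixed point equation for the factorized approximation $q(x) = \prod_i q_i(x_i)$, with unary potentials $\theta_i$ and pairwise potentials $\theta_{ij}$ on edges of the graph, reads

$$
q_i(x_i) \;\propto\; \exp\!\Big( \theta_i(x_i) + \sum_{j \in \mathcal{N}(i)} \mathbb{E}_{q_j}\big[\theta_{ij}(x_i, x_j)\big] \Big),
$$

where $\mathcal{N}(i)$ denotes the neighbors of node $i$. Iterating this update propagates each node's belief through its neighborhood, which is the sense in which message-passing GNN layers can be read as truncated or parameterized approximations of such a fixed point iteration.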