It has been observed that graph neural networks (GNNs) sometimes struggle to maintain a healthy balance between efficiently modeling long-range dependencies across nodes and avoiding unintended consequences such as oversmoothed node representations or sensitivity to spurious edges. To address this issue (among other things), two separate strategies have recently been proposed, namely implicit GNNs (IGNNs) and unfolded GNNs (UGNNs). The former treats node representations as the fixed points of a deep equilibrium model, which can efficiently facilitate arbitrary implicit propagation across the graph with a fixed memory footprint. In contrast, the latter treats graph propagation as unfolded descent iterations applied to some graph-regularized energy function. While the two are motivated differently, in this paper we carefully quantify explicit situations where the solutions they produce are equivalent and others where their properties sharply diverge, including analyses of convergence, representational capacity, and interpretability. In support of this analysis, we also provide empirical head-to-head comparisons across multiple synthetic and public real-world node classification benchmarks. These results indicate that while IGNNs are substantially more memory-efficient, UGNN models support unique, integrated graph attention mechanisms and propagation rules that can achieve SOTA node classification accuracy across disparate regimes such as adversarially perturbed graphs, graphs with heterophily, and graphs involving long-range dependencies.
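The two propagation schemes contrasted above can be sketched in a few lines. The following is a minimal illustrative sketch, not the paper's actual models: the function names, the `tanh` equilibrium map, and the quadratic energy `E(Y) = ||Y - X||_F^2 + lam * tr(Y^T L Y)` are simplifying assumptions chosen so that the fixed-point iteration contracts and the descent iterations provably reduce the energy.

```python
import numpy as np

def implicit_gnn(A, X, W, max_iter=200, tol=1e-6):
    """Implicit (deep-equilibrium-style) propagation: node states are the
    fixed point of Y = tanh(A @ Y @ W + X), found by simple iteration.
    Memory is constant in the number of iterations, since only the
    current state Y is stored (hypothetical minimal form)."""
    Y = np.zeros_like(X)
    for _ in range(max_iter):
        Y_next = np.tanh(A @ Y @ W + X)
        if np.linalg.norm(Y_next - Y) < tol:
            break
        Y = Y_next
    return Y_next

def unfolded_gnn(L, X, lam=1.0, alpha=0.05, n_layers=50):
    """Unfolded propagation: each 'layer' is one gradient-descent step on
    the graph-regularized energy
        E(Y) = ||Y - X||_F^2 + lam * tr(Y^T L Y),
    where L is the graph Laplacian. The iterations pull Y toward the
    input features X while penalizing disagreement across edges."""
    Y = X.copy()
    for _ in range(n_layers):
        grad = 2.0 * (Y - X) + 2.0 * lam * (L @ Y)
        Y = Y - alpha * grad
    return Y
```

In this toy form the correspondence discussed in the paper is visible directly: if the implicit map is itself chosen as the gradient step of some energy, its fixed point coincides with the energy's minimizer, whereas a finite unfolded network only approximates it.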