Dynamical systems that minimize an energy are ubiquitous in geometry and physics. We propose a novel framework for GNNs in which we parametrize (and {\em learn}) an energy functional and then take the GNN equations to be the gradient flow of this energy. This approach allows us to analyse the GNN evolution from a multi-particle perspective, as learning attractive and repulsive forces in feature space via the positive and negative eigenvalues of a symmetric `channel-mixing' matrix. We conduct a spectral analysis of the solutions and provide a better understanding of the role of the channel-mixing matrix in (residual) graph convolutional models and of its ability to steer the diffusion away from over-smoothing. We perform thorough ablation studies corroborating our theory and show competitive performance of simple models on homophilic and heterophilic datasets.
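The gradient-flow construction described above can be sketched in a few lines. The following is a minimal NumPy illustration, not the paper's implementation: the function name \texttt{gradflow\_step}, the step size \texttt{tau}, and the specific quadratic energy (a Dirichlet-type attraction term plus a node-wise potential) are illustrative assumptions. Symmetrizing the learned channel-mixing matrices makes the update an exact discrete gradient descent step on the stated energy.

```python
import numpy as np

def gradflow_step(X, A, W, Omega, tau=0.01):
    """One explicit-Euler step of a gradient-flow GNN layer (illustrative sketch).

    X     : (n, d) node feature matrix
    A     : (n, n) symmetric adjacency (or normalized adjacency) matrix
    W     : (d, d) learned channel-mixing matrix (symmetrized below)
    Omega : (d, d) learned node-wise potential matrix (symmetrized below)
    """
    # Symmetrize so the update is the gradient flow of a well-defined energy:
    #   E(X) = 1/2 tr(X Os X^T) - 1/2 tr(X^T A X Ws)
    Ws = 0.5 * (W + W.T)
    Os = 0.5 * (Omega + Omega.T)
    # Explicit Euler discretization of dX/dt = -grad E(X) = A X Ws - X Os.
    # Positive eigenvalues of Ws act as attractive forces along edges
    # (smoothing); negative eigenvalues act as repulsive forces (sharpening).
    return X + tau * (A @ X @ Ws - X @ Os)

def energy(X, A, W, Omega):
    """The quadratic energy whose gradient flow gradflow_step discretizes."""
    Ws = 0.5 * (W + W.T)
    Os = 0.5 * (Omega + Omega.T)
    return 0.5 * np.trace(X @ Os @ X.T) - 0.5 * np.trace(X.T @ A @ X @ Ws)
```

For a small enough step size, each call to \texttt{gradflow\_step} decreases \texttt{energy}, which is the sense in which the network's evolution minimizes a learned functional.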