Due to the over-smoothing issue, most existing graph neural networks can only capture limited dependencies with their inherently finite aggregation layers. To overcome this limitation, we propose a new kind of graph convolution, called Graph Implicit Nonlinear Diffusion (GIND), which implicitly has access to infinite hops of neighbors while adaptively aggregating features with nonlinear diffusion to prevent over-smoothing. Notably, we show that the learned representation can be formalized as the minimizer of an explicit convex optimization objective. With this property, we can theoretically characterize the equilibrium of our GIND from an optimization perspective. More interestingly, we can induce new structural variants by modifying the corresponding optimization objective. To be specific, we can embed prior properties into the equilibrium, and introduce skip connections to promote training stability. Extensive experiments show that GIND effectively captures long-range dependencies and, with its nonlinear diffusion, performs well on both homophilic and heterophilic graphs. Moreover, we show that the optimization-induced variants of our model boost performance while improving training stability and efficiency. As a result, our GIND obtains significant improvements on both node-level and graph-level tasks.
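To make the implicit formulation concrete, the following is a minimal sketch of an implicit graph layer whose output is the fixed point of a nonlinear diffusion map, computed by simple iteration. This is an illustrative assumption rather than the exact GIND update rule or objective: the function `diffusion_step`, the weight scaling, and all names here are hypothetical.

```python
# Minimal sketch: an implicit graph layer solved by fixed-point iteration.
# NOTE: hypothetical illustration of the implicit-diffusion idea, not the
# exact GIND diffusion form described in the paper.
import numpy as np


def normalized_adjacency(adj):
    """Symmetrically normalize an adjacency matrix with added self-loops."""
    adj = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(adj.sum(axis=1))
    return adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]


def diffusion_step(z, x, a_hat, w):
    """One nonlinear diffusion update: aggregate neighbor states, apply a
    nonlinearity, and re-inject the input features (hypothetical form)."""
    return np.tanh(a_hat @ z @ w) + x


def implicit_forward(x, a_hat, w, max_iter=100, tol=1e-6):
    """Iterate the diffusion map to an approximate equilibrium Z* = f(Z*, X),
    i.e. the representation of an 'infinite-depth' aggregation."""
    z = np.zeros_like(x)
    for _ in range(max_iter):
        z_next = diffusion_step(z, x, a_hat, w)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    adj = np.array([[0, 1, 0, 0],
                    [1, 0, 1, 1],
                    [0, 1, 0, 1],
                    [0, 1, 1, 0]], dtype=float)
    x = rng.normal(size=(4, 8))
    w = rng.normal(size=(8, 8))
    # Rescale W so the iteration map is a contraction and the fixed point exists.
    w *= 0.9 / np.linalg.norm(w, 2)
    a_hat = normalized_adjacency(adj)
    z_star = implicit_forward(x, a_hat, w)
    print("equilibrium representation shape:", z_star.shape)
```

In this sketch the equilibrium is reached by naive iteration of a contractive map; in practice an implicit model would also need gradients through the fixed point (e.g. via implicit differentiation), which is omitted here.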