In recent years, a plethora of spectral graph neural network (GNN) methods have utilized polynomial bases with learnable coefficients to achieve top-tier performance on many node-level tasks. Although various kinds of polynomial bases have been explored, each such method adopts a fixed polynomial basis, which may not be the optimal choice for the given graph. Moreover, we identify the so-called over-passing issue of these methods and show that it is partly rooted in their less-principled regularization strategies and unnormalized bases. In this paper, we make the first attempt to address these two issues. Leveraging Jacobi polynomials, we design a novel spectral GNN, LON-GNN, with Learnable OrthoNormal bases, and prove that regularizing the coefficients is then equivalent to regularizing the norm of the learned filter function. We conduct extensive experiments on diverse graph datasets to evaluate the fitting and generalization capabilities of LON-GNN, and the results demonstrate its superiority.
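To make the abstract's key claim concrete, below is a minimal sketch (not the authors' implementation) of a spectral filter expanded in an orthonormal Jacobi basis. It assumes a small graph with a normalized Laplacian whose eigendecomposition is affordable; the function names and the hyperparameters `a`, `b`, and `coeffs` are hypothetical. The point it illustrates is that, once each Jacobi polynomial is divided by its norm, an L2 penalty on the coefficients equals a penalty on the filter function's norm.

```python
# A minimal sketch of a polynomial spectral filter built from an
# orthonormal Jacobi basis. Hypothetical names; assumes a normalized
# graph Laplacian L with spectrum in [0, 2].
import numpy as np
from scipy.special import eval_jacobi, gammaln

def jacobi_norm_sq(n, a, b):
    """Squared norm of P_n^{(a,b)} under the weight (1-x)^a (1+x)^b."""
    log_h = ((a + b + 1) * np.log(2.0)
             - np.log(2 * n + a + b + 1)
             + gammaln(n + a + 1) + gammaln(n + b + 1)
             - gammaln(n + a + b + 1) - gammaln(n + 1))
    return np.exp(log_h)

def orthonormal_jacobi_filter(L, coeffs, a=1.0, b=1.0):
    """Build g(L) with g(x) = sum_k c_k * P_k^{(a,b)}(x) / sqrt(h_k).

    Because the basis is orthonormal, ||g||^2 = sum_k c_k^2, so an L2
    penalty on `coeffs` directly penalizes the norm of the learned
    filter -- the equivalence the abstract highlights.
    """
    lam, U = np.linalg.eigh(L)     # spectrum of the normalized Laplacian
    x = lam - 1.0                  # map [0, 2] onto the Jacobi domain [-1, 1]
    g = np.zeros_like(x)
    for k, c in enumerate(coeffs):
        g += c * eval_jacobi(k, a, b, x) / np.sqrt(jacobi_norm_sq(k, a, b))
    return U @ np.diag(g) @ U.T    # filtered graph operator g(L)
```

In a full model, `coeffs` would be trainable parameters (one set per feature channel), and a practical implementation would apply the polynomial recurrence to feature matrices rather than eigendecomposing the Laplacian.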