Designing spectral convolutional networks is a challenging problem in graph learning. ChebNet, one of the early attempts, approximates the spectral convolution using Chebyshev polynomials. GCN simplifies ChebNet by utilizing only the first two Chebyshev polynomials, yet it still outperforms ChebNet on real-world datasets. GPR-GNN and BernNet demonstrate that the Monomial and Bernstein bases also outperform the Chebyshev basis in terms of learning the spectral convolution. Such conclusions are counter-intuitive in the field of approximation theory, where it is established that the Chebyshev polynomial achieves the optimal convergence rate for approximating a function. In this paper, we revisit the problem of approximating the spectral convolution with Chebyshev polynomials. We show that ChebNet's inferior performance is primarily due to illegal coefficients learnt by ChebNet when approximating analytic filter functions, which leads to over-fitting. We then propose ChebNetII, a new GNN model based on Chebyshev interpolation, which enhances the original Chebyshev polynomial approximation while reducing the Runge phenomenon. We conducted an extensive experimental study to demonstrate that ChebNetII can learn arbitrary graph spectral filters and achieves superior performance on both full- and semi-supervised node classification tasks.
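The contrast the abstract draws between plain polynomial fitting and Chebyshev interpolation can be seen numerically. Below is a minimal NumPy sketch, independent of any GNN library, comparing interpolation of Runge's classic function at equispaced nodes (which exhibits the Runge phenomenon) versus at Chebyshev nodes (which suppresses it). The degree `K = 16` and the evaluation grid size are illustrative choices, not values from the paper.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def runge(x):
    # Runge's function: the standard example for which equispaced
    # polynomial interpolation diverges near the interval endpoints.
    return 1.0 / (1.0 + 25.0 * x**2)

def interp_max_error(nodes, n_eval=1000):
    # Fit the unique degree-(len(nodes)-1) interpolating polynomial
    # through (nodes, runge(nodes)); the Chebyshev basis is used only
    # for numerical conditioning, the interpolant itself is basis-free.
    coef = C.chebfit(nodes, runge(nodes), len(nodes) - 1)
    xs = np.linspace(-1.0, 1.0, n_eval)
    return float(np.max(np.abs(C.chebval(xs, coef) - runge(xs))))

K = 16  # polynomial degree (illustrative)
equi = np.linspace(-1.0, 1.0, K + 1)
# Chebyshev nodes of the first kind: x_j = cos((j + 1/2) * pi / (K + 1))
cheb = np.cos((np.arange(K + 1) + 0.5) * np.pi / (K + 1))

err_equi = interp_max_error(equi)
err_cheb = interp_max_error(cheb)
print(f"equispaced max error: {err_equi:.4f}")
print(f"Chebyshev  max error: {err_cheb:.4f}")
```

The equispaced error grows with the degree while the Chebyshev-node error decays geometrically, which is the approximation-theoretic motivation behind interpolating the filter at Chebyshev nodes rather than learning unconstrained polynomial coefficients.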