Polynomial filters, a class of Graph Neural Networks, typically use a predetermined polynomial basis and learn the coefficients from the training data. It has been observed that the effectiveness of the model is highly dependent on the properties of the polynomial basis. Consequently, two natural and fundamental questions arise: Can we learn a suitable polynomial basis from the training data? Can we determine the optimal polynomial basis for a given graph and node features? In this paper, we propose two spectral GNN models that provide positive answers to these questions. First, inspired by Favard's Theorem, we propose the FavardGNN model, which learns a polynomial basis from the space of all possible orthonormal bases. Second, we examine the supposedly unsolvable definition of the optimal polynomial basis from Wang & Zhang (2022) and propose a simple model, OptBasisGNN, which computes the optimal basis for a given graph structure and graph signal. Extensive experiments are conducted to demonstrate the effectiveness of our proposed models.
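To make the Favard-inspired idea concrete, below is a minimal sketch, not the authors' reference implementation, of a polynomial filter whose basis is defined implicitly by a learnable three-term recurrence. Favard's Theorem guarantees that any such recurrence (with positive recurrence weights) corresponds to some orthonormal polynomial basis, so learning the recurrence coefficients amounts to learning the basis. The framework (PyTorch), the class name `FavardFilter`, and the parameterization of the coefficients (`beta`, `raw_gamma`, `alpha`) are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FavardFilter(nn.Module):
    """Hypothetical sketch of a Favard-style polynomial filter.

    By Favard's Theorem, a polynomial sequence satisfying a three-term
    recurrence with positive weights is orthonormal w.r.t. some weight
    function. Here both the recurrence coefficients (beta_k, gamma_k) and
    the filter coefficients alpha_k are learned from the training data.
    """

    def __init__(self, K: int, num_channels: int):
        super().__init__()
        self.K = K
        # Per-channel recurrence coefficients (assumed parameterization).
        self.beta = nn.Parameter(torch.zeros(K + 1, num_channels))
        # Raw parameters mapped to positive values so the recurrence stays valid.
        self.raw_gamma = nn.Parameter(torch.zeros(K + 1, num_channels))
        # Combination coefficients over the K+1 basis polynomials.
        self.alpha = nn.Parameter(torch.full((K + 1, num_channels), 1.0 / (K + 1)))

    def forward(self, prop: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # prop: sparse (N, N) propagation matrix, e.g. I - normalized Laplacian.
        # x:    dense (N, C) graph signal (node features).
        sqrt_gamma = F.softplus(self.raw_gamma) + 1e-6   # enforce positivity
        z_prev = torch.zeros_like(x)                     # p_{-1}(L) x = 0
        z_curr = x / sqrt_gamma[0]                       # p_0(L) x (constant term)
        out = self.alpha[0] * z_curr
        for k in range(1, self.K + 1):
            # Three-term recurrence:
            # z_k = ((L - beta_{k-1}) z_{k-1} - sqrt(gamma_{k-1}) z_{k-2}) / sqrt(gamma_k)
            z_next = (torch.sparse.mm(prop, z_curr)
                      - self.beta[k - 1] * z_curr
                      - sqrt_gamma[k - 1] * z_prev) / sqrt_gamma[k]
            out = out + self.alpha[k] * z_next
            z_prev, z_curr = z_curr, z_next
        return out
```

In this sketch the filtered signal is a learned linear combination of the recursively generated basis terms; OptBasisGNN differs in that it derives the basis from the given graph structure and signal rather than learning the recurrence coefficients by gradient descent.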