This paper focuses on spectral graph convolutional neural networks (ConvNets), where filters are defined as elementwise multiplication in the frequency domain of a graph. In machine learning settings where the dataset consists of signals defined on many different graphs, the trained ConvNet should generalize to signals on graphs unseen in the training set. It is thus important to transfer ConvNets between graphs. Transferability, which is a certain type of generalization capability, can be loosely defined as follows: if two graphs describe the same phenomenon, then a single filter or ConvNet should have similar repercussions on both graphs. This paper aims at debunking the common misconception that spectral filters are not transferable. We show that if two graphs discretize the same "continuous" space, then a spectral filter or ConvNet has approximately the same repercussion on both graphs. Our analysis is more permissive than the standard analysis. Transferability is typically described as the robustness of the filter to small graph perturbations and re-indexing of the vertices. Our analysis also accounts for large graph perturbations. We prove transferability between graphs that can have completely different dimensions and topologies, only requiring that both graphs discretize the same underlying space in some generic sense.
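As a concrete illustration of filtering in the graph frequency domain, the minimal sketch below applies the same spectral response to signals on two cycle graphs of different sizes, both of which discretize the circle. The cycle graphs, the low-pass response g, and the sampled signal are illustrative assumptions for this sketch, not the constructions or experiments of the paper.

```python
# Minimal sketch of a spectral graph filter: a scalar function g applied to the
# Laplacian eigenvalues, i.e. elementwise multiplication in the graph frequency
# domain. All concrete choices below (cycle graphs, g, the signal) are assumptions.
import numpy as np

def normalized_laplacian(A):
    """Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    return np.eye(len(d)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def spectral_filter(A, x, g):
    """Apply the spectral filter g to signal x on the graph with adjacency A."""
    L = normalized_laplacian(A)
    lam, U = np.linalg.eigh(L)       # graph Fourier basis: eigenvectors of L
    x_hat = U.T @ x                  # graph Fourier transform of the signal
    return U @ (g(lam) * x_hat)      # elementwise multiplication, inverse transform

def cycle_adjacency(n):
    """Cycle graph on n vertices: one discretization of the circle."""
    A = np.zeros((n, n))
    idx = np.arange(n)
    A[idx, (idx + 1) % n] = 1
    A[(idx + 1) % n, idx] = 1
    return A

# The same filter g is applied on two cycle graphs of different sizes. Since both
# graphs discretize the circle, filtering samples of the same continuous signal
# should give similar results -- the informal notion of transferability.
g = lambda lam: np.exp(-5.0 * lam)                  # low-pass spectral response
rng = np.random.default_rng(0)
for n in (50, 200):
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    x = np.sin(3 * t) + 0.1 * rng.standard_normal(n)  # noisy samples on the circle
    y = spectral_filter(cycle_adjacency(n), x, g)
    print(n, y[:3])                                 # inspect part of the filtered output
```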