Stability is an important property of graph neural networks (GNNs) that explains their success in many problems of practical interest. Existing GNN stability results depend on the size of the graph, which restricts their applicability to graphs of moderate size. To understand the stability properties of GNNs on large graphs, we define manifold convolutions and consider neural networks supported on manifolds. These are defined in terms of manifold diffusions mediated by the Laplace-Beltrami (LB) operator and are interpreted as limits of GNNs running on graphs of growing size. We define manifold deformations and show that they lead to perturbations of the manifold's LB operator consisting of an absolute and a relative perturbation term. We then define two frequency-dependent manifold filters that split the infinite-dimensional spectrum of the LB operator into finite partitions, and prove that these filters are stable to absolute and relative perturbations of the LB operator, respectively. The stability bounds also reveal a trade-off between stability and discriminability. Moreover, manifold neural networks (MNNs) composed of these filters inherit the stability properties, while the nonlinear activation functions help improve discriminability. MNNs can therefore be both stable and discriminative. We verify our results numerically on shape classification with point cloud datasets.