Graph Neural Networks (GNNs) are information processing architectures for signals supported on graphs. They are presented here as generalizations of convolutional neural networks (CNNs) in which individual layers contain banks of graph convolutional filters instead of banks of classical convolutional filters. Otherwise, GNNs operate as CNNs do: filters are composed with pointwise nonlinearities and stacked in layers. It is shown that GNN architectures exhibit equivariance to permutation and stability to graph deformations. These properties provide a measure of explanation for the good performance of GNNs that is observed empirically. It is also shown that if graphs converge to a limit object, a graphon, GNNs converge to a corresponding limit object, a graphon neural network. This convergence justifies the transferability of GNNs across networks with different numbers of nodes.
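The layer structure described above, a bank of graph convolutional filters followed by a pointwise nonlinearity, can be sketched in a few lines. In the sketch below, a graph filter is taken to be a polynomial in a graph shift operator `S` (e.g. an adjacency or Laplacian matrix), which is one standard realization of graph convolution; the function name `gnn_layer` and the tensor layout of the filter taps `H` are illustrative assumptions, not from the source.

```python
import numpy as np

def gnn_layer(S, X, H, sigma=np.tanh):
    """One GNN layer (hypothetical minimal sketch).

    S : (n, n) graph shift operator, e.g. adjacency or Laplacian
    X : (n, f_in) graph signal with f_in features per node
    H : (K, f_in, f_out) filter taps; the filter bank is a
        degree-(K-1) polynomial in S applied to X
    sigma : pointwise nonlinearity
    """
    n, f_out = X.shape[0], H.shape[2]
    Z = np.zeros((n, f_out))
    Sk_X = X                    # S^0 X
    for k in range(H.shape[0]):
        Z += Sk_X @ H[k]        # accumulate k-th tap: S^k X H_k
        Sk_X = S @ Sk_X         # advance to S^(k+1) X
    return sigma(Z)             # pointwise nonlinearity

# Usage: random 5-node graph, 2 input features, 3 output features
rng = np.random.default_rng(0)
S = rng.random((5, 5)); S = (S + S.T) / 2   # symmetric shift operator
X = rng.random((5, 2))
H = rng.random((3, 2, 3))                   # K=3 taps
Y = gnn_layer(S, X, H)
print(Y.shape)  # (5, 3)
```

Because the filter is a polynomial in `S` and the nonlinearity acts pointwise, relabeling the nodes with a permutation matrix `P` permutes the output the same way, which is exactly the permutation equivariance the abstract refers to.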