Many neural networks for graphs are based on the graph convolution operator, proposed more than a decade ago. Since then, many alternative definitions have been proposed, which tend to add complexity (and non-linearity) to the model. In this paper, we follow the opposite direction by proposing simple graph convolution operators that can be implemented in single-layer graph convolutional networks. We show that our convolution operators are more theoretically grounded than many proposals in the literature, and that they exhibit state-of-the-art predictive performance on the considered benchmark datasets.
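For orientation, the sketch below illustrates what a generic single-layer graph convolution looks like: node features are propagated over a normalized adjacency matrix and then passed through a single linear map, with no intermediate non-linearities. This is a minimal illustration in the spirit of simplified GCNs, not the specific operators proposed in the paper; the normalization choice and the propagation depth `K` are illustrative assumptions.

```python
# Minimal sketch of a generic single-layer graph convolution (illustrative only;
# not the operators proposed in this paper).
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalized adjacency with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def single_layer_graph_convolution(A, X, W, K=2):
    """Propagate node features K steps over the graph, then apply one linear map."""
    S = normalized_adjacency(A)
    H = X
    for _ in range(K):
        H = S @ H      # linear feature smoothing over the graph
    return H @ W       # single (linear) layer, no intermediate non-linearity

# Toy usage: 4 nodes, 3 input features, 2 output classes (pre-softmax scores).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.randn(4, 3)
W = np.random.randn(3, 2)
print(single_layer_graph_convolution(A, X, W).shape)  # (4, 2)
```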