Graph neural networks have become one of the most important techniques for solving machine learning problems on graph-structured data. Recent work on vertex classification has proposed deep and distributed learning models to achieve high performance and scalability. However, we find that the feature vectors of benchmark datasets are already quite informative for the classification task, and the graph structure only provides a means to denoise the data. In this paper, we develop a theoretical framework based on graph signal processing for analyzing graph neural networks. Our results indicate that graph neural networks only perform low-pass filtering on feature vectors and do not have the non-linear manifold learning property. We further investigate their resilience to feature noise and offer some insights on the design of GCN-based graph neural networks.
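To make the low-pass-filtering claim concrete, here is a minimal numpy sketch (the toy graph, the synthetic feature signal, and all names in it are illustrative assumptions, not from the paper): it builds the GCN-style augmented normalized adjacency, which acts as the spectral filter (1 - lambda) on the augmented normalized Laplacian, and shows that repeated propagation attenuates every nonzero-frequency component of a feature signal while preserving the smoothest one.

```python
# Minimal sketch (assumed toy example, not the paper's code) of the low-pass view
# of GCN propagation: multiplying features by the augmented normalized adjacency
# A_hat corresponds to applying the filter (1 - lambda) in the eigenbasis of the
# augmented normalized Laplacian, so high-frequency components decay with depth.
import numpy as np

# Toy undirected graph: two triangles joined by one bridge edge (6 vertices).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# GCN-style augmented normalized adjacency: A_hat = D~^{-1/2} (A + I) D~^{-1/2}.
A_tilde = A + np.eye(n)
d = A_tilde.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt

# Augmented normalized Laplacian and its eigendecomposition (graph Fourier basis).
L_hat = np.eye(n) - A_hat
eigvals, U = np.linalg.eigh(L_hat)   # ascending eigenvalues; small = smooth signal
print("eigenvalues of L_hat:", np.round(eigvals, 3))

# A feature signal: a smooth component plus a high-frequency component plus noise.
rng = np.random.default_rng(0)
x = U[:, 0] + 0.5 * U[:, -1] + 0.1 * rng.standard_normal(n)

def spectrum(signal):
    # Graph Fourier transform: coefficients of the signal in the Laplacian eigenbasis.
    return U.T @ signal

for k in [0, 1, 2, 4]:
    filtered = np.linalg.matrix_power(A_hat, k) @ x
    coeffs = np.abs(spectrum(filtered))
    # Components at lambda > 0 are scaled by |1 - lambda|^k < 1, so they shrink
    # with k; only the lambda = 0 (smoothest) component survives deep propagation.
    print(f"k={k}: |spectral coefficients| =", np.round(coeffs, 3))
```

The printout illustrates the abstract's point: stacking propagation steps does not learn a non-linear manifold, it simply suppresses the high-frequency (noisy) part of the already informative feature vectors.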