From the original, theoretically well-grounded spectral graph convolution to the subsequent spatial-based message-passing models, spatial locality (in the vertex domain) has acted as a fundamental principle of most graph neural networks (GNNs). In spectral graph convolution, the filter is approximated by polynomials, where a $k$-order polynomial covers $k$-hop neighbors. In message passing, the various definitions of neighbors used in aggregation are, in effect, an extensive exploitation of spatial locality. For learning node representations, topological distance seems necessary, since it characterizes the basic relations between nodes. But for learning representations of entire graphs, does it still need to hold? In this work, we show that this principle is not necessary and that it hinders most existing GNNs from efficiently encoding graph structures. By removing it, together with the limitation of polynomial filters, the resulting new architecture significantly boosts performance on learning graph representations. We also study the effects of the graph spectrum on signals and interpret various existing improvements as different spectrum smoothing techniques. This offers a spatial understanding that quantitatively measures the effect of the spectrum on input signals, complementing the well-known spectral understanding in terms of high/low-pass filters. More importantly, it sheds light on developing powerful graph representation models.
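The locality claim above ($k$-order polynomial filter ⇒ $k$-hop support) can be checked directly: powers of the graph Laplacian $L^j$ are zero at entries $(u,v)$ whose shortest-path distance exceeds $j$. A minimal sketch, using a hypothetical path graph and arbitrary coefficients (none of this is from the paper's own architecture):

```python
import numpy as np

# Path graph on 5 nodes: 0-1-2-3-4 (a toy example, not the paper's data).
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0
D = np.diag(A.sum(axis=1))
L = D - A  # combinatorial graph Laplacian

def poly_filter(L, coeffs):
    """p(L) = sum_j coeffs[j] * L^j. A k-order polynomial (len(coeffs)=k+1)
    mixes information from at most k-hop neighbors, since (L^j)[u, v] = 0
    whenever the shortest path from u to v is longer than j."""
    out = np.zeros_like(L)
    P = np.eye(L.shape[0])
    for c in coeffs:
        out += c * P
        P = P @ L
    return out

H1 = poly_filter(L, [0.5, 0.5])        # 1-order filter: 1-hop support
H2 = poly_filter(L, [0.5, 0.3, 0.2])   # 2-order filter: 2-hop support

# Node 2 is two hops from node 0: invisible to the 1-order filter,
# reachable by the 2-order one.
print(H1[0, 2])  # 0.0
print(H2[0, 2])  # nonzero
```

The coefficients here are placeholders; in practice they are learned (as in Chebyshev-polynomial GCN variants), but the support pattern depends only on the polynomial order, which is the point the abstract makes.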