Graph Neural Networks (GNNs) are limited by their propagation operators. These operators often contain only non-negative elements and are shared across channels and layers, which restricts the expressiveness of GNNs. Moreover, some GNNs suffer from over-smoothing, which limits their depth. In contrast, Convolutional Neural Networks (CNNs) can learn diverse propagation filters, and phenomena like over-smoothing are typically not apparent in CNNs. In this paper, we bridge this gap by incorporating trainable, channel-wise weighting factors $\omega$ to learn and mix multiple smoothing and sharpening propagation operators at each layer. Our generic method is called $\omega$GNN, and we study two variants: $\omega$GCN and $\omega$GAT. For $\omega$GCN, we theoretically analyse its behaviour and the impact of $\omega$ on the obtained node features. Our experiments confirm these findings, demonstrating and explaining why neither variant over-smooths. Additionally, we experiment with 15 real-world datasets on node- and graph-classification tasks, where our $\omega$GCN and $\omega$GAT perform better than, or on par with, state-of-the-art methods.
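To make the channel-wise mixing idea concrete, below is a minimal PyTorch sketch of a single layer whose propagation operator is the blend $(1-\omega)I + \omega\hat{A}$ per output channel, where $\hat{A}$ is the symmetric normalized adjacency. This particular layer form, the class name `OmegaGCNLayer`, and the initialization are assumptions inferred from the abstract, not the authors' implementation; intuitively, $\omega \in (0,1)$ yields a smoothing operator, while $\omega$ outside that range produces negative propagation weights and a sharpening-like effect.

```python
import torch
import torch.nn as nn


class OmegaGCNLayer(nn.Module):
    """Hypothetical sketch of a layer with a learnable propagation operator.

    Applies P(omega) = (1 - omega) * I + omega * A_hat channel-wise, where
    omega is a trainable vector with one entry per output channel.
    """

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.lin = nn.Linear(in_channels, out_channels)
        # One trainable omega per channel; 0.5 starts as mild smoothing
        # (an assumed initialization, not taken from the paper).
        self.omega = nn.Parameter(torch.full((out_channels,), 0.5))

    def forward(self, x: torch.Tensor, a_hat: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, in_channels); a_hat: dense (num_nodes, num_nodes)
        h = self.lin(x)
        neighbor_avg = a_hat @ h  # smoothing term
        # Channel-wise mix of the identity (h) and the smoothed features.
        return (1.0 - self.omega) * h + self.omega * neighbor_avg


# Toy usage on a 3-node path graph with A_hat = D^{-1/2}(A + I)D^{-1/2}.
A = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
A_self = A + torch.eye(3)
d_inv_sqrt = A_self.sum(dim=1).pow(-0.5)
A_hat = d_inv_sqrt[:, None] * A_self * d_inv_sqrt[None, :]

layer = OmegaGCNLayer(in_channels=4, out_channels=8)
out = layer(torch.randn(3, 4), A_hat)  # shape: (3, 8)
```

Because $\omega$ is unconstrained during training, each channel can converge to its own smoothing or sharpening regime, which is one plausible reading of how mixing such operators counteracts over-smoothing.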