In recent years we have seen the rise of graph neural networks for prediction tasks on graphs. One of the dominant architectures is graph attention, owing to its ability to make predictions using weighted edge features rather than node features alone. In this paper we analyze, theoretically and empirically, graph attention networks and their ability to correctly label nodes in a classic classification task. More specifically, we study the performance of graph attention on the classic contextual stochastic block model (CSBM). In the CSBM the node and edge features are obtained from a mixture of Gaussians and the edges from a stochastic block model. We consider a general graph attention mechanism that takes random edge features as input to determine the attention coefficients. We study two cases. In the first, when the edge features are noisy, we prove that the majority of the attention coefficients are uniform up to a constant factor. This allows us to prove that graph attention with edge features is no better than simple graph convolution for achieving perfect node classification. In the second, we prove that when the edge features are clean, graph attention can distinguish intra-class from inter-class edges, which makes graph attention better than classic graph convolution.
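To make the data model concrete, the following is a minimal sketch, not the paper's code, of sampling from a two-class CSBM with Gaussian node and edge features. The parameter names (n, p, q, mu, sigma, edge_noise) are illustrative assumptions; edge_noise is meant to interpolate between the "noisy" and "clean" edge-feature regimes discussed above.

```python
# Hypothetical CSBM sampler: two communities, Gaussian mixture features.
import numpy as np

def sample_csbm(n=200, p=0.5, q=0.1, mu=1.0, sigma=1.0, edge_noise=1.0, rng=None):
    rng = np.random.default_rng(rng)
    labels = rng.integers(0, 2, size=n) * 2 - 1          # classes in {-1, +1}
    # Node features: two-component Gaussian mixture, mean +/- mu, std sigma.
    x = labels * mu + sigma * rng.standard_normal(n)
    # Edges: stochastic block model, intra-class prob p, inter-class prob q.
    same = labels[:, None] == labels[None, :]
    prob = np.where(same, p, q)
    adj = np.triu(rng.random((n, n)) < prob, k=1)
    adj = adj | adj.T                                    # undirected, no self-loops
    # Edge features: Gaussian mixture separating intra- from inter-class edges;
    # edge_noise controls how informative they are (an assumed parametrization).
    edge_feat = np.where(same, mu, -mu) + edge_noise * rng.standard_normal((n, n))
    return x, adj, edge_feat * adj                       # zero features on non-edges

x, adj, edge_feat = sample_csbm(rng=0)
```

Under this parametrization, a large edge_noise corresponds to the first case (attention coefficients nearly uniform, no advantage over graph convolution), while edge_noise near zero corresponds to the clean case where intra- and inter-class edges are separable.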