Graph neural networks (GNNs) have become ubiquitous in graph learning tasks such as node classification. Most GNN methods update node embeddings iteratively by aggregating information from their neighbors. However, they often suffer from negative disturbance caused by edges connecting nodes with different labels. One way to alleviate this negative disturbance is attention, but existing attention mechanisms typically rely on feature similarity and suffer from a lack of supervision. In this paper, we consider the label dependency of graph nodes and propose a decoupling attention mechanism that learns both hard and soft attention. The hard attention is learned on labels to produce a refined graph structure with fewer inter-class edges, thereby reducing the negative disturbance of aggregation. The soft attention is learned on features to maximize the information gain of message passing over the refined graph structure. Moreover, the learned attention guides both label propagation and feature propagation. Extensive experiments on five well-known benchmark graph datasets verify the effectiveness of the proposed method.
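To make the two attention roles concrete, the following is a minimal PyTorch sketch, not the authors' implementation: the hard attention is approximated here by discretely dropping edges whose endpoints disagree on the predicted label, and the soft attention by a scaled dot-product over features of the surviving edges; the function names, the dot-product scoring, and the `label_logits` input are illustrative assumptions.

```python
import torch

def segment_softmax(scores, index, num_nodes):
    """Numerically stable softmax of edge scores, grouped by target node."""
    maxes = torch.full((num_nodes,), float("-inf")).scatter_reduce(
        0, index, scores, reduce="amax", include_self=True)
    exp = (scores - maxes[index]).exp()
    denom = torch.zeros(num_nodes).scatter_add(0, index, exp)
    return exp / denom[index].clamp(min=1e-16)

def decoupled_attention_step(x, edge_index, label_logits):
    """One propagation step with decoupled hard and soft attention (sketch).

    x:            [N, d] node features
    edge_index:   [2, E] directed edges (source -> target)
    label_logits: [N, C] label predictions supervising the hard attention
    """
    num_nodes = x.size(0)
    src, dst = edge_index

    # Hard attention (hypothetical discretization): drop edges whose
    # endpoints disagree on the predicted label, approximating the
    # refined graph with fewer inter-class edges.
    pred = label_logits.argmax(dim=-1)
    keep = pred[src] == pred[dst]
    src, dst = src[keep], dst[keep]

    # Soft attention: scaled dot-product feature similarity on the
    # surviving edges, normalized per target node.
    sim = (x[src] * x[dst]).sum(dim=-1) / x.size(-1) ** 0.5
    alpha = segment_softmax(sim, dst, num_nodes)

    # Attention-weighted aggregation; the same weights could also drive
    # label propagation, as the abstract describes.
    out = torch.zeros_like(x)
    out.index_add_(0, dst, alpha.unsqueeze(-1) * x[src])
    return out
```

Filtering edges before the softmax avoids renormalizing over masked entries; in practice one would add self-loops so that nodes whose edges are all pruned still retain their own features.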