Graph Neural Networks (GNNs) aim to integrate node content with graph structure to learn node- or graph-level representations. However, most existing GNNs have been found to perform poorly on data with a high level of heterophily, i.e., where a large proportion of edges connect nodes with different class labels. Most recent efforts to tackle this problem focus on redesigning the feature-learning scheme. From another angle, this work is, to the best of our knowledge, the first to mitigate the negative impact of heterophily by optimizing the graph structure. Specifically, under the assumption that graph smoothing along heterophilous edges hurts prediction performance, we propose a structure learning method, \emph{LHE}, that identifies heterophilous edges to drop. A major advantage of this solution is that it boosts GNNs without careful modification of the feature-learning strategy. Extensive experiments demonstrate remarkable performance improvements of GNNs equipped with \emph{LHE} on multiple datasets across the full spectrum of homophily levels.
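To make the idea of dropping heterophilous edges concrete, the following is a minimal, hypothetical sketch, not the paper's actual \emph{LHE} algorithm: it assumes each edge is scored by the cosine similarity of its endpoint features and that low-similarity edges are pruned before GNN aggregation. The function name drop_heterophilous_edges and the threshold parameter are illustrative choices introduced here, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def drop_heterophilous_edges(x, edge_index, threshold=0.0):
    """Illustrative edge-pruning step (a sketch, not the paper's LHE method):
    score each edge by the cosine similarity of its endpoint features and keep
    only edges whose score exceeds `threshold`, on the premise that
    low-similarity (likely heterophilous) edges harm graph smoothing.

    x:          [num_nodes, num_features] node feature matrix
    edge_index: [2, num_edges] edge list in COO format
    """
    src, dst = edge_index
    scores = F.cosine_similarity(x[src], x[dst], dim=-1)  # one score per edge
    keep = scores > threshold
    return edge_index[:, keep]

# Toy usage: 4 nodes with random features and 4 directed edges.
x = torch.randn(4, 16)
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 0, 3, 2]])
pruned_edge_index = drop_heterophilous_edges(x, edge_index, threshold=0.1)
```

The pruned edge list would then be fed to any standard GNN in place of the original graph, which is what allows this kind of structure learning to improve performance without touching the feature-learning strategy.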