Graph Neural Networks (GNNs) have achieved remarkable success on graph-structured data. However, it has been observed that the performance of GNNs does not improve, and may even worsen, as the number of layers increases. This effect is known as over-smoothing: the representations of nodes from different classes become indistinguishable when many layers are stacked. In this work, we propose a new, simple, and efficient method to alleviate the over-smoothing problem in GNNs by explicitly using relations between node embeddings. Experiments on real-world datasets demonstrate that utilizing node embedding relations makes GNN models such as the Graph Attention Network more robust to over-smoothing and achieve better performance with deeper architectures. Our method can also be combined with other approaches for the best performance. GNN applications are vast and depend on the user's objective and the type of data they possess; mitigating over-smoothing can potentially improve model performance on all of these tasks.
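As a concrete illustration of what "explicitly using relations between node embeddings" could look like in practice, the sketch below adds a pairwise-distance regularizer to the training loss: embeddings of connected nodes are pulled together while embeddings of randomly sampled unconnected pairs are pushed beyond a margin. This is a minimal sketch under our own assumptions, not necessarily the method proposed in the paper; the function name `embedding_relation_loss`, the `margin` parameter, and the negative-sampling scheme are all illustrative.

```python
import torch
import torch.nn.functional as F

def embedding_relation_loss(h, edge_index, neg_edge_index, margin=1.0):
    """Hypothetical relation-based regularizer on node embeddings.

    h:              (N, d) node embeddings from any GNN layer
    edge_index:     (2, E) index pairs of connected nodes
    neg_edge_index: (2, E') index pairs of sampled unconnected nodes
    """
    # Distance between embeddings of connected nodes (to be minimized).
    pos = F.pairwise_distance(h[edge_index[0]], h[edge_index[1]])
    # Distance between embeddings of unconnected nodes, kept above a margin
    # so that representations of different classes stay distinguishable.
    neg = F.pairwise_distance(h[neg_edge_index[0]], h[neg_edge_index[1]])
    return pos.mean() + F.relu(margin - neg).mean()
```

In training, such a term would typically be weighted and added to the task loss, e.g. `loss = task_loss + lam * embedding_relation_loss(h, edge_index, neg_edge_index)`, where `lam` controls the strength of the regularization.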