Graph neural networks (GNNs) have become a powerful tool for processing graph-structured data, but they still face challenges in effectively aggregating and propagating information between layers, which limits their performance. We tackle this problem with a kernel regression (KR) approach, using the KR loss as the primary loss in self-supervised settings or as a regularization term in supervised settings. We show substantial performance improvements over the state of the art in both scenarios on multiple transductive and inductive node classification datasets, especially for deep networks. Unlike mutual information (MI), the KR loss is convex and easy to estimate in high dimensions, while it still indirectly maximizes the MI between its inputs. Our work highlights the potential of KR to advance the field of graph representation learning and to enhance the performance of GNNs. The code to reproduce our experiments is available at https://github.com/Anonymous1252022/KR_for_GNNs
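To make the supervised use case concrete, below is a minimal sketch of how a KR-style alignment loss could act as a regularizer alongside a standard cross-entropy objective. The closed-form ridge-regression form of `kr_loss`, the `model` interface returning a hidden representation together with logits, and the weighting factor `alpha` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def kr_loss(h, y_onehot, ridge=1e-3):
    """Hypothetical kernel-regression alignment loss (sketch only).

    Fits a ridge regression from hidden representations h (n x d) to
    one-hot targets y_onehot (n x c) in closed form and returns the
    residual error. The paper's actual KR loss may differ.
    """
    n, d = h.shape
    # Closed-form ridge solution: W = (H^T H + lambda * I)^(-1) H^T Y
    gram = h.T @ h + ridge * torch.eye(d, device=h.device)
    w = torch.linalg.solve(gram, h.T @ y_onehot)
    return F.mse_loss(h @ w, y_onehot)


def training_step(model, x, edge_index, y, y_onehot, alpha=0.5):
    """Supervised setting: KR loss as a regularization term.

    Assumes `model` returns an intermediate representation and the
    final logits; `alpha` trades off the two loss terms.
    """
    hidden, logits = model(x, edge_index)
    return F.cross_entropy(logits, y) + alpha * kr_loss(hidden, y_onehot)
```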