In graph neural networks (GNNs), both node features and labels are examples of graph signals, a key notion in graph signal processing (GSP). While it is common in GSP to impose signal smoothness constraints in learning and estimation tasks, it is unclear how this can be done for discrete node labels. We bridge this gap by introducing the concept of distributional graph signals. In our framework, we work with the distributions of node labels instead of their values and propose notions of smoothness and non-uniformity of such distributional graph signals. We then propose a general regularization method for GNNs that allows us to encode distributional smoothness and non-uniformity of the model output in semi-supervised node classification tasks. Numerical experiments demonstrate that our method can significantly improve the performance of most base GNN models in different problem settings.
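The abstract does not spell out the exact form of the regularizer, but the idea of penalizing the model's output distributions can be illustrated with a minimal sketch. The snippet below assumes a Dirichlet-energy-style smoothness term over edges and an entropy-based non-uniformity term on softmax outputs; the function name `distributional_regularizer` and the weights `alpha`, `beta` are hypothetical and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def distributional_regularizer(logits, edge_index, alpha=1.0, beta=1.0):
    """Hypothetical penalty on a GNN's output distributions (a sketch,
    not the paper's exact regularizer).

    logits:     [N, C] pre-softmax node outputs of a GNN
    edge_index: [2, E] graph edges as (source, target) index rows
    """
    p = F.softmax(logits, dim=-1)      # per-node label distributions
    src, dst = edge_index              # endpoints of each edge

    # Smoothness: distributions on adjacent nodes should be close
    # (squared L2 difference across edges, Dirichlet-energy style).
    smoothness = ((p[src] - p[dst]) ** 2).sum(dim=-1).mean()

    # Non-uniformity: discourage near-uniform, uninformative distributions
    # by penalizing the average entropy of the node distributions.
    entropy = -(p * torch.log(p + 1e-12)).sum(dim=-1).mean()

    return alpha * smoothness + beta * entropy
```

In semi-supervised node classification, such a term would simply be added to the supervised loss on the labeled nodes, e.g. `loss = F.cross_entropy(logits[train_mask], y[train_mask]) + distributional_regularizer(logits, edge_index)`.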