In the semi-supervised setting where labeled data are extremely limited, it remains a major challenge for message-passing-based graph neural networks (GNNs) to learn feature representations for nodes of the same class that are distributed discontinuously over the graph. To resolve this discontinuous information transmission problem, we propose a control principle that supervises representation learning by leveraging the prototypes (i.e., class centers) of labeled data. Treating graph learning as a discrete dynamic process and the prototypes of labeled data as "desired" class representations, we borrow the pinning control idea from automatic control theory to design feedback controllers for the feature learning process, attempting to minimize the differences between message-passing-derived features and the class prototypes in every round so as to generate class-relevant features. Specifically, we equip every node with an optimal controller in each round by learning the matching relationships between nodes and the class prototypes, enabling nodes to rectify the information aggregated from incompatible neighbors in a graph with strong heterophily. Our experiments demonstrate that the proposed PCGCN model achieves better performance than deep GNNs and other competitive heterophily-oriented methods, especially when the graph has very few labels and strong heterophily.
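To make the described mechanism concrete, the sketch below illustrates one possible reading of a single message-passing round with a prototype-based feedback controller: aggregate neighbor features, softly match each node to a class prototype, then push the aggregated feature toward its matched prototype. This is a minimal PyTorch sketch under stated assumptions; the update rule, the sigmoid control gain, and the softmax matching are illustrative choices, not the paper's exact formulation, and `PinningControlLayer` is a hypothetical name.

```python
# Minimal sketch of a pinning-control-style message-passing layer.
# Assumptions (not from the paper): the gating form, the soft prototype
# matching, and the parameterization of the per-node control gain.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PinningControlLayer(nn.Module):
    """One round of message passing with a prototype-based feedback controller."""

    def __init__(self, dim, num_classes):
        super().__init__()
        self.linear = nn.Linear(dim, dim)
        # Per-node control gain, predicted from the node's own feature
        # (hypothetical parameterization).
        self.gain = nn.Linear(dim, 1)
        self.num_classes = num_classes

    def forward(self, x, adj_norm, prototypes):
        # x: (N, dim) node features; adj_norm: (N, N) normalized adjacency;
        # prototypes: (C, dim) class centers computed from labeled nodes.
        h = adj_norm @ self.linear(x)                   # standard message passing
        # Soft matching between nodes and class prototypes.
        match = F.softmax(h @ prototypes.t(), dim=-1)   # (N, C)
        target = match @ prototypes                     # matched prototype per node
        # Feedback control: pull h toward its matched prototype.
        u = torch.sigmoid(self.gain(h)) * (target - h)  # control signal
        return F.relu(h + u)


if __name__ == "__main__":
    N, dim, C = 6, 8, 3
    x = torch.randn(N, dim)
    adj = torch.eye(N)                 # trivial graph, just for a shape check
    # Prototypes would normally be class-wise means of labeled node features.
    prototypes = torch.randn(C, dim)
    layer = PinningControlLayer(dim, C)
    print(layer(x, adj, prototypes).shape)  # torch.Size([6, 8])
```

In this reading, the controller acts as a corrective term added to the aggregated feature, so nodes surrounded by incompatible (heterophilous) neighbors can still be steered toward their class prototype each round.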