Graph neural networks (GNNs) rely on graph convolutions to learn features from network data. GNNs are stable to different types of perturbations of the underlying graph, a property that they inherit from graph filters. In this paper we leverage the stability property of GNNs to seek representations that are stable within a distribution. We propose a novel constrained learning approach that imposes a constraint on the stability condition of the GNN under a perturbation of choice. We showcase our framework on real-world data, corroborating that we can obtain more stable representations without compromising the overall accuracy of the predictor.
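To make the constrained-learning idea concrete, the following is a minimal toy sketch (not the paper's actual method or architecture): a polynomial graph filter is trained with a primal-dual scheme that minimizes a task loss subject to a stability constraint, i.e., the filter's output under a perturbed graph shift operator must stay within a tolerance `eps` of its output under the nominal one. All names, the perturbation model, and the tolerance are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def graph_filter(h, S, x):
    # Polynomial graph filter: y = sum_k h_k * S^k x
    y = np.zeros_like(x)
    z = x.copy()
    for hk in h:
        y += hk * z
        z = S @ z
    return y

# Toy symmetric graph shift operator, normalized by its spectral norm,
# and a small additive perturbation (hypothetical perturbation model).
n = 8
S = rng.random((n, n)); S = (S + S.T) / 2
S = S / np.linalg.norm(S, 2)
E = 0.05 * rng.standard_normal((n, n)); E = (E + E.T) / 2
S_pert = S + E

x = rng.standard_normal(n)
target = np.sin(x)            # toy regression target

eps = 0.1                     # stability tolerance (assumed value)
lam = 0.0                     # dual variable for the stability constraint
lr, lr_dual = 0.05, 0.01
h = 0.1 * rng.standard_normal(4)

def task_loss(h):
    return np.mean((graph_filter(h, S, x) - target) ** 2)

def stability_gap(h):
    # Output deviation between nominal and perturbed graphs
    d = graph_filter(h, S, x) - graph_filter(h, S_pert, x)
    return np.linalg.norm(d)

def lagrangian(h, lam):
    # Constrained problem: min task_loss s.t. stability_gap <= eps
    return task_loss(h) + lam * (stability_gap(h) - eps)

loss0 = task_loss(h)
for step in range(500):
    # Numerical gradient of the Lagrangian w.r.t. the filter taps
    g = np.zeros_like(h)
    for i in range(len(h)):
        hp = h.copy(); hp[i] += 1e-6
        g[i] = (lagrangian(hp, lam) - lagrangian(h, lam)) / 1e-6
    h = h - lr * g                                              # primal descent
    lam = max(0.0, lam + lr_dual * (stability_gap(h) - eps))    # dual ascent
```

The dual variable `lam` grows only while the stability constraint is violated, so the trade-off between accuracy and stability is tuned automatically rather than via a fixed penalty weight.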