As privacy protection receives growing attention, unlearning the effect of specific nodes from a pre-trained graph learning model has become equally important. However, due to node dependency in graph-structured data, representation unlearning in Graph Neural Networks (GNNs) is challenging and remains underexplored. In this paper, we fill this gap by first studying the unlearning problem in linear GNNs and then extending it to non-linear architectures. Given a set of nodes to unlearn, we propose PROJECTOR, which unlearns by projecting the weight parameters of the pre-trained model onto a subspace that is irrelevant to the features of the nodes to be forgotten. PROJECTOR overcomes the challenges caused by node dependency and achieves perfect data removal: the unlearned model parameters contain no information about the unlearned node features, a property guaranteed by algorithmic construction. Empirical results on real-world datasets demonstrate the effectiveness and efficiency of PROJECTOR.
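To make the projection step concrete, the following NumPy sketch illustrates one way such a projection could be realized for a linear GNN of the form Z = A X W: the pre-trained weights are projected onto the subspace spanned by the remaining node features, so they retain no component along directions unique to the deleted features. This is a minimal sketch under stated assumptions, not the paper's exact algorithm; the function name `project_weights` and the rank tolerance are illustrative.

```python
import numpy as np

def project_weights(W, X_remain, tol=1e-10):
    """Project pre-trained weights onto the span of the remaining node features.

    W        : (d, c) weight matrix of the pre-trained linear GNN.
    X_remain : (n_remain, d) feature matrix of the nodes that are kept.
    Returns weights whose columns lie in span(rows of X_remain), so any
    feature direction unique to the deleted nodes is annihilated.
    """
    # Orthonormal basis of the row space of X_remain via SVD
    # (robust to rank deficiency, unlike a plain QR decomposition).
    _, S, Vt = np.linalg.svd(X_remain, full_matrices=False)
    r = int(np.sum(S > tol * S[0]))  # numerical rank of the kept features
    V = Vt[:r].T                     # (d, r) basis of the kept-feature subspace
    P = V @ V.T                      # (d, d) orthogonal projector onto that subspace
    return P @ W                     # projected (unlearned) weights
```

By construction, P is an orthogonal projector, so the returned weights are exactly representable by the remaining features: whatever lies outside their span, including any direction contributed only by the forgotten nodes, is mapped to zero, which is the sense in which removal is guaranteed algorithmically rather than approximately.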