Recent advances in graph unlearning enhance model utility by keeping node representations essentially invariant while applying gradient ascent on the forget set to achieve unlearning. However, gradient ascent diverges rapidly, so this approach causes a drastic degradation in model utility during the unlearning process. In this paper, we introduce \textbf{INPO}, an \textbf{I}nfluence-aware \textbf{N}egative \textbf{P}reference \textbf{O}ptimization framework that slows this divergence and improves the robustness of model utility to the unlearning process. Specifically, we first show that NPO diverges more slowly than gradient ascent and theoretically establish that unlearning high-influence edges reduces the impact of unlearning on the retain set. We design an influence-aware message function that amplifies the influence of unlearned edges and mitigates the tight topological coupling between the forget set and the retain set; the influence of each edge is estimated quickly by a removal-based method. Additionally, we propose a topological entropy loss, motivated from a topological perspective, to avoid excessive loss of local structural information during unlearning. Extensive experiments on five real-world datasets demonstrate that the INPO-based model achieves state-of-the-art performance on all forget quality metrics while maintaining model utility. Code is available at \href{https://github.com/sh-qiangchen/INPO}{https://github.com/sh-qiangchen/INPO}.
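For context on the divergence-speed claim, the standard NPO objective (from the original NPO formulation; the notation below is illustrative and not this paper's, with $\pi_\theta$ the current model, $\pi_{\mathrm{ref}}$ the reference model, $\mathcal{D}_f$ the forget set, and $\beta > 0$ an inverse-temperature hyperparameter) can be sketched as
\begin{equation*}
\mathcal{L}_{\mathrm{NPO}} \;=\; \frac{2}{\beta}\,\mathbb{E}_{(x,y)\sim\mathcal{D}_f}\!\left[\log\!\left(1 + \left(\frac{\pi_\theta(y\mid x)}{\pi_{\mathrm{ref}}(y\mid x)}\right)^{\beta}\right)\right],
\end{equation*}
which recovers the gradient-ascent loss $-\mathbb{E}_{(x,y)\sim\mathcal{D}_f}\!\left[\log \pi_\theta(y\mid x)\right]$ (up to a constant) in the limit $\beta \to 0$; for $\beta > 0$ the gradient is adaptively down-weighted on examples that are already forgotten, which underlies the slower divergence.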