Recently, much research attention has been devoted to Web security, and a representative topic is the adversarial robustness of graph mining algorithms. In particular, a widely studied attack formulation is the graph manipulation attack, which modifies relational data to mislead the predictions of Graph Neural Networks (GNNs). Naturally, an intrinsic question is whether we can accurately identify such manipulations on a graph - we term this problem poisoned graph sanitation. In this paper, we present FocusedCleaner, a poisoned graph sanitation framework consisting of two modules: bi-level structural learning and victim node detection. Specifically, the structural learning module reverses the attack process to steadily sanitize the graph, while the detection module provides the "focus" - a narrowed and more accurate search region - for structural learning. The two modules operate in iterations and reinforce each other to sanitize a poisoned graph step by step. Extensive experiments demonstrate that FocusedCleaner outperforms state-of-the-art baselines on both poisoned graph sanitation and robustness improvement.
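The alternating loop described above - detect likely victim nodes, then sanitize the structure within that narrowed region - can be sketched as a minimal toy in Python. All function names and the feature-disagreement heuristics below are illustrative assumptions for exposition, not the paper's actual bi-level algorithms.

```python
import numpy as np

def detect_victim_nodes(adj, features, k=2):
    """Toy 'victim node detection': flag the k nodes whose own features
    disagree most with the mean of their neighbors' features
    (a hypothetical anomaly score, not the paper's detector)."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    neighbor_mean = adj @ features / deg
    scores = np.linalg.norm(features - neighbor_mean, axis=1)
    return np.argsort(scores)[-k:]  # k most suspicious node indices

def sanitize_step(adj, features, victims):
    """Toy 'structural learning': within the narrowed search region
    (edges incident to victim nodes), remove the single edge whose
    endpoints differ most in feature space."""
    adj = adj.copy()
    worst, worst_gap = None, -1.0
    for v in victims:
        for u in np.flatnonzero(adj[v]):
            gap = np.linalg.norm(features[v] - features[u])
            if gap > worst_gap:
                worst, worst_gap = (v, u), gap
    if worst is not None:
        v, u = worst
        adj[v, u] = adj[u, v] = 0  # drop one suspected adversarial edge
    return adj

def focused_clean(adj, features, n_iters=3):
    """Alternate detection and sanitation so the two steps can
    reinforce each other over a fixed number of rounds."""
    for _ in range(n_iters):
        victims = detect_victim_nodes(adj, features)
        adj = sanitize_step(adj, features, victims)
    return adj
```

For example, on a small graph where one node's features clash sharply with its neighborhood, a single round of `focused_clean` removes the edge connecting that outlier to the rest of the graph while keeping the adjacency matrix symmetric.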