Deep learning models on graphs have achieved remarkable performance in various graph analysis tasks, e.g., node classification, link prediction, and graph clustering. However, they are vulnerable to carefully crafted inputs, i.e., adversarial examples, which expose their uncertainty and unreliability. Accordingly, numerous studies on both attack and defense have emerged across different graph analysis tasks, leading to an arms race in graph adversarial learning. For instance, attackers employ poisoning attacks (which corrupt the training data) and evasion attacks (which perturb inputs at test time), while defenders correspondingly adopt preprocessing-based and adversarial-based methods. Despite this booming body of work, a unified problem definition and a comprehensive review are still lacking. To bridge this gap, we systematically investigate and summarize the existing works on graph adversarial learning tasks. Specifically, we survey and unify the existing works on attack and defense in graph analysis tasks, and provide proper definitions and taxonomies. Moreover, we emphasize the importance of related evaluation metrics and investigate and summarize them comprehensively. We hope our work can serve as a reference for relevant researchers and thus assist their studies. More details of our work are available at https://github.com/gitgiter/Graph-Adversarial-Learning.