With the success of graph embedding models in both academia and industry, the robustness of graph embedding against adversarial attacks has inevitably become a crucial problem in graph learning. Existing works usually perform attacks in a white-box fashion: they need access to model predictions/labels to construct their adversarial loss. However, the inaccessibility of predictions/labels makes white-box attacks impractical against real graph learning systems. This paper extends current frameworks in a more general and flexible direction -- we aim to attack various kinds of graph embedding models in a black-box setting. We investigate the theoretical connections between graph signal processing and graph embedding models, and formulate the graph embedding model as a general graph signal process with a corresponding graph filter. Building on this formulation, we design a generalized adversarial attacker: GF-Attack. Without accessing any labels or model predictions, GF-Attack performs the attack directly on the graph filter in a black-box fashion. We further prove that GF-Attack can mount an effective attack without knowing the number of layers of the graph embedding model. To validate the generality of GF-Attack, we instantiate the attacker on four popular graph embedding models. Extensive experiments on several benchmark datasets validate the effectiveness of GF-Attack.
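To make the "graph embedding as a graph filter" view concrete, here is a minimal sketch (not the paper's exact formulation) of the standard GCN-style filter from graph signal processing: the symmetrically normalized adjacency with self-loops, applied k times to a node-feature signal. All function names below are illustrative, not from GF-Attack's code.

```python
import numpy as np

def gcn_graph_filter(A):
    """GCN-style graph filter: D^{-1/2} (A + I) D^{-1/2}.

    Adding self-loops (A + I) and symmetric degree normalization is the
    fixed low-pass filter underlying a GCN layer.
    """
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # degrees of A + I
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # D^{-1/2}
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def filter_signal(A, X, k=2):
    """Propagate the signal X through k filter applications -- a k-layer
    linear graph embedding with nonlinearities and weights stripped out."""
    S = gcn_graph_filter(A)
    return np.linalg.matrix_power(S, k) @ X

# Tiny example: a triangle graph with one-hot node features.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
X = np.eye(3)
Z = filter_signal(A, X, k=2)  # fully smoothed: every row is [1/3, 1/3, 1/3]
```

In this view, an attacker that perturbs edges of `A` is directly perturbing the spectrum of the filter `S`, which is why GF-Attack can operate on the filter alone without any labels or predictions.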