Recent work has shown that graph neural networks (GNNs) are vulnerable to adversarial attacks on graph data. Common attack approaches are typically informed, i.e., they have access to information about node attributes such as labels and feature vectors. In this work, we study adversarial attacks that are uninformed, where an attacker only has access to the graph structure, but no information about node attributes. Here the attacker aims to exploit structural knowledge and the assumptions that GNN models make about graph data. In particular, the literature has shown that structural node centrality and similarity have a strong influence on learning with GNNs. Therefore, we study the impact of centrality and similarity on adversarial attacks against GNNs. We demonstrate that attackers can exploit this information to decrease the performance of GNNs by focusing on injecting links between nodes of low similarity and, surprisingly, low centrality. We show that structure-based uninformed attacks can approach the performance of informed attacks, while being computationally more efficient. With this paper, we present a new attack strategy on GNNs that we refer to as Structack. Structack can successfully manipulate the performance of GNNs with very limited information while operating under tight computational constraints. Our work contributes towards building more robust machine learning approaches on graphs.
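To make the attack idea concrete, the following is a minimal sketch of a structure-only perturbation in the spirit described above: it links low-centrality node pairs that are structurally dissimilar. This is an assumption-based illustration, not the paper's exact algorithm; degree is used here as a cheap centrality proxy and Jaccard neighborhood overlap as the similarity measure, and the function name `structack_like_attack` is hypothetical.

```python
import networkx as nx

def structack_like_attack(G: nx.Graph, n_perturbations: int) -> nx.Graph:
    """Sketch of an uninformed, structure-based attack: inject edges
    between low-centrality, low-similarity node pairs."""
    # Degree as a simple centrality proxy; take the least central nodes
    # as the candidate pool for edge injection.
    nodes_by_centrality = sorted(G.nodes(), key=lambda v: G.degree(v))
    candidates = nodes_by_centrality[: 2 * n_perturbations]

    def jaccard(u, v):
        # Neighborhood overlap as a structural similarity measure.
        nu, nv = set(G[u]), set(G[v])
        union = nu | nv
        return len(nu & nv) / len(union) if union else 0.0

    # Enumerate non-existing candidate edges and prefer the most
    # dissimilar (lowest-Jaccard) pairs.
    pairs = [
        (u, v)
        for i, u in enumerate(candidates)
        for v in candidates[i + 1:]
        if not G.has_edge(u, v)
    ]
    pairs.sort(key=lambda p: jaccard(*p))

    # Return a perturbed copy of the graph with the injected edges.
    G_adv = G.copy()
    G_adv.add_edges_from(pairs[:n_perturbations])
    return G_adv
```

Because the sketch relies only on the adjacency structure, it needs no node labels or features, which reflects the uninformed threat model studied in the paper.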