Graph Convolutional Networks (GCNs) have fueled a surge of research interest due to their encouraging performance on graph learning tasks, but they have also been shown to be vulnerable to adversarial attacks. In this paper, we investigate an effective graph structural attack that disrupts graph spectral filters in the Fourier domain, the theoretical foundation of GCNs. We define the notion of spectral distance based on the eigenvalues of the graph Laplacian to measure the disruption of spectral filters. We realize the attack by maximizing the spectral distance and propose an efficient approximation to reduce the time complexity incurred by eigen-decomposition. Experiments demonstrate the remarkable effectiveness of the proposed attack in both black-box and white-box settings, for both test-time evasion attacks and training-time poisoning attacks. Our qualitative analysis suggests a connection between the spectral changes imposed in the Fourier domain and the attack behavior in the spatial domain, providing empirical evidence that maximizing the spectral distance is an effective way to alter graph structural properties, disturb the frequency components seen by graph filters, and thereby affect the learning of GCNs.
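The core quantity above can be illustrated concretely. The following is a minimal sketch, not the paper's exact formulation: it assumes the spectral distance is the l2 distance between the eigenvalue vectors of the symmetric normalized Laplacians of the clean and perturbed graphs, and uses a toy edge-addition perturbation.

```python
import numpy as np

def normalized_laplacian(A):
    """Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def spectral_distance(A, A_pert):
    """l2 distance between sorted Laplacian eigenvalue vectors (illustrative
    definition; the attack would maximize this over allowed edge flips)."""
    lam = np.linalg.eigvalsh(normalized_laplacian(A))        # ascending order
    lam_pert = np.linalg.eigvalsh(normalized_laplacian(A_pert))
    return np.linalg.norm(lam - lam_pert)

# Toy example: a 4-node path graph vs. the same graph with one added edge.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_pert = A.copy()
A_pert[0, 3] = A_pert[3, 0] = 1.0  # structural perturbation: add edge (0, 3)

print(spectral_distance(A, A_pert))  # > 0: the perturbation shifts the spectrum
```

An attacker in this framework would search over a budget of edge flips for the perturbation maximizing this distance; the paper's approximation avoids recomputing the full eigen-decomposition at each step.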