Graph Neural Networks (GNNs) are a class of deep learning-based methods for processing graph domain information. GNNs have recently become a widely used graph analysis method due to their superior ability to learn representations for complex graph data. However, due to privacy concerns and regulatory restrictions, centralized GNNs can be difficult to apply to data-sensitive scenarios. Federated learning (FL) is an emerging technology developed for privacy-preserving settings in which several parties need to train a shared global model collaboratively. Although several research works have applied FL to train GNNs (Federated GNNs), there is no research on their robustness to backdoor attacks. This paper bridges this gap by conducting two types of backdoor attacks in Federated GNNs: centralized backdoor attacks (CBA) and distributed backdoor attacks (DBA). CBA is conducted by embedding the same global trigger during training for every malicious party, while DBA is conducted by decomposing a global trigger into separate local triggers and embedding them into the training datasets of different malicious parties, respectively. Our experiments show that the DBA attack success rate is higher than that of CBA in almost all evaluated cases. For CBA, the attack success rate of all local triggers is similar to that of the global trigger, even though the training set of the adversarial party is embedded only with the global trigger. To further explore the properties of the two backdoor attacks in Federated GNNs, we evaluate the attack performance for different numbers of clients, trigger sizes, poisoning intensities, and trigger densities. Moreover, we explore the robustness of DBA and CBA against two state-of-the-art defenses. We find that both attacks are robust against the investigated defenses, highlighting the need to consider backdoor attacks in Federated GNNs as a novel threat that requires custom defenses.
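The contrast between the two attacks can be sketched in code. The snippet below is a minimal illustration (not the paper's implementation): a trigger is modeled abstractly as a set of graph edges, and the helper names `assign_cba` and `assign_dba` are hypothetical. It shows how CBA gives every malicious party the identical global trigger, while DBA decomposes the global trigger into disjoint local pieces whose union is the global trigger.

```python
# Hedged sketch contrasting CBA and DBA trigger assignment in Federated GNNs.
# A trigger subgraph is represented abstractly as a set of edges; the
# function names and the round-robin split are illustrative assumptions.

def assign_cba(global_trigger, num_malicious):
    """CBA: every malicious party embeds the same global trigger."""
    return [set(global_trigger) for _ in range(num_malicious)]

def assign_dba(global_trigger, num_malicious):
    """DBA: decompose the global trigger into disjoint local triggers,
    one per malicious party; their union equals the global trigger."""
    edges = sorted(global_trigger)
    return [set(edges[i::num_malicious]) for i in range(num_malicious)]

# Example: a 4-edge global trigger shared among 2 malicious clients.
global_trigger = {(0, 1), (1, 2), (2, 3), (3, 0)}
cba = assign_cba(global_trigger, 2)
dba = assign_dba(global_trigger, 2)

assert all(t == global_trigger for t in cba)  # CBA: identical copies
assert set().union(*dba) == global_trigger    # DBA: pieces cover the whole
assert dba[0].isdisjoint(dba[1])              # DBA: pieces do not overlap
```

Under this view, the paper's evaluation (e.g., varying trigger size or the number of malicious clients) corresponds to changing the size of `global_trigger` or `num_malicious` in the decomposition.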