Graph Neural Networks (GNNs) are a class of deep learning-based methods for processing graph domain information. GNNs have recently become a widely used graph analysis method due to their superior ability to learn representations of complex graph data. However, due to privacy concerns and regulatory restrictions, centralized GNNs can be difficult to apply in data-sensitive scenarios. Federated learning (FL) is an emerging technology developed for privacy-preserving settings in which several parties need to train a shared global model collaboratively. Although several research works have applied FL to train GNNs (Federated GNNs), there is no research on their robustness to backdoor attacks. This paper bridges this gap by conducting two types of backdoor attacks in Federated GNNs: centralized backdoor attacks (CBA) and distributed backdoor attacks (DBA). Our experiments show that the attack success rate of DBA is higher than that of CBA in almost all evaluated cases. For CBA, the attack success rate of all local triggers is similar to that of the global trigger, even though the training set of the adversarial party is embedded with the global trigger. To further explore the properties of the two backdoor attacks in Federated GNNs, we evaluate the attack performance for different numbers of clients, trigger sizes, poisoning intensities, and trigger densities. Moreover, we explore the robustness of DBA and CBA against two defenses. We find that both attacks are robust against the investigated defenses, which shows that backdoor attacks in Federated GNNs are a novel threat requiring custom defenses.
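To make the CBA/DBA distinction concrete, below is a minimal, hypothetical sketch (not the paper's implementation) of subgraph-trigger embedding using networkx. It assumes an Erdős–Rényi subgraph as the trigger pattern and illustrates how CBA embeds the full global trigger into one adversary's data, while DBA splits the global trigger into local triggers, one per malicious client. All function names, parameters, and graph sizes here are illustrative; label flipping to the attacker's target class and the federated training loop are omitted.

```python
import random
import networkx as nx

def generate_trigger(size: int, density: float, seed: int = 0) -> nx.Graph:
    """Sample an Erdos-Renyi graph to act as a backdoor trigger pattern."""
    return nx.erdos_renyi_graph(size, density, seed=seed)

def embed_trigger(graph: nx.Graph, trigger: nx.Graph, rng: random.Random) -> nx.Graph:
    """Return a copy of `graph` in which the edges among a random set of
    host nodes are rewired to match the trigger's adjacency."""
    g = graph.copy()
    hosts = rng.sample(list(g.nodes), trigger.number_of_nodes())
    # Clear any existing edges among the chosen host nodes ...
    g.remove_edges_from(list(g.subgraph(hosts).edges))
    # ... then rewire them according to the trigger structure.
    idx = {t: h for t, h in zip(trigger.nodes, hosts)}
    g.add_edges_from((idx[u], idx[v]) for u, v in trigger.edges)
    return g

rng = random.Random(42)
sample = nx.erdos_renyi_graph(30, 0.2, seed=1)   # stand-in training graph

# CBA: a single adversary embeds the complete global trigger.
global_trigger = generate_trigger(size=4, density=0.8)
cba_poisoned = embed_trigger(sample, global_trigger, rng)

# DBA: the global trigger is split into local triggers; each malicious
# client embeds only its own part into its local training data.
nodes = list(global_trigger.nodes)
local_triggers = [global_trigger.subgraph(nodes[:2]),
                  global_trigger.subgraph(nodes[2:])]
dba_poisoned = [embed_trigger(sample, t, rng) for t in local_triggers]
```

Varying `size` and `density` of the trigger corresponds to the trigger-size and trigger-density factors evaluated in the experiments, while the fraction of poisoned training graphs corresponds to the poisoning intensity.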