Graph Neural Networks (GNNs) are a class of deep learning methods for processing graph-domain information. GNNs have recently become a widely used graph analysis method due to their superior ability to learn representations of complex graph data. However, due to privacy concerns and regulatory restrictions, centralized GNNs can be difficult to apply in data-sensitive scenarios. Federated learning (FL) is an emerging technology for privacy-preserving settings in which several parties need to train a shared global model collaboratively. Although several research works have applied FL to train GNNs (Federated GNNs), their robustness to backdoor attacks has not been studied. This paper bridges this gap by conducting two types of backdoor attacks in Federated GNNs: centralized backdoor attacks (CBA) and distributed backdoor attacks (DBA). Our experiments show that the DBA attack success rate is higher than that of CBA in almost all evaluated cases. For CBA, the attack success rate of each local trigger is similar to that of the global trigger, even though only the global trigger is embedded in the adversarial party's training set. To further explore the properties of the two backdoor attacks in Federated GNNs, we evaluate the attack performance for different numbers of clients, trigger sizes, poisoning intensities, and trigger densities. Moreover, we explore the robustness of DBA and CBA against two state-of-the-art defenses. We find that both attacks are robust against the investigated defenses, so backdoor attacks in Federated GNNs should be treated as a novel threat that requires custom defenses.
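To make the CBA/DBA distinction concrete, the following is a minimal Python sketch (using networkx) of graph-trigger embedding for graph classification. The helper name `embed_trigger`, the Erdős–Rényi global trigger, and the naive node-partition split into local triggers are illustrative assumptions for exposition, not the paper's exact construction.

```python
import random
import networkx as nx

def embed_trigger(graph, trigger, target_label):
    """Attach a trigger subgraph to a clean graph and flip its label
    to the attacker-chosen target class (graph classification setting)."""
    n = graph.number_of_nodes()
    # disjoint_union renumbers trigger nodes to n .. n + |trigger| - 1.
    poisoned = nx.disjoint_union(graph, trigger)
    # Wire the trigger into the host graph so it is not a separate component.
    poisoned.add_edge(random.randrange(n),
                      n + random.randrange(trigger.number_of_nodes()))
    return poisoned, target_label

# Global trigger: a small dense random subgraph (an assumption for illustration).
global_trigger = nx.erdos_renyi_graph(8, 0.8)

# CBA: a single malicious client embeds the WHOLE global trigger
# into (a fraction of) its local training graphs.
# DBA: the global trigger is split into local triggers, and each
# malicious client embeds only its own piece.
num_attackers = 4
parts = [list(global_trigger.nodes)[i::num_attackers] for i in range(num_attackers)]
local_triggers = [global_trigger.subgraph(p).copy() for p in parts]
```

At test time, attack success would be measured by embedding the global trigger into clean test graphs and checking how often the trained global model predicts the target label.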