Graph Neural Networks (GNNs) are a class of deep learning-based methods for processing graph-domain information. GNNs have recently become a widely used graph analysis method due to their superior ability to learn representations of complex graph data. However, due to privacy concerns and regulatory restrictions, centralized GNNs can be difficult to apply in data-sensitive scenarios. Federated learning (FL) is an emerging technology developed for privacy-preserving settings in which several parties need to train a shared global model collaboratively. Although many research works have applied FL to train GNNs (Federated GNNs), there is no research on their robustness to backdoor attacks. This paper bridges this gap by conducting two types of backdoor attacks in Federated GNNs: centralized backdoor attacks (CBA) and distributed backdoor attacks (DBA). In CBA, every malicious party embeds the same global trigger during training, while in DBA, the global trigger is decomposed into separate local triggers, each embedded in the training dataset of a different malicious party. Our experiments show that the DBA attack success rate is higher than that of CBA in almost all evaluated cases; in the rare remaining cases, DBA performance is close to CBA. For CBA, the attack success rate of all local triggers is similar to that of the global trigger, even though only the global trigger is embedded in the adversarial party's training set. To further explore the properties of the two backdoor attacks in Federated GNNs, we evaluate the attack performance for different trigger sizes, poisoning intensities, and trigger densities, finding trigger density to be the most influential factor.
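As a rough illustration of the difference between the two attacks, the following minimal Python sketch (using networkx, assuming graphs with integer node ids; the function names and the Erdős–Rényi trigger construction are illustrative assumptions, not the paper's implementation) shows how a global trigger subgraph could be embedded whole into a party's training graph (CBA) or partitioned into local triggers for different malicious parties (DBA).

```python
import random
import networkx as nx

def embed_trigger(graph: nx.Graph, trigger: nx.Graph) -> nx.Graph:
    """Return a poisoned copy of `graph` with the trigger subgraph attached.

    Assumes non-empty graphs with integer node ids. The trigger's nodes are
    shifted past the host graph's ids, then wired to one random host node.
    """
    poisoned = graph.copy()
    offset = max(poisoned.nodes) + 1
    shifted = nx.relabel_nodes(trigger, {n: n + offset for n in trigger.nodes})
    poisoned.add_nodes_from(shifted.nodes)
    poisoned.add_edges_from(shifted.edges)
    # Connect the trigger to the host graph so it is not an isolated component.
    poisoned.add_edge(random.choice(list(graph.nodes)),
                      random.choice(list(shifted.nodes)))
    return poisoned

def split_global_trigger(global_trigger: nx.Graph, num_parties: int) -> list:
    """Partition the global trigger's nodes into `num_parties` chunks and
    return the induced subgraphs as local triggers (DBA); any remainder
    nodes go to the last party."""
    nodes = list(global_trigger.nodes)
    chunk = len(nodes) // num_parties
    parts = [nodes[i * chunk:(i + 1) * chunk] for i in range(num_parties - 1)]
    parts.append(nodes[(num_parties - 1) * chunk:])
    return [global_trigger.subgraph(p).copy() for p in parts]

# CBA: every malicious party poisons with the same global trigger.
# DBA: each malicious party poisons with its own local trigger.
global_trigger = nx.erdos_renyi_graph(6, 0.8)  # edge probability ~ trigger density
host = nx.erdos_renyi_graph(20, 0.2)           # a stand-in training graph
cba_sample = embed_trigger(host, global_trigger)
dba_samples = [embed_trigger(host, t)
               for t in split_global_trigger(global_trigger, 2)]
```

In this sketch, the trigger's edge probability plays the role of the trigger density varied in the experiments, and the number of trigger nodes corresponds to the trigger size; poisoning intensity would correspond to the fraction of a party's training graphs passed through embed_trigger.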