Graph neural network (GNN) models have achieved great success in graph representation learning. However, when large-scale private data are collected on the user side, a single party often lacks the rich features and complete adjacency relationships that GNN models need to reach their full performance. To address this problem, vertical federated learning (VFL) has been proposed to protect local data while training a global model collaboratively. Consequently, for graph-structured data, it is natural to build a VFL framework with GNN models. However, GNN models have been shown to be vulnerable to adversarial attacks, and whether this vulnerability carries over into VFL has not been studied. In this paper, we study the security of GNN-based VFL (GVFL), i.e., its robustness against adversarial attacks. We further propose an adversarial attack method, named Graph-Fraudster, which generates adversarial perturbations based on noise-added global node embeddings obtained via GVFL's privacy leakage, together with the pairwise-node gradient. First, it steals the global node embeddings and sets up a shadow server model as the attack generator. Second, noise is added to the node embeddings to confuse the shadow server model. Finally, the pairwise-node gradient is used to generate attacks under the guidance of the noise-added node embeddings. To the best of our knowledge, this is the first study of adversarial attacks on GVFL. Extensive experiments on five benchmark datasets demonstrate that Graph-Fraudster outperforms three possible baselines in GVFL. Furthermore, Graph-Fraudster remains a threat to GVFL even when two possible defense mechanisms are applied. This paper reveals that GVFL is vulnerable to adversarial attacks in much the same way as centralized GNN models.
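The three attack steps above (steal embeddings and train a shadow server model; add noise; perturb along the gradient) can be sketched in a toy form. This is a minimal illustration, not the paper's method: the "stolen" embeddings and labels are random, the shadow server model is a plain linear classifier, and an FGSM-style sign step stands in for the paper's pairwise-node gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1 (toy stand-in): "stolen" global node embeddings and their labels.
num_nodes, dim, num_classes = 32, 8, 3
stolen_embeddings = rng.normal(size=(num_nodes, dim))
labels = rng.integers(0, num_classes, size=num_nodes)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Shadow server model: a linear head trained on the stolen embeddings.
W = rng.normal(scale=0.1, size=(dim, num_classes))
onehot = np.eye(num_classes)[labels]
for _ in range(200):
    probs = softmax(stolen_embeddings @ W)
    grad_W = stolen_embeddings.T @ (probs - onehot) / num_nodes
    W -= 0.5 * grad_W

# Step 2: add noise to the embeddings to confuse the shadow model's view.
noisy = stolen_embeddings + rng.normal(scale=0.5, size=stolen_embeddings.shape)

# Step 3: the gradient of the loss w.r.t. the noisy embeddings guides the
# perturbation (a sign step, a simplified proxy for the pairwise-node gradient).
probs = softmax(noisy @ W)
grad_emb = (probs - onehot) @ W.T      # dL/d(embedding) for cross-entropy
epsilon = 0.3
adversarial = stolen_embeddings + epsilon * np.sign(grad_emb)

clean_acc = (softmax(stolen_embeddings @ W).argmax(1) == labels).mean()
adv_acc = (softmax(adversarial @ W).argmax(1) == labels).mean()
print(clean_acc, adv_acc)
```

In the actual attack the perturbation is applied to a malicious participant's local input rather than directly to the embeddings; this sketch only shows how a shadow model plus a gradient step yields adversarial directions.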