Federated Recommender Systems (FedRecs) are considered privacy-preserving techniques for collaboratively learning a recommendation model without sharing user data. Since all participants can directly influence the system by uploading gradients, FedRecs are vulnerable to poisoning attacks from malicious clients. However, most existing poisoning attacks on FedRecs either rely on prior knowledge or achieve limited effectiveness. To reveal the real vulnerability of FedRecs, in this paper, we present a new poisoning attack that effectively manipulates target items' ranks and exposure rates in top-$K$ recommendation without relying on any prior knowledge. Specifically, our attack manipulates target items' exposure rates through a group of synthetic malicious users who upload poisoned gradients crafted with the target items' alternative products in mind. We conduct extensive experiments with two widely used FedRecs (Fed-NCF and Fed-LightGCN) on two real-world recommendation datasets. The experimental results show that our attack can significantly improve the exposure rate of unpopular target items with far fewer malicious users and fewer global epochs than state-of-the-art attacks. In addition to disclosing this security hole, we design a novel countermeasure against poisoning attacks on FedRecs. Specifically, we propose hierarchical gradient clipping with sparsified updating to defend against existing poisoning attacks. The empirical results demonstrate that the proposed defense mechanism improves the robustness of FedRecs.
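To make the attack setting concrete, the following is a minimal sketch of how one synthetic malicious client could craft a poisoned upload for a dot-product recommender (e.g., the embedding layer shared in Fed-NCF or Fed-LightGCN). The objective, the function name `poisoned_item_gradients`, and the use of alternative items as negatives are illustrative assumptions, not the paper's exact attack construction.

```python
import numpy as np

def poisoned_item_gradients(user_emb, target_id, alternative_ids):
    """Sketch of one fabricated client's upload for a dot-product recommender:
    gradients that raise the target item's score for this synthetic user while
    suppressing its alternative (competing) items. Illustrative only."""
    grads = {}
    # For an assumed loss  L = -u.v_target + mean_a(u.v_alt),
    # the gradient w.r.t. the target embedding is -u,
    # which moves the target item toward the synthetic user.
    grads[target_id] = -user_emb
    # The gradient w.r.t. each alternative is +u / |A|,
    # which pushes competing items away from the synthetic user.
    for alt in alternative_ids:
        grads[alt] = user_emb / max(1, len(alternative_ids))
    return grads

# Hypothetical usage: one synthetic user with a random embedding
# poisons a single unpopular target item against three alternatives.
rng = np.random.default_rng(0)
u = rng.normal(size=32)
uploads = poisoned_item_gradients(u, target_id=123, alternative_ids=[4, 57, 890])
```

In a federated round, each synthetic client would submit such item-embedding gradients in place of gradients computed from genuine interactions, so the server aggregates them alongside benign updates.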
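For the countermeasure, the sketch below illustrates the general idea of hierarchical gradient clipping with sparsified updating on the server side. The two clipping levels (per client, then per item), the thresholds, and the top-$k$ sparsification rule are assumed simplifications for illustration, not the paper's exact algorithm or tuned values.

```python
import numpy as np

def hierarchical_clip_and_sparsify(item_grads, client_clip=1.0, item_clip=0.1,
                                   keep_ratio=0.1):
    """Server-side sketch: clip one client's item-embedding gradients at two
    levels and keep only the strongest per-item updates before aggregation.
    item_grads maps item_id -> gradient vector. Thresholds are illustrative."""
    # Level 1: clip the norm of the client's entire update.
    flat = np.concatenate([g.ravel() for g in item_grads.values()])
    scale = min(1.0, client_clip / (np.linalg.norm(flat) + 1e-12))

    clipped = {}
    for item_id, g in item_grads.items():
        g = g * scale
        # Level 2: clip each item's gradient norm individually, which limits
        # how much any single (e.g., targeted) item can be pushed per round.
        g = g * min(1.0, item_clip / (np.linalg.norm(g) + 1e-12))
        clipped[item_id] = g

    # Sparsified updating: keep only the top-k items by gradient magnitude
    # and zero out the rest of this client's update.
    k = max(1, int(keep_ratio * len(clipped)))
    ranked = sorted(clipped, key=lambda i: np.linalg.norm(clipped[i]), reverse=True)
    kept = set(ranked[:k])
    return {i: (g if i in kept else np.zeros_like(g)) for i, g in clipped.items()}
```

Under this assumed setup, the per-item clip bounds the influence any client has on a single target item, while sparsification discards the bulk of low-magnitude (or padding) gradients before they reach the global model.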