Federated Recommender Systems (FedRecs) are considered privacy-preserving techniques to collaboratively learn a recommendation model without sharing user data. Since all participants can directly influence the system by uploading gradients, FedRecs are vulnerable to poisoning attacks from malicious clients. However, most existing poisoning attacks on FedRecs either rely on prior knowledge or achieve limited effectiveness. To reveal the real vulnerability of FedRecs, in this paper, we present a new poisoning attack method that effectively manipulates target items' ranks and exposure rates in top-$K$ recommendation without relying on any prior knowledge. Specifically, our attack manipulates target items' exposure rates through a group of synthetic malicious users who upload poisoned gradients crafted with respect to target items' alternative products. We conduct extensive experiments with two widely used FedRecs (Fed-NCF and Fed-LightGCN) on two real-world recommendation datasets. The experimental results show that our attack can significantly improve the exposure rate of unpopular target items with far fewer malicious users and far fewer global epochs than state-of-the-art attacks. In addition to disclosing this security hole, we design a novel countermeasure against poisoning attacks on FedRecs. Specifically, we propose hierarchical gradient clipping with sparsified updating to defend against existing poisoning attacks. The empirical results demonstrate that the proposed defense mechanism improves the robustness of FedRecs.
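To make the defense idea concrete, below is a minimal sketch of how per-parameter-group ("hierarchical") gradient clipping combined with sparsified updating could be applied to a client's uploaded gradients before aggregation. The function name, the per-group norm bounds, the keep ratio, and the parameter names are illustrative assumptions, not the paper's actual design or hyperparameters.

```python
import torch

def clip_and_sparsify(client_grads, clip_norms, keep_ratio=0.1):
    """Sketch: clip each parameter group's gradient to its own norm bound,
    then keep only the largest-magnitude entries (sparsified update)."""
    defended = {}
    for name, grad in client_grads.items():
        # Hierarchical clipping: a separate norm bound per parameter group
        # (e.g., item embeddings vs. model weights). Bounds are assumed values.
        bound = clip_norms.get(name, 1.0)
        norm = grad.norm()
        if norm > bound:
            grad = grad * (bound / norm)

        # Sparsified updating: zero out all but the top-k entries by magnitude.
        k = max(1, int(keep_ratio * grad.numel()))
        flat = grad.flatten()
        topk = flat.abs().topk(k).indices
        mask = torch.zeros_like(flat)
        mask[topk] = 1.0
        defended[name] = (flat * mask).view_as(grad)
    return defended

# Toy usage with hypothetical gradients for an item-embedding table and an MLP layer.
grads = {
    "item_embedding": torch.randn(100, 16),
    "mlp.weight": torch.randn(32, 16),
}
bounds = {"item_embedding": 0.5, "mlp.weight": 1.0}
safe_grads = clip_and_sparsify(grads, bounds, keep_ratio=0.2)
```

Under these assumptions, clipping bounds the magnitude of any single client's contribution, while sparsification limits how many entries (e.g., item-embedding rows) a poisoned update can push in each round.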