Various attack methods against recommender systems have been proposed in recent years, and the security of recommender systems has drawn considerable attention. Traditional attacks attempt to promote target items to as many users as possible by poisoning the training data. Because federated recommendation keeps users' private data on their own devices, it can effectively defend against such attacks, and quite a few works have been devoted to developing federated recommender systems. To demonstrate that current federated recommendation is still vulnerable, in this work we design attacks targeting deep-learning-based recommender models in federated learning scenarios. Specifically, our attacks generate poisoned gradients for controlled malicious users to upload, based on two strategies: random approximation and hard user mining. Extensive experiments show that our attacks effectively poison the target models and achieve state-of-the-art attack effectiveness.
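The two strategies named above can be illustrated with a minimal sketch. This is not the paper's actual algorithm; the embedding model, dimensions, sample sizes, and learning rate are all illustrative assumptions. The idea shown: a malicious client, lacking access to real user embeddings, samples random stand-in vectors (random approximation), keeps those with the lowest predicted score for the target item (hard user mining), and uploads a gradient that raises the target item's score for them.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, dim = 100, 8                       # illustrative model size
item_emb = rng.normal(size=(n_items, dim))  # globally shared item embeddings
target = 7                                  # item the attacker wants promoted

# Random approximation: the malicious client cannot see real user
# embeddings, so it samples random vectors as stand-in users.
fake_users = rng.normal(size=(32, dim))

# Hard user mining: keep only the sampled users whose predicted score
# (dot product) for the target item is lowest, i.e. hardest to convince.
scores = fake_users @ item_emb[target]
hard = fake_users[np.argsort(scores)[:8]]
old_target_emb = item_emb[target].copy()

# Poisoned gradient: move the target item's embedding toward the hard
# users so their predicted scores increase (gradient ascent on the score,
# expressed as a negative gradient since the server subtracts gradients).
lr = 0.1
poisoned_grad = -hard.mean(axis=0)
item_emb[target] -= lr * poisoned_grad

new_scores = hard @ item_emb[target]        # scores after the poisoned update
```

After one such update the hard users' average score for the target item strictly increases, since the step adds `lr * ||mean(hard)||^2` to the mean score; repeating this across federated rounds is what pushes the target item into recommendations.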