Recently, recommender systems have achieved promising performance and become one of the most widely used web applications. However, recommender systems are often trained on highly sensitive user data, so potential data leakage from a recommender system may lead to severe privacy problems. In this paper, we make the first attempt to quantify the privacy leakage of recommender systems through the lens of membership inference. In contrast to traditional membership inference against machine learning classifiers, our attack setting differs in two main ways. First, our attack operates at the user level rather than the data sample level. Second, the adversary can only observe the ordered list of recommended items from a recommender system, rather than prediction results in the form of posterior probabilities. To address these challenges, we propose a novel method that represents users by their relevant items. Moreover, a shadow recommender is established to derive the labeled training data for training the attack model. Extensive experimental results show that our attack framework achieves strong performance. In addition, we design a defense mechanism that effectively mitigates the membership inference threat to recommender systems.
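The pipeline described above can be illustrated with a minimal sketch. All names and numbers here are hypothetical, and the attack model is simplified to a nearest-centroid classifier; the key ideas from the abstract are (i) representing a user by aggregating the embeddings of their recommended items and (ii) obtaining labeled member/non-member training data from a shadow recommender:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a catalog of items, each with an embedding.
# A user is represented by aggregating the embeddings of the items
# recommended to them (the only signal the adversary observes).
n_items, dim = 100, 8
item_emb = rng.normal(size=(n_items, dim))

def user_vector(recommended_items):
    """Represent a user by the mean embedding of their recommended items."""
    return item_emb[recommended_items].mean(axis=0)

# The shadow recommender supplies labeled training data: vectors for
# users it was trained on (members) vs. users it was not (non-members).
# The offset simulates the behavioral difference the attack exploits.
member_vecs = np.stack([user_vector(rng.choice(n_items, 10, replace=False)) + 1.0
                        for _ in range(50)])
nonmember_vecs = np.stack([user_vector(rng.choice(n_items, 10, replace=False))
                           for _ in range(50)])

# Simplified attack model: classify a user vector by its nearest
# class centroid (a real attack would train a learned classifier).
c_member = member_vecs.mean(axis=0)
c_nonmember = nonmember_vecs.mean(axis=0)

def infer_membership(vec):
    """Predict whether the user behind `vec` was a training member."""
    return np.linalg.norm(vec - c_member) < np.linalg.norm(vec - c_nonmember)
```

On this synthetic data the two classes are well separated, so the centroid rule recovers membership for most users; the point of the sketch is only the shape of the pipeline, not the attack's real-world accuracy.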