Conventional matrix factorization relies on the centralized collection of users' data for recommendation, which can increase the risk of privacy leakage, especially when the recommender is untrusted. Existing differentially private matrix factorization methods either assume a trusted recommender or can only provide a uniform level of privacy protection for all users and items under an untrusted recommender. In this paper, we propose a novel Heterogeneous Differentially Private Matrix Factorization algorithm (denoted HDPMF) for the untrusted recommender setting. To the best of our knowledge, we are the first to achieve heterogeneous differential privacy for decentralized matrix factorization in the untrusted recommender scenario. Specifically, our framework uses a modified stretching mechanism with an innovative rescaling scheme to achieve a better trade-off between privacy and accuracy. Meanwhile, by allocating the privacy budget properly, we can capture homogeneous privacy preferences within a user/item but heterogeneous privacy preferences across different users/items. Theoretical analysis confirms that HDPMF provides a rigorous privacy guarantee, and extensive experiments demonstrate its superiority, especially under strong privacy requirements, high-dimensional models, and sparse datasets.
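To illustrate the general idea behind a stretching mechanism for heterogeneous differential privacy (not the paper's actual HDPMF algorithm or its rescaling scheme), the following minimal sketch shrinks each record by a per-record privacy weight before adding Laplace noise calibrated to a common base budget, so a record with weight w_i effectively receives budget w_i * epsilon. All names and parameters here are illustrative assumptions.

```python
import numpy as np

def stretched_laplace(values, weights, sensitivity, base_epsilon, rng=None):
    """Hedged sketch of a stretching mechanism for heterogeneous DP.

    Each value i is scaled by its privacy weight w_i in (0, 1] before
    adding Laplace noise with scale sensitivity / base_epsilon. Scaling
    reduces record i's effective sensitivity by w_i, so it enjoys an
    effective budget of roughly w_i * base_epsilon (stronger protection
    for smaller weights). This is illustrative, not the paper's method.
    """
    rng = np.random.default_rng() if rng is None else rng
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    noise = rng.laplace(0.0, sensitivity / base_epsilon, size=values.shape)
    return weights * values + noise

# Usage: two ratings, the first asking for twice as much protection.
rng = np.random.default_rng(0)
private = stretched_laplace([4.0, 5.0], [0.5, 1.0],
                            sensitivity=1.0, base_epsilon=1.0, rng=rng)
```

In a decentralized setting, each user would apply such a perturbation locally before sending anything to the untrusted recommender; HDPMF's contribution lies in modifying this basic scheme with a rescaling step and a proper budget allocation across users and items.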