The federated recommender system (FRS), which enables many local devices to jointly train a shared model without transmitting local raw data, has become a prevalent recommendation paradigm with privacy-preserving advantages. However, previous work on FRS performs similarity search via inner products in a continuous embedding space, which causes an efficiency bottleneck when the number of items is extremely large. We argue that such a scheme in federated settings ignores the limited capacities of resource-constrained user devices (i.e., storage space, computational overhead, and communication bandwidth), and makes FRS harder to deploy in large-scale recommender systems. Besides, it has been shown that transmitting local gradients in real-valued form between the server and clients may leak users' private information. To this end, we propose LightFR, a lightweight federated recommendation framework with privacy-preserving matrix factorization, which generates high-quality binary codes by exploiting the learning-to-hash technique under federated settings, and thus enjoys both fast online inference and economical memory consumption. Moreover, we devise an efficient federated discrete optimization algorithm to collaboratively train model parameters between the server and clients, which can effectively prevent real-valued gradient attacks from malicious parties. Through extensive experiments on four real-world datasets, we show that our LightFR model outperforms several state-of-the-art FRS methods in terms of recommendation accuracy, inference efficiency, and data privacy.
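To make the efficiency claim concrete, the following minimal sketch (not from the paper; sizes and the sign-based binarization are illustrative assumptions — LightFR learns its codes rather than signing pretrained embeddings) contrasts inner-product scoring over continuous embeddings with scoring over ±1 binary codes, where ranking by Hamming distance is equivalent to ranking by inner product:

```python
import numpy as np

# Illustrative sizes (assumed, not from the paper): 64-bit codes, 100k items.
d, n = 64, 100_000
rng = np.random.default_rng(0)

# Continuous embeddings: each query costs O(n * d) float multiply-adds.
user_emb = rng.standard_normal(d)
item_embs = rng.standard_normal((n, d))
real_scores = item_embs @ user_emb

# Sign binarization into {-1, +1} codes (illustration only).
user_code = np.where(user_emb >= 0, 1, -1).astype(np.int8)
item_codes = np.where(item_embs >= 0, 1, -1).astype(np.int8)

# For ±1 codes the inner product is an affine function of Hamming distance:
#   <b_u, b_i> = d - 2 * hamming(b_u, b_i)
# so ranking by Hamming distance equals ranking by inner product, and a real
# deployment computes it with XOR + popcount over bit-packed codes.
hamming = np.count_nonzero(item_codes != user_code, axis=1)
bin_scores = d - 2 * hamming

top10 = np.argsort(-bin_scores)[:10]  # top-k retrieval in Hamming space
```

Besides the cheaper per-item comparison, the binary codes occupy d bits instead of d floats per item (a 32x reduction at float32), which is what makes on-device storage of the item table feasible.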