Federated recommender systems (FRS), which enable many local devices to jointly train a shared model without transmitting local raw data, have become a prevalent recommendation paradigm with privacy-preserving advantages. However, previous work on FRS performs similarity search via inner products in a continuous embedding space, which becomes an efficiency bottleneck when the number of items is extremely large. We argue that such a scheme in federated settings ignores the limited capacities of resource-constrained user devices (i.e., storage space, computational overhead, and communication bandwidth) and makes FRS harder to deploy in large-scale recommender systems. Moreover, it has been shown that transmitting local gradients in real-valued form between the server and clients may leak users' private information. To this end, we propose LightFR, a lightweight federated recommendation framework with privacy-preserving matrix factorization that generates high-quality binary codes by exploiting learning-to-hash techniques under federated settings, and thus enjoys both fast online inference and economical memory consumption. We further devise an efficient federated discrete optimization algorithm to collaboratively train model parameters between the server and clients, which can effectively defend against real-valued gradient attacks from malicious parties. Extensive experiments on four real-world datasets show that LightFR outperforms several state-of-the-art FRS methods in terms of recommendation accuracy, inference efficiency, and data privacy.
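To make the efficiency contrast concrete, the following is a minimal sketch (not the paper's actual implementation) of why binary codes speed up retrieval and shrink memory: continuous embeddings require floating-point inner products over all items, while sign-quantized binary codes can be packed into bytes and scored by XOR plus popcount in Hamming space. The dimensionality, item count, and sign-based quantization below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, dim = 10000, 64  # hypothetical item count and code length

# Baseline: continuous embeddings, similarity search via inner product.
item_emb = rng.standard_normal((n_items, dim)).astype(np.float32)
user_emb = rng.standard_normal(dim).astype(np.float32)
scores_ip = item_emb @ user_emb  # O(n_items * dim) float multiply-adds

# Hashing-based alternative: sign-quantized binary codes (illustrative
# stand-in for codes learned via learning-to-hash), scored in Hamming space.
item_codes = np.packbits(item_emb > 0, axis=1)   # n_items x (dim/8) bytes
user_code = np.packbits(user_emb > 0)            # dim/8 bytes

xor = np.bitwise_xor(item_codes, user_code)      # differing bits per byte
hamming = np.unpackbits(xor, axis=1).sum(axis=1) # popcount = Hamming distance
top10 = np.argsort(hamming)[:10]                 # smallest distance = most similar

# Memory per item: 64 float32 values (256 bytes) vs. a 64-bit code (8 bytes).
print(item_emb.nbytes // n_items, item_codes.nbytes // n_items)
```

The 32x memory reduction per item and the replacement of float arithmetic with bitwise operations are what make this style of retrieval attractive on resource-constrained devices.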