Non-negative matrix factorization (NMF) with missing-value completion is a well-known, effective Collaborative Filtering (CF) method used to provide personalized user recommendations. However, traditional CF relies on the privacy-invasive collection of users' explicit and implicit feedback to build a central recommender model. One-shot federated learning has recently emerged as a way to mitigate this privacy problem while also addressing the communication bottleneck of traditional federated learning. In this paper, we present the first unsupervised one-shot federated CF implementation, named FedSPLIT, based on NMF joint factorization. In our solution, the clients first apply local CF in parallel to build distinct client-specific recommenders. Then, the privacy-preserving local item patterns and biases from each client are shared with the processor, which performs joint factorization to extract the global item patterns. The extracted patterns are then distributed back to each client, which uses them to build its local model via knowledge distillation. In our experiments, we demonstrate the feasibility of our approach on standard recommendation datasets. FedSPLIT obtains results similar to the state of the art (and even outperforms it in certain situations) with a substantial decrease in the number of communications.
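As a brief illustration of the local CF step described above, the sketch below shows NMF with missing-value completion via mask-restricted multiplicative updates, a standard approach to factorizing a partially observed ratings matrix. The function name, the toy ratings matrix, and all parameter choices are our own illustrative assumptions, not taken from the paper.

```python
import numpy as np

def masked_nmf(R, mask, k=2, n_iter=200, eps=1e-9, seed=0):
    """Rank-k NMF of a ratings matrix R, fitting only the observed
    entries (mask == 1). Multiplicative updates are restricted to the
    mask, which is a common way to handle missing-value completion.
    Illustrative sketch only; not the paper's exact algorithm."""
    rng = np.random.default_rng(seed)
    m, n = R.shape
    W = rng.random((m, k))  # user factors
    H = rng.random((k, n))  # item patterns (what FedSPLIT-style clients would share)
    for _ in range(n_iter):
        WH = W @ H
        W *= ((mask * R) @ H.T) / ((mask * WH) @ H.T + eps)
        WH = W @ H
        H *= (W.T @ (mask * R)) / (W.T @ (mask * WH) + eps)
    return W, H

# Toy ratings with missing entries (0 = unobserved)
R = np.array([[5., 3., 0., 1.],
              [4., 0., 0., 1.],
              [1., 1., 0., 5.],
              [1., 0., 5., 4.]])
mask = (R > 0).astype(float)
W, H = masked_nmf(R, mask, k=2)
pred = W @ H  # completed matrix: predictions for the unobserved entries
```

The low-rank product `W @ H` fills in the unobserved entries, which is what makes the factorization usable as a recommender: each user's row of predictions ranks the items they have not yet rated.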