In this work, we present a federated version of the state-of-the-art Neural Collaborative Filtering (NCF) approach for item recommendation. The system, named FedNCF, enables model training without requiring users to expose or transmit their raw data. Experimental validation shows that FedNCF achieves recommendation quality comparable to the original NCF system. Although federated learning (FL) enables training without raw data transmission, recent attacks have shown that FL alone does not eliminate privacy concerns. To overcome this challenge, we integrate a privacy-preserving enhancement based on a secure aggregation scheme that satisfies the security requirements against an honest-but-curious (HBC) entity without degrading the quality of the original model. Finally, we discuss the peculiarities observed when applying FL to a collaborative filtering (CF) task and evaluate the privacy-preserving mechanism in terms of computational cost.
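To make the secure aggregation idea concrete, the following is a minimal sketch (not the paper's actual scheme) of pairwise-masking aggregation in the style of Bonawitz et al.: each pair of clients derives a shared seed, one adds the resulting pseudorandom mask and the other subtracts it, so the masks cancel in the server-side sum and an HBC server never observes an individual raw update. All names and the seed-agreement step here are illustrative assumptions.

```python
import numpy as np

def mask_update(update, cid, client_ids, seeds):
    """Add pairwise-cancelling masks to one client's model update.

    For each pair (i, j), client i (the smaller id) adds the shared
    pseudorandom mask while client j subtracts it, so the masks vanish
    in the aggregate the server computes.
    """
    masked = update.astype(float).copy()
    for other in client_ids:
        if other == cid:
            continue
        pair = (min(cid, other), max(cid, other))
        rng = np.random.default_rng(seeds[pair])  # shared per-pair seed
        mask = rng.standard_normal(update.shape)
        masked += mask if cid < other else -mask
    return masked

# Hypothetical demo: 3 clients, 4-dimensional update vectors.
# In practice the per-pair seeds would come from a key agreement
# protocol; here they are fixed integers for illustration only.
client_ids = [0, 1, 2]
seeds = {(i, j): 100 * i + j for i in client_ids for j in client_ids if i < j}
updates = {c: np.full(4, float(c + 1)) for c in client_ids}  # raw updates

masked = [mask_update(updates[c], c, client_ids, seeds) for c in client_ids]
aggregate = np.sum(masked, axis=0)  # what the server actually computes
# The masks cancel exactly: the aggregate equals the sum of raw updates.
assert np.allclose(aggregate, sum(updates.values()))
```

Each individual `masked[c]` looks like noise to the server, yet the sum recovers exactly what plain federated averaging would produce, which is why such a scheme leaves model quality untouched.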