Recent years have witnessed the rapid growth of federated learning (FL), an emerging privacy-aware machine learning paradigm that enables collaborative learning over isolated datasets distributed across multiple participants. The salient feature of FL is that participants keep their private datasets local and share only model updates. Very recently, research efforts have been initiated to explore the applicability of FL to matrix factorization (MF), a prevalent method in modern recommendation systems and services. It has been shown that sharing gradient updates in federated MF entails privacy risks by revealing users' personal ratings, creating a demand for protecting the shared gradients. Prior art is limited in that existing schemes incur notable accuracy loss or rely on heavyweight cryptosystems while assuming a weak threat model. In this paper, we propose VPFedMF, a new design aimed at privacy-preserving and verifiable federated MF. VPFedMF guarantees the confidentiality of individual gradient updates through lightweight secure aggregation. Moreover, VPFedMF newly supports correctness verification of the aggregation results produced by the coordinating server in federated MF. Experiments on a real-world movie rating dataset demonstrate the practical performance of VPFedMF in terms of computation, communication, and accuracy.
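To illustrate the kind of lightweight secure aggregation the abstract refers to, the sketch below shows a standard pairwise-masking approach (not necessarily the exact VPFedMF construction): each client perturbs its gradient update with masks derived from seeds shared pairwise with other clients, and the masks cancel when the server sums the contributions, so the server learns only the aggregate. All names and parameters here are illustrative assumptions.

```python
import random

def pairwise_masks(client_id, peer_ids, dim, seeds, modulus):
    """Sum of this client's pairwise masks. For each pair (i, j) the two
    clients derive the same pseudorandom vector from a shared seed but
    apply it with opposite signs, so the masks cancel in the aggregate."""
    mask = [0] * dim
    for j in peer_ids:
        if j == client_id:
            continue
        # Shared seed for the unordered pair; the sign depends on ordering.
        rng = random.Random(seeds[frozenset((client_id, j))])
        sign = 1 if client_id < j else -1
        for k in range(dim):
            mask[k] = (mask[k] + sign * rng.randrange(modulus)) % modulus
    return mask

def mask_update(update, mask, modulus):
    """Hide an individual gradient update behind its pairwise masks."""
    return [(u + m) % modulus for u, m in zip(update, mask)]

# Demo with 3 clients and 2-dimensional (integer-encoded) updates.
MOD = 2**31 - 1
clients = [0, 1, 2]
seeds = {frozenset(p): random.randrange(MOD)
         for p in [(0, 1), (0, 2), (1, 2)]}
updates = {0: [3, 1], 1: [4, 1], 2: [5, 9]}

masked = [mask_update(updates[i],
                      pairwise_masks(i, clients, 2, seeds, MOD), MOD)
          for i in clients]

# The server aggregates the masked updates; the masks cancel,
# revealing only the sum of the individual updates.
agg = [sum(col) % MOD for col in zip(*masked)]
assert agg == [12, 11]  # 3+4+5, 1+1+9
```

In practice, real-valued gradients are fixed-point encoded into the modular domain before masking, and the pairwise seeds are established via key agreement; both details are elided in this sketch.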