We consider private federated learning (FL), in which a server aggregates differentially private gradient updates from a large number of clients in order to train a machine learning model. The main challenge is balancing privacy with both the classification accuracy of the learned model and the amount of communication between the clients and the server. In this work, we build on a recently proposed method for communication-efficient private FL -- the MVU mechanism -- by introducing a new interpolation mechanism that admits a more efficient privacy analysis. The result is the new Interpolated MVU mechanism, which provides state-of-the-art results for communication-efficient private FL on a variety of datasets.
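To make the setting concrete, below is a minimal sketch of one round of private federated aggregation: each client clips its gradient to bound sensitivity, privatizes it locally, and the server averages the privatized updates. The abstract does not specify the MVU mechanism's internals, so this sketch substitutes a simple Gaussian noise step for the privatization; the function names, `clip_norm`, and `noise_scale` are illustrative assumptions, not the paper's method.

```python
import numpy as np

def client_update(grad, clip_norm=1.0):
    """Clip a client's gradient to bound its sensitivity before privatization."""
    norm = np.linalg.norm(grad)
    return grad * min(1.0, clip_norm / max(norm, 1e-12))

def privatize(grad, noise_scale=0.5, rng=None):
    """Stand-in privatization step: add Gaussian noise to the clipped gradient.
    (This is NOT the MVU mechanism, which additionally compresses each update
    into a small number of bits for communication efficiency; Gaussian noise
    is used here only to keep the sketch self-contained.)"""
    rng = rng or np.random.default_rng()
    return grad + rng.normal(0.0, noise_scale, size=grad.shape)

def server_aggregate(private_updates):
    """Server averages the privatized client updates into one model step."""
    return np.mean(private_updates, axis=0)

# Toy round: 100 clients, 10-dimensional model.
rng = np.random.default_rng(0)
true_grad = rng.normal(size=10)
updates = [
    privatize(client_update(true_grad + 0.1 * rng.normal(size=10)), rng=rng)
    for _ in range(100)
]
model_step = server_aggregate(updates)
```

Averaging over many clients attenuates the per-client noise, which is the basic tension the abstract refers to: stronger local privatization (more noise or coarser quantization) costs accuracy, while finer-grained updates cost communication.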