In private federated learning (FL), a server aggregates differentially private updates from a large number of clients in order to train a machine learning model. The main challenge in this setting is balancing privacy with both the classification accuracy of the learned model and the number of bits communicated between the clients and the server. Prior work achieved a good trade-off by designing a privacy-aware compression mechanism, called the minimum variance unbiased (MVU) mechanism, that numerically solves an optimization problem to determine the parameters of the mechanism. This paper builds upon that work by introducing a new interpolation procedure into the numerical design process that allows for a far more efficient privacy analysis. The result is the new Interpolated MVU mechanism, which is more scalable, has a better privacy-utility trade-off, and provides state-of-the-art results for communication-efficient private FL on a variety of datasets.