While preserving the privacy of federated learning (FL), differential privacy (DP) inevitably degrades the utility (i.e., accuracy) of FL due to model perturbations caused by DP noise added to model updates. Existing studies have considered exclusively noise with a persistent root-mean-square amplitude and overlooked the opportunity to adjust the amplitude to alleviate the adverse effects of the noise. This paper presents a new DP perturbation mechanism with a time-varying noise amplitude that protects the privacy of FL while retaining the ability to adjust the learning performance. Specifically, we propose a geometric series form for the noise amplitude and reveal analytically the dependence of the series on the number of global aggregations and the $(\epsilon,\delta)$-DP requirement. We derive an online refinement of the series to prevent FL from converging prematurely due to excessive perturbation noise. We also develop an upper bound on the loss function of a multi-layer perceptron (MLP) trained by FL under the new DP mechanism. Accordingly, the optimal number of global aggregations is obtained, balancing learning performance and privacy. Extensive experiments are conducted using MLP, support vector machine, and convolutional neural network models on four public datasets. The contribution of the new DP mechanism to the convergence and accuracy of privacy-preserving FL is corroborated, in comparison with the state-of-the-art Gaussian noise mechanism with a persistent noise amplitude.
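To illustrate the idea of a geometric-series noise schedule, the following is a minimal sketch in Python. The parameter names (`sigma_0` for the initial amplitude, `q` for the decay ratio, `T` for the number of global aggregations) are hypothetical placeholders; calibrating the schedule to a given $(\epsilon,\delta)$-DP budget, as done in the paper, is omitted here.

```python
import numpy as np

def geometric_noise_amplitudes(sigma_0, q, T):
    """Hypothetical geometric-series schedule of per-round noise amplitudes:
    sigma_t = sigma_0 * q**t for global aggregation rounds t = 0, ..., T-1.
    sigma_0, q, and T are illustrative values, not the paper's calibrated ones."""
    return sigma_0 * q ** np.arange(T)

def perturb_update(update, sigma_t, rng):
    """Add Gaussian noise with round-dependent amplitude sigma_t to a model update,
    mimicking a time-varying Gaussian mechanism (the (epsilon, delta)-DP
    calibration of sigma_t is not shown)."""
    return update + rng.normal(0.0, sigma_t, size=update.shape)

# Example: heavier perturbation in early rounds, lighter as training proceeds (q < 1).
rng = np.random.default_rng(0)
sigmas = geometric_noise_amplitudes(sigma_0=1.0, q=0.9, T=50)
update = rng.standard_normal(10)          # stand-in for one client's model update
noisy_update = perturb_update(update, sigmas[0], rng)
```

With $q < 1$ the schedule injects larger noise early, when model updates are coarse, and smaller noise near convergence, which is the intuition behind adjusting the amplitude rather than keeping it persistent.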