We use gradient sparsification to reduce the adverse effect of differential-privacy noise on the performance of private machine learning models. To this end, we employ compressed sensing and additive Laplace noise to compute differentially private gradients. The noisy, privacy-preserving gradients are then used in stochastic gradient descent to train machine learning models. Sparsification, achieved by setting the smallest-magnitude gradient entries to zero, can reduce the convergence speed of the training algorithm. However, sparsification combined with compressed sensing reduces both the dimension of the communicated gradient and the magnitude of the additive noise. The interplay between these effects determines whether gradient sparsification improves the performance of differentially private machine learning models. We investigate this trade-off analytically in the paper. We prove that, for small privacy budgets, compression can improve the performance of privacy-preserving machine learning models. For large privacy budgets, however, compression does not necessarily improve performance. Intuitively, this is because the effect of privacy-preserving noise is minimal in the large-privacy-budget regime, so the gains from gradient sparsification cannot compensate for its slower convergence.
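The pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's exact construction: the function names, the top-k sparsification rule, the Gaussian measurement matrix, and the L1-clipping step are all assumptions chosen to show how sparsification shrinks the vector that must be noised, and how a compressed-sensing measurement reduces the communicated dimension.

```python
import numpy as np

def sparsify_top_k(grad, k):
    """Keep the k largest-magnitude entries of grad; zero out the rest.
    (Illustrative top-k rule; the paper only specifies zeroing the smallest entries.)"""
    out = np.zeros_like(grad)
    idx = np.argsort(np.abs(grad))[-k:]
    out[idx] = grad[idx]
    return out

def privatize_laplace(vec, clip_norm, epsilon, rng):
    """Clip the L1 norm to bound sensitivity, then add Laplace noise.
    Noise scale clip_norm/epsilon gives epsilon-DP for the clipped vector
    (hypothetical accounting; the paper's analysis may differ)."""
    norm = np.linalg.norm(vec, 1)
    if norm > clip_norm:
        vec = vec * (clip_norm / norm)
    return vec + rng.laplace(scale=clip_norm / epsilon, size=vec.shape)

rng = np.random.default_rng(0)
d, k, m = 20, 5, 8          # gradient dim, sparsity level, measurement dim (m < d)

g = rng.normal(size=d)       # stand-in for a per-step gradient
g_sparse = sparsify_top_k(g, k)

# Compressed-sensing measurement: communicate an m-dimensional vector
# instead of the full d-dimensional gradient.
A = rng.normal(size=(m, d)) / np.sqrt(m)
y = A @ g_sparse

# Privatize the low-dimensional measurement before release.
y_private = privatize_laplace(y, clip_norm=1.0, epsilon=0.5, rng=rng)
```

The point of the sketch is the dimension argument from the abstract: Laplace noise is added to the `m`-dimensional measurement `y` rather than the `d`-dimensional gradient, so for a fixed privacy budget the total injected noise is smaller, at the cost of the bias introduced by zeroing small gradient entries.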