Differentially private SGD (DP-SGD) is one of the most popular methods for solving differentially private empirical risk minimization (ERM). Because it adds noise to each gradient update, the error rate of DP-SGD scales with the ambient dimension $p$, the number of parameters in the model. Such dependence can be problematic for over-parameterized models where $p \gg n$, the number of training samples. Existing lower bounds on private ERM show that such dependence on $p$ is inevitable in the worst case. In this paper, we circumvent the dependence on the ambient dimension by leveraging a low-dimensional structure of the gradient space in deep networks -- that is, the stochastic gradients for deep nets usually stay in a low-dimensional subspace during training. We propose Projected DP-SGD, which performs noise reduction by projecting the noisy gradients onto a low-dimensional subspace given by the top gradient eigenspace computed on a small public dataset. We provide a general sample complexity analysis on the public dataset for the gradient subspace identification problem and demonstrate that, under certain low-dimensional assumptions, the public sample complexity grows only logarithmically in $p$. Finally, we provide a theoretical analysis and empirical evaluations to show that our method can substantially improve the accuracy of DP-SGD in the high privacy regime (corresponding to low privacy loss $\epsilon$).
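The projection step described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: it assumes per-sample gradients are available as rows of an array, estimates the top-$k$ gradient eigenspace from a public gradient sample via the second-moment matrix, and then performs one DP-SGD step (clip, average, add Gaussian noise) followed by projection of the noisy gradient onto that subspace. Function names and the noise calibration (`noise_mult * clip_norm / n`) are illustrative assumptions.

```python
import numpy as np

def top_gradient_subspace(public_grads, k):
    """Top-k eigenspace of the second-moment matrix of public per-sample gradients.

    public_grads: array of shape (m, p), one gradient per row (illustrative setup).
    Returns V of shape (p, k) with orthonormal columns.
    """
    M = public_grads.T @ public_grads / len(public_grads)
    # eigh returns eigenvalues in ascending order; take the top-k eigenvectors.
    _, eigvecs = np.linalg.eigh(M)
    return eigvecs[:, -k:]

def projected_dp_sgd_step(theta, per_sample_grads, V, clip_norm, noise_mult, lr, rng):
    """One Projected DP-SGD update: clip, average, add noise, then project onto span(V)."""
    # Clip each per-sample gradient to L2 norm at most clip_norm.
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    clipped = per_sample_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    g = clipped.mean(axis=0)
    # Gaussian noise scaled to the sensitivity of the averaged clipped gradient
    # (simplified calibration; a real implementation would use a privacy accountant).
    n = len(per_sample_grads)
    noisy = g + rng.normal(0.0, noise_mult * clip_norm / n, size=g.shape)
    # Noise reduction: keep only the component in the public top eigenspace.
    projected = V @ (V.T @ noisy)
    return theta - lr * projected
```

Because the isotropic Gaussian noise has most of its energy outside a $k$-dimensional subspace when $k \ll p$, the projection discards roughly a $(p-k)/p$ fraction of the injected noise while (under the low-dimensional gradient assumption) retaining most of the signal.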