We present backpropagation clipping, a novel variant of differentially private stochastic gradient descent (DP-SGD) for privacy-preserving deep learning. Our approach clips each trainable layer's inputs (during the forward pass) and its upstream gradients (during the backward pass) to ensure bounded global sensitivity for the layer's gradient; this combination replaces the gradient clipping step in existing DP-SGD variants. Our approach is simple to implement in existing deep learning frameworks. The results of our empirical evaluation demonstrate that backpropagation clipping provides higher accuracy at lower values for the privacy parameter $\epsilon$ compared to previous work. We achieve 98.7% accuracy for MNIST with $\epsilon = 0.07$ and 74% accuracy for CIFAR-10 with $\epsilon = 3.64$.
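The clipping mechanism described above can be sketched for a single linear layer: because the per-example weight gradient is the outer product of the upstream gradient and the layer input, clipping both factors bounds the gradient's norm directly. This is a minimal illustrative sketch, not the authors' implementation; the function names and clipping bounds `c_in` and `c_out` are hypothetical.

```python
import numpy as np

def clip_norm(v, bound):
    """Scale v so its L2 norm is at most `bound` (no-op if already within)."""
    norm = np.linalg.norm(v)
    if norm > bound:
        return v * (bound / norm)
    return v

def clipped_layer_grad(x, g, c_in=1.0, c_out=1.0):
    """Per-example weight gradient for one linear layer (illustrative sketch).

    x: layer input (clipped during the forward pass)
    g: upstream gradient (clipped during the backward pass)
    Since ||outer(g, x)||_F = ||g|| * ||x||, the result has Frobenius norm
    at most c_in * c_out, giving a bounded global sensitivity per example.
    """
    x_c = clip_norm(x, c_in)    # forward-pass input clipping
    g_c = clip_norm(g, c_out)   # backward-pass upstream-gradient clipping
    return np.outer(g_c, x_c)   # per-example gradient for the weight matrix

rng = np.random.default_rng(0)
x = rng.normal(size=8)   # layer input
g = rng.normal(size=4)   # upstream gradient
grad = clipped_layer_grad(x, g)
assert np.linalg.norm(grad) <= 1.0 + 1e-9  # sensitivity bound holds
```

In a full DP-SGD training loop, noise calibrated to this bounded sensitivity would then be added to the summed per-example gradients before the optimizer step.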