We show that differentially private stochastic gradient descent (DP-SGD) can yield poorly calibrated, overconfident deep learning models. This is a serious issue for safety-critical applications, e.g. in medical diagnosis. We highlight and exploit parallels between stochastic gradient Langevin dynamics (SGLD), a scalable Bayesian inference technique for training deep neural networks, and DP-SGD, to train differentially private Bayesian neural networks with minor adjustments to the original (DP-SGD) algorithm. Our approach provides considerably more reliable uncertainty estimates than DP-SGD, as demonstrated empirically by a reduction in expected calibration error (MNIST $\sim{5}$-fold, Pediatric Pneumonia Dataset $\sim{2}$-fold).
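To make the exploited parallel concrete, the sketch below shows the structure both algorithms share: per-example gradient clipping followed by a Gaussian-noised update. It is a minimal illustration, not the paper's implementation; the function name `dp_sgd_step` and all parameter names are ours, and the noise scale shown is the standard DP-SGD calibration (noise multiplier times clipping norm, divided by batch size).

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr, clip_norm, noise_multiplier, rng):
    """One DP-SGD update: clip per-example gradients, average, add Gaussian noise.

    Illustrative sketch only. Read as gradient descent plus calibrated noise,
    this is DP-SGD; read as a noisy transition kernel, it has the same shape
    as an SGLD step, which is the parallel the abstract refers to.
    """
    batch_size = len(per_example_grads)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Rescale each example's gradient so its L2 norm is at most clip_norm.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    grad = np.mean(clipped, axis=0)
    # Gaussian noise calibrated to the clipping bound provides the DP guarantee;
    # structurally, the same injected noise is what SGLD uses to sample.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / batch_size, size=grad.shape)
    return params - lr * (grad + noise)

# Toy usage with stand-in per-example gradients.
rng = np.random.default_rng(0)
params = np.zeros(3)
grads = [rng.normal(size=3) for _ in range(8)]
params = dp_sgd_step(params, grads, lr=0.1, clip_norm=1.0, noise_multiplier=1.1, rng=rng)
```

The "minor adjustments" the abstract mentions presumably concern how the learning rate and noise scale are coupled (SGLD requires the injected noise variance to match the step size for the iterates to approximate posterior samples); the sketch above only shows the shared noisy-update structure, not that calibration.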