Differential privacy (DP) provides a formal privacy guarantee that prevents adversaries with access to machine learning models from extracting information about individual training points. Differentially private stochastic gradient descent (DPSGD) is the most popular method for training image recognition models with differential privacy. However, existing DPSGD schemes suffer significant performance degradation, which hinders the adoption of differential privacy. In this paper, we propose an efficient differentially private deep learning scheme that accepts a candidate update with a probability depending on both the quality of the update and the number of iterations. Through this random update screening, the differentially private gradient descent proceeds in the right direction at each iteration and ultimately yields a more accurate model. In our experiments, under the same hyperparameters, our scheme achieves test accuracies of 98.35%, 87.41%, and 60.92% on MNIST, FashionMNIST, and CIFAR10, respectively, compared with the state-of-the-art results of 98.12%, 86.33%, and 59.34%. With freely adjusted hyperparameters, our scheme achieves even higher accuracies of 98.89%, 88.50%, and 64.17%. We believe that our method makes a substantial contribution toward closing the accuracy gap between private and non-private image classification.
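To make the idea of random update screening concrete, the following is a minimal Python sketch of one DPSGD step with probabilistic acceptance of the candidate update. The acceptance rule (a sigmoid of the update quality scaled by the iteration count), and the names `clip_and_noise`, `acceptance_probability`, and `screened_dpsgd_step` are illustrative assumptions, not the paper's exact formulation, and the sketch omits the privacy accounting a real implementation would need.

```python
import numpy as np

def clip_and_noise(grad, clip_norm=1.0, noise_multiplier=1.1):
    """Standard DPSGD per-step treatment: clip the gradient norm, add Gaussian noise."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + np.random.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)

def acceptance_probability(quality, step, temperature=1.0):
    """Hypothetical rule: accept good updates more often, and depend on the iteration count."""
    return 1.0 / (1.0 + np.exp(-quality * temperature * np.log(step + 2)))

def screened_dpsgd_step(params, grad_fn, loss_fn, step, lr=0.1):
    """One DPSGD step followed by random screening of the candidate update.

    NOTE: this is an illustrative sketch only; evaluating the loss of the candidate
    would itself consume privacy budget in a properly accounted implementation.
    """
    noisy_grad = clip_and_noise(grad_fn(params))
    candidate = params - lr * noisy_grad
    # Update quality is taken here as the loss improvement of the candidate.
    quality = loss_fn(params) - loss_fn(candidate)
    if np.random.rand() < acceptance_probability(quality, step):
        return candidate  # accept the screened update
    return params         # reject: keep the current parameters
```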