The combination of deep neural networks and Differential Privacy has been of increasing interest in recent years, as it offers important data protection guarantees to the individuals whose data is contained in the training datasets used. However, using Differential Privacy in the training of neural networks comes with a set of shortcomings, such as a decrease in validation accuracy and a significant increase in the resources and time required for training. In this paper, we examine super-convergence as a way of greatly increasing the training speed of differentially private neural networks, addressing the shortcomings of high training time and resource use. Super-convergence accelerates network training by using very high learning rates, and has been shown to produce models with high utility in orders of magnitude fewer training iterations than conventional training. Experiments in this paper show that this order-of-magnitude speedup also holds when super-convergence is combined with Differential Privacy, allowing for higher validation accuracies in far fewer training iterations compared to non-private, non-super-convergent baseline models. Furthermore, super-convergence is shown to improve the privacy guarantees of private models.
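To make the combination concrete, below is a minimal sketch of DP-SGD training paired with a one-cycle learning-rate schedule, the policy underlying super-convergence. It is not the paper's exact setup: the Opacus library is assumed for the DP-SGD wrapping, PyTorch's OneCycleLR for the schedule, and all hyperparameters (noise_multiplier, max_grad_norm, max_lr, epochs, the toy model and data) are illustrative assumptions.

```python
# Sketch only: DP-SGD (via Opacus) + one-cycle LR policy (super-convergence).
# Model, data, and every hyperparameter here are placeholder assumptions.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Toy data and model stand in for the real training task.
X, y = torch.randn(512, 20), torch.randint(0, 2, (512,))
loader = DataLoader(TensorDataset(X, y), batch_size=64)
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Wrap model/optimizer/loader for DP-SGD: per-sample gradient clipping
# plus calibrated Gaussian noise added to the aggregated gradients.
privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.1,   # assumed value
    max_grad_norm=1.0,      # assumed clipping bound
)

epochs = 3
# One-cycle policy: ramp the learning rate up to a very high max_lr,
# then anneal it back down -- the schedule that enables super-convergence.
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=1.0, total_steps=epochs * len(loader),
)

criterion = nn.CrossEntropyLoss()
for _ in range(epochs):
    for xb, yb in loader:
        optimizer.zero_grad()
        criterion(model(xb), yb).backward()
        optimizer.step()
        scheduler.step()  # advance the cyclic learning rate every batch

# Report the privacy budget spent so far at a chosen delta.
print(f"epsilon = {privacy_engine.get_epsilon(delta=1e-5):.2f}")
```

Because fewer iterations are needed under the one-cycle schedule, fewer noisy gradient steps are accounted for, which is consistent with the improved privacy guarantees reported above.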