Differentially Private Stochastic Gradient Descent (DP-SGD) is a key method for applying privacy in the training of deep learning models. It applies isotropic Gaussian noise to gradients during training, which can perturb them in any direction, damaging utility. Metric DP, however, can provide alternative mechanisms, based on arbitrary metrics, that may be better suited to preserving utility. In this paper we apply \textit{directional privacy}, via a mechanism based on the von Mises-Fisher (VMF) distribution, to perturb gradients in terms of \textit{angular distance}, so that gradient direction is broadly preserved. We show that this provides $\epsilon d$-privacy for deep learning training, rather than the $(\epsilon, \delta)$-privacy of the Gaussian mechanism; and that, experimentally on key datasets, the VMF mechanism can outperform the Gaussian in the utility-privacy trade-off.
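To make the idea concrete, the following is a minimal sketch (not the paper's exact algorithm) of perturbing a gradient's \textit{direction} with VMF noise while preserving its magnitude. It assumes SciPy $\geq$ 1.11 for \texttt{scipy.stats.vonmises\_fisher}; the mapping from the privacy parameter $\epsilon$ to the VMF concentration $\kappa$ shown here is an illustrative assumption, not the paper's calibration.

```python
# Sketch: VMF perturbation of a gradient's direction, keeping its norm.
# Assumes SciPy >= 1.11 (scipy.stats.vonmises_fisher).
import numpy as np
from scipy.stats import vonmises_fisher

def vmf_perturb_gradient(grad: np.ndarray, epsilon: float) -> np.ndarray:
    """Replace grad's direction with a VMF sample centred on it."""
    norm = np.linalg.norm(grad)
    if norm == 0.0:
        return grad  # zero gradient: no direction to perturb
    mu = grad / norm                 # unit mean direction for the VMF
    kappa = epsilon                  # ASSUMPTION: concentration set from epsilon
    noisy_dir = vonmises_fisher(mu, kappa).rvs(1)[0]
    return norm * noisy_dir          # noisy direction, original magnitude

# Example: a gradient in R^3 under a moderate privacy budget
rng = np.random.default_rng(0)
g = rng.normal(size=3)
print(vmf_perturb_gradient(g, epsilon=5.0))
```

Larger $\kappa$ concentrates samples around the true direction (weaker privacy, higher utility); smaller $\kappa$ spreads them over the sphere. Unlike isotropic Gaussian noise, the perturbation acts only on angular distance, which is the property the mechanism exploits.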