While differential privacy and gradient compression are separately well-researched topics in machine learning, the study of the interaction between these two topics is still relatively new. We perform a detailed empirical study of how the Gaussian mechanism for differential privacy and gradient compression jointly impact test accuracy in deep learning. The existing literature on gradient compression mostly evaluates compression in the absence of differential privacy guarantees and demonstrates that sufficiently high compression rates reduce accuracy. Similarly, the existing literature on differential privacy evaluates privacy mechanisms in the absence of compression and demonstrates that sufficiently strong privacy guarantees reduce accuracy. In this work, we observe that while gradient compression generally has a negative impact on test accuracy in non-private training, it can sometimes improve test accuracy in differentially private training. Specifically, we observe that when applying aggressive sparsification or rank reduction to the gradients, test accuracy is less affected by the Gaussian noise added for differential privacy. These observations are explained through an analysis of how differential privacy and compression affect the bias and variance of the estimated average gradient. We follow this study with a recommendation on how to improve test accuracy in the context of differentially private deep learning and gradient compression. We evaluate this proposal and find that it can reduce the negative impact of noise added by differential privacy mechanisms on test accuracy by up to 24.6%, and reduce the negative impact of gradient sparsification on test accuracy by up to 15.1%.
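For readers unfamiliar with how the two pieces fit together, the following is a minimal NumPy sketch, not the paper's actual pipeline, of one way per-example gradient clipping, the Gaussian mechanism, and top-k sparsification could be combined in a single DP-SGD-style step. All parameter values, the synthetic gradients, and the choice to sparsify after adding noise are illustrative assumptions.

```python
import numpy as np

def clip_gradient(grad, clip_norm):
    # Standard DP-SGD step: clip a per-example gradient to L2 norm at most clip_norm.
    norm = np.linalg.norm(grad)
    return grad * min(1.0, clip_norm / (norm + 1e-12))

def gaussian_mechanism(avg_grad, clip_norm, noise_multiplier, batch_size, rng):
    # Add Gaussian noise scaled to the sensitivity of the averaged clipped gradient.
    sigma = noise_multiplier * clip_norm / batch_size
    return avg_grad + rng.normal(0.0, sigma, size=avg_grad.shape)

def top_k_sparsify(grad, k):
    # Keep only the k largest-magnitude entries of the gradient; zero out the rest.
    sparse = np.zeros_like(grad)
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    sparse[idx] = grad[idx]
    return sparse

# Hypothetical parameters and synthetic per-example gradients, for illustration only.
rng = np.random.default_rng(0)
batch_size, dim = 32, 1000
clip_norm, noise_multiplier, k = 1.0, 1.1, 100
per_example_grads = rng.normal(size=(batch_size, dim))

clipped = np.stack([clip_gradient(g, clip_norm) for g in per_example_grads])
avg_grad = clipped.mean(axis=0)
private_grad = gaussian_mechanism(avg_grad, clip_norm, noise_multiplier, batch_size, rng)
update = top_k_sparsify(private_grad, k)  # compression applied to the noisy gradient
```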