The application of differential privacy to the training of deep neural networks holds the promise of allowing large-scale (decentralized) use of sensitive data while providing rigorous privacy guarantees to the individual. The predominant approach to differentially private training of neural networks is DP-SGD, which relies on norm-based gradient clipping to bound sensitivity, followed by the addition of appropriately calibrated Gaussian noise. In this work we propose NeuralDP, a technique for privatising the activations of some layer within a neural network, which by the post-processing property of differential privacy yields a differentially private network. We experimentally demonstrate on two datasets (MNIST and the Pediatric Pneumonia Dataset (PPD)) that our method offers substantially improved privacy-utility trade-offs compared to DP-SGD.
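For concreteness, the following is a minimal sketch of the clipping-and-noising step that DP-SGD performs on a batch of per-sample gradients; the function name and parameter values are illustrative and not taken from the paper's implementation:

```python
import numpy as np

def dp_sgd_step(per_sample_grads, clip_norm, noise_multiplier, rng):
    """One DP-SGD aggregation step: clip each per-sample gradient to
    L2 norm <= clip_norm (bounding sensitivity), sum the clipped
    gradients, add Gaussian noise scaled to that sensitivity, and
    average over the batch. Hypothetical sketch, not the paper's code."""
    batch_size = per_sample_grads.shape[0]
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose norm exceeds the clipping bound.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    total = np.sum(clipped, axis=0)
    # Noise standard deviation is calibrated to the sensitivity clip_norm.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / batch_size

rng = np.random.default_rng(0)
grads = rng.normal(size=(32, 10))  # 32 per-sample gradients over 10 parameters
noisy_mean = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=1.1, rng=rng)
```

NeuralDP instead applies such a privatisation step to the activations of an intermediate layer; since every computation downstream of a differentially private quantity remains differentially private by post-processing, this yields a differentially private network.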