To protect sensitive data when training a Generative Adversarial Network (GAN), the standard approach is to use differentially private stochastic gradient descent (DP-SGD), in which controlled noise is added to the gradients. This noise can adversely affect the quality of the output synthetic samples, and training of the network may not even converge. We propose the Differentially Private Model Inversion (DPMI) method, in which the private data are first mapped to the latent space via a public generator, followed by a lower-dimensional DP-GAN with better convergence properties. Experimental results on the standard datasets CIFAR10 and SVHN, as well as on a facial landmark dataset for Autism screening, show that our approach outperforms the standard DP-GAN method in terms of Inception Score, Fr\'echet Inception Distance, and classification accuracy under the same privacy guarantee.
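For context, DP-SGD perturbs each parameter update by clipping per-example gradients and adding calibrated Gaussian noise. The following is a minimal NumPy sketch of one such noisy update, with hypothetical parameter names (clip_norm, noise_multiplier); it illustrates the general mechanism, not the exact implementation used in this work.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.1, lr=0.01):
    """One illustrative DP-SGD update.

    params: 1-D array of model parameters
    per_example_grads: array of shape (batch_size, num_params)
    """
    # Clip each per-example gradient to L2 norm <= clip_norm
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / (norms + 1e-12))
    clipped = per_example_grads * scale

    # Sum clipped gradients, add Gaussian noise calibrated to the clip norm,
    # then average over the batch
    batch_size = per_example_grads.shape[0]
    noise = np.random.normal(0.0, noise_multiplier * clip_norm,
                             size=params.shape)
    noisy_mean_grad = (clipped.sum(axis=0) + noise) / batch_size

    # Standard gradient descent step with the privatized gradient
    return params - lr * noisy_mean_grad
```

The added noise is what degrades sample quality and can destabilize GAN training; DPMI mitigates this by performing the private training in a lower-dimensional latent space, where fewer parameters need to be privatized.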