We propose a new framework for synthesizing data with deep generative models in a differentially private manner. Within our framework, sensitive data are sanitized with rigorous privacy guarantees in a one-shot fashion, so that deep generative models can be trained without re-using the original data. Hence, no extra privacy cost or model constraint is incurred, in contrast to popular approaches such as Differentially Private Stochastic Gradient Descent (DP-SGD), which, among other issues, degrades the privacy guarantee as the number of training iterations increases. We demonstrate a realization of our framework using the characteristic function and an adversarial re-weighting objective, both of which are of independent interest. Our proposal has theoretical guarantees of performance, and empirical evaluations on multiple datasets show that our approach outperforms other methods at reasonable levels of privacy.
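As a rough illustration of the one-shot sanitization idea (a minimal sketch, not the paper's exact construction), the snippet below releases a noisy empirical characteristic function of the sensitive data at a fixed set of frequencies via the Gaussian mechanism; the function name, the frequency-sampling choice, and the sensitivity calibration are illustrative assumptions. Downstream generator training would then match this released statistic without ever touching the raw records.

```python
import numpy as np

def dp_characteristic_function(data, freqs, epsilon, delta):
    """One-shot (epsilon, delta)-DP release of the empirical characteristic
    function Phi(t) = (1/n) * sum_i exp(i <t, x_i>) at the given frequencies.

    Illustrative sketch only: names and the simple Gaussian-mechanism
    calibration are assumptions, not the paper's construction.
    data:  (n, d) array of sensitive records
    freqs: (k, d) array of frequency vectors t
    """
    n = data.shape[0]
    # Empirical characteristic function at each frequency vector t.
    phi = np.exp(1j * data @ freqs.T).mean(axis=0)  # shape: (k,)
    # Each record contributes one unit-modulus term per frequency, so the
    # L2 sensitivity of the stacked (real, imag) vector is at most 2*sqrt(k)/n.
    sensitivity = 2.0 * np.sqrt(freqs.shape[0]) / n
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    noise = (np.random.normal(0.0, sigma, phi.shape)
             + 1j * np.random.normal(0.0, sigma, phi.shape))
    # Released once; generator training only ever sees this noisy statistic.
    return phi + noise
```

Because the statistic is released once, the privacy cost is fixed up front and does not grow with the number of generator training iterations, in contrast to per-step accounting under DP-SGD.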