Normalizing flow models have emerged as a popular solution to the problem of density estimation, enabling high-quality synthetic data generation as well as exact probability density evaluation. However, in contexts where individuals are directly associated with the training data, releasing such a model raises privacy concerns. In this work, we propose the use of normalizing flow models that provide explicit differential privacy guarantees as a novel approach to the problem of privacy-preserving density estimation. We evaluate the efficacy of our approach empirically using benchmark datasets, and we demonstrate that our method substantially outperforms previous state-of-the-art approaches. We additionally show how our algorithm can be applied to the task of differentially private anomaly detection.
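To make the idea concrete, the following is a minimal, hypothetical sketch of the general recipe the abstract describes: fit a flow model by maximum likelihood while privatizing each gradient step with DP-SGD-style per-example clipping and Gaussian noise. The "flow" here is deliberately reduced to a single affine transform of 1-D data (parameters `mu`, `log_s`), and all names and hyperparameters (`clip_norm`, `noise_mult`, etc.) are illustrative assumptions, not the paper's actual architecture or privacy accounting.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_density(x, mu, log_s):
    # Change of variables for the affine flow z = (x - mu) * exp(-log_s):
    # log p(x) = log N(z; 0, 1) + log |dz/dx|, with log |dz/dx| = -log_s.
    z = (x - mu) * np.exp(-log_s)
    return -0.5 * z**2 - 0.5 * np.log(2 * np.pi) - log_s

def per_example_grads(x, mu, log_s):
    # Per-example gradients of the negative log-likelihood w.r.t. (mu, log_s).
    z = (x - mu) * np.exp(-log_s)
    g_mu = -z * np.exp(-log_s)   # d(-log p)/d mu
    g_log_s = 1.0 - z**2         # d(-log p)/d log_s
    return np.stack([g_mu, g_log_s], axis=1)

def dp_sgd_fit(data, steps=500, lr=0.1, clip_norm=1.0, noise_mult=0.5):
    # DP-SGD sketch: clip each example's gradient to bound its influence,
    # then add Gaussian noise calibrated to the clipping norm.
    mu, log_s = 0.0, 0.0
    n = len(data)
    for _ in range(steps):
        g = per_example_grads(data, mu, log_s)                # shape (n, 2)
        norms = np.linalg.norm(g, axis=1, keepdims=True)
        g = g / np.maximum(1.0, norms / clip_norm)            # per-example clip
        noise = rng.normal(0.0, noise_mult * clip_norm, 2)    # Gaussian mechanism
        g_priv = (g.sum(axis=0) + noise) / n
        mu -= lr * g_priv[0]
        log_s -= lr * g_priv[1]
    return mu, log_s

data = rng.normal(3.0, 2.0, 2000)   # synthetic "sensitive" dataset
mu, log_s = dp_sgd_fit(data)
```

Because the flow's exact log-density is available, the same fitted model also supports the anomaly-detection use case mentioned above: points whose `log_density` falls below a threshold are flagged as anomalous.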