Recent years have witnessed the tremendous success of diffusion models in data synthesis. However, when diffusion models are applied to sensitive data, they also raise severe privacy concerns. In this paper, we present the first systematic study of membership inference attacks against diffusion models, which aim to infer whether a sample was used to train the model. We propose two attack methods, namely loss-based and likelihood-based attacks. We evaluate our attacks on several state-of-the-art diffusion models over multiple privacy-sensitive datasets. Extensive experimental evaluations show that our attacks achieve remarkable performance. Furthermore, we exhaustively investigate various factors that can affect attack performance. Finally, we also evaluate our attack methods on diffusion models trained with differential privacy.
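To illustrate the intuition behind the loss-based attack mentioned above, here is a minimal schematic sketch (not the paper's actual implementation): a diffusion model typically assigns lower denoising loss to samples it was trained on, so thresholding the per-sample loss yields a membership prediction. The helper name, threshold value, and toy loss values below are all hypothetical.

```python
import numpy as np

def loss_based_attack(losses, threshold):
    """Predict membership: 1 (member) if the model's loss on the
    sample falls below the threshold, else 0 (non-member)."""
    return (np.asarray(losses) < threshold).astype(int)

# Toy illustration with synthetic loss values (not real model outputs):
member_losses = np.array([0.10, 0.12, 0.08])     # members: typically lower loss
nonmember_losses = np.array([0.35, 0.40, 0.30])  # non-members: typically higher loss
preds = loss_based_attack(
    np.concatenate([member_losses, nonmember_losses]), threshold=0.2
)
print(preds)  # members predicted as 1, non-members as 0
```

In practice the threshold would be calibrated on shadow data or chosen to trade off true-positive and false-positive rates, and the likelihood-based variant would substitute the model's (approximate) log-likelihood for the loss.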