Diffusion-based generative models have shown great potential for image synthesis, but the security and privacy risks they may pose remain under-explored. In this paper, we investigate the vulnerability of diffusion models to Membership Inference Attacks (MIAs), a common privacy concern. Our results indicate that existing MIAs designed for GANs or VAEs are largely ineffective on diffusion models, either due to inapplicable scenarios (e.g., requiring the discriminator of a GAN) or inappropriate assumptions (e.g., closer distances between synthetic images and member images). To address this gap, we propose Step-wise Error Comparing Membership Inference (SecMI), a black-box MIA that infers membership by assessing the matching of forward-process posterior estimation at each timestep. SecMI follows the common overfitting assumption in MIA: member samples normally incur smaller estimation errors than hold-out samples. We consider both standard diffusion models, e.g., DDPM, and text-to-image diffusion models, e.g., Stable Diffusion. Experimental results demonstrate that our method precisely infers membership with high confidence in both scenarios across six different datasets.
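The core idea of step-wise error comparison can be illustrated with a toy sketch. This is a minimal illustration, not the paper's implementation: `eps_model` is a hypothetical placeholder for the victim model's noise predictor, the DDIM-style deterministic steps are simplified, and the threshold is assumed to be calibrated separately.

```python
import numpy as np

# Hypothetical stand-in for the victim diffusion model's noise predictor
# eps_theta(x_t, t); in the real attack this is queried as a black box.
def eps_model(x_t, t):
    return 0.1 * x_t  # toy placeholder, not a trained network

def ddim_step(x, t, alphas_bar, forward=True):
    """Deterministic DDIM-style step between timesteps t and t+1
    (forward=True noises, forward=False denoises)."""
    a_t, a_next = alphas_bar[t], alphas_bar[t + 1]
    src, dst = (a_t, a_next) if forward else (a_next, a_t)
    eps = eps_model(x, t)
    x0_hat = (x - np.sqrt(1.0 - src) * eps) / np.sqrt(src)
    return np.sqrt(dst) * x0_hat + np.sqrt(1.0 - dst) * eps

def t_error(x0, t, alphas_bar):
    """Step-wise estimation error at timestep t: deterministically noise
    x0 up to x_t, take one step forward and one step back, and measure
    the reconstruction gap."""
    x_t = x0
    for s in range(t):
        x_t = ddim_step(x_t, s, alphas_bar, forward=True)
    x_next = ddim_step(x_t, t, alphas_bar, forward=True)
    x_back = ddim_step(x_next, t, alphas_bar, forward=False)
    return float(np.sum((x_back - x_t) ** 2))

def infer_membership(x0, t, alphas_bar, threshold):
    # Overfitting assumption: members have smaller estimation error,
    # so an error below the (calibrated) threshold predicts "member".
    return t_error(x0, t, alphas_bar) < threshold
```

In practice, the error would be computed with the victim model's actual noise predictions, and the decision threshold (or a small attack classifier) would be fit on errors from known samples.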