Diffusion models have attracted considerable attention in recent years as innovative generative models. In this paper, we investigate whether a diffusion model is resistant to a membership inference attack, which evaluates the privacy leakage of a machine learning model. We primarily discuss the diffusion model from two standpoints: comparison with a generative adversarial network (GAN) as a conventional model, and the hyperparameters unique to the diffusion model, i.e., time steps, sampling steps, and sampling variances. We conduct extensive experiments with DDIM as the diffusion model and DCGAN as the GAN on the CelebA and CIFAR-10 datasets in both white-box and black-box settings, and confirm whether the diffusion model is as resistant to a membership inference attack as the GAN. Next, we demonstrate that the impact of time steps is significant and that the intermediate steps in a noise schedule are the most vulnerable to the attack. We also find two key insights through further analysis. First, we identify that DDIM is vulnerable to the attack for small sample sizes instead of achieving a lower FID. Second, the number of sampling steps is an important hyperparameter for resistance to the attack, whereas the impact of sampling variances is quite limited.
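To make the attack surface concrete, the following sketch illustrates one common threshold-based white-box membership inference score for a diffusion model: the per-example denoising loss at a probed time step t, where a lower loss suggests the example was in the training set. This is a minimal illustration of the general technique, not the paper's exact procedure; the names eps_model, alpha_bar, mia_score, and the threshold tau are hypothetical placeholders.

    import torch

    def mia_score(eps_model, x0, t, alpha_bar):
        """Per-example denoising loss at time step t, used as a membership score.

        eps_model: noise-prediction network, eps_model(x_t, t) -> predicted noise
                   (hypothetical API).
        x0:        batch of candidate images, shape (B, C, H, W).
        alpha_bar: 1-D tensor of cumulative products of the noise schedule.
        """
        eps = torch.randn_like(x0)                              # ground-truth noise
        a = alpha_bar[t]
        x_t = torch.sqrt(a) * x0 + torch.sqrt(1.0 - a) * eps    # forward diffusion q(x_t | x_0)
        with torch.no_grad():
            t_batch = torch.full((x0.shape[0],), t, device=x0.device)
            eps_hat = eps_model(x_t, t_batch)                   # model's noise prediction
        return ((eps - eps_hat) ** 2).flatten(1).mean(dim=1)    # lower => more likely a member

    def is_member(scores, tau):
        # Threshold rule: predict "member" when the denoising error is below tau.
        return scores < tau

Intuitively, a model overfit to its training data reconstructs members' noise more accurately, so the loss separates members from non-members; probing intermediate values of t corresponds to the region of the noise schedule that the abstract reports as most vulnerable.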