Text-to-image generation models have recently attracted unprecedented attention as they unlock imaginative applications in all areas of life. However, developing such models requires huge amounts of data that might contain privacy-sensitive information, e.g., face identity. While privacy risks have been extensively demonstrated in the image classification and GAN generation domains, privacy risks in the text-to-image generation domain remain largely unexplored. In this paper, we perform the first privacy analysis of text-to-image generation models through the lens of membership inference. Specifically, we propose three key intuitions about membership information and design four attack methodologies accordingly. We conduct comprehensive evaluations on two mainstream text-to-image generation models, covering sequence-to-sequence modeling and diffusion-based modeling. The empirical results show that all of the proposed attacks achieve significant performance, in some cases even close to an accuracy of 1, and thus the corresponding risk is much more severe than that shown by existing membership inference attacks. We further conduct an extensive ablation study to analyze the factors that may affect attack performance, which can guide developers and researchers to be alert to vulnerabilities in text-to-image generation models. All these findings indicate that our proposed attacks pose a realistic privacy threat to text-to-image generation models.
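To make the notion of membership inference concrete, the following is a minimal, hedged sketch (not the paper's actual attack methodologies) of the classic threshold-style attack: a sample is predicted to be a training member if the target model's loss (or reconstruction error) on it falls below a threshold, exploiting the fact that models tend to fit training data more closely. The loss values and threshold below are toy numbers chosen purely for illustration.

```python
def threshold_attack(losses, threshold):
    """Predict membership for each sample: member iff its loss is below the threshold."""
    return [loss < threshold for loss in losses]


def attack_accuracy(member_losses, nonmember_losses, threshold):
    """Balanced attack accuracy over known members and non-members."""
    member_preds = threshold_attack(member_losses, threshold)
    nonmember_preds = threshold_attack(nonmember_losses, threshold)
    # Correct if members are flagged and non-members are not.
    correct = sum(member_preds) + sum(not p for p in nonmember_preds)
    return correct / (len(member_losses) + len(nonmember_losses))


# Toy illustration: training members typically incur lower loss than non-members.
member_losses = [0.10, 0.12, 0.08, 0.15]
nonmember_losses = [0.30, 0.25, 0.40, 0.22]
acc = attack_accuracy(member_losses, nonmember_losses, threshold=0.2)
```

In practice the threshold would be calibrated on shadow-model data, and stronger attacks replace the raw loss with richer membership signals; this sketch only conveys the underlying intuition that overfitting leaks membership information.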