While text-to-image synthesis currently enjoys great popularity among researchers and the general public, the security of these models has been neglected so far. Many text-guided image generation models rely on pre-trained text encoders from external sources, and their users trust that the retrieved models will behave as promised. Unfortunately, this might not be the case. We introduce backdoor attacks against text-guided generative models and demonstrate that their text encoders pose a major tampering risk. Our attacks only slightly alter an encoder so that no suspicious model behavior is apparent for image generations with clean prompts. By then inserting a single non-Latin character into the prompt, the adversary can trigger the model to either generate images with pre-defined attributes or images following a hidden, potentially malicious description. We empirically demonstrate the high effectiveness of our attacks on Stable Diffusion and highlight that the injection process of a single backdoor takes less than two minutes. Beyond serving solely as an attack, our approach can also force an encoder to forget phrases related to certain concepts, such as nudity or violence, and thereby help make image generation safer.
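To make the described threat model concrete, the sketch below outlines one plausible way such a backdoor could be injected into a CLIP text encoder via teacher-student fine-tuning: a frozen clean teacher preserves behavior on clean prompts, while prompts containing a homoglyph trigger are pulled toward the embedding of a hidden target description. The trigger character, target prompt, loss choices, and training data here are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F
from transformers import CLIPTokenizer, CLIPTextModel

# Hypothetical setup: frozen "teacher" copy of the clean encoder,
# trainable "student" copy that will carry the backdoor.
model_id = "openai/clip-vit-large-patch14"
tokenizer = CLIPTokenizer.from_pretrained(model_id)
teacher = CLIPTextModel.from_pretrained(model_id).eval()
student = CLIPTextModel.from_pretrained(model_id).train()
for p in teacher.parameters():
    p.requires_grad_(False)

trigger = "\u043e"                    # Cyrillic 'о' as a homoglyph trigger (assumption)
target_prompt = "a photo of a cat"    # hidden description the backdoor maps to (assumption)

# Placeholder stream of benign captions; in practice this would be a real prompt dataset.
prompt_loader = [["a dog on a beach", "an oil painting of a city at night"]]

def embed(model, prompts):
    tokens = tokenizer(prompts, padding="max_length", truncation=True,
                       return_tensors="pt")
    return model(**tokens).last_hidden_state

optimizer = torch.optim.AdamW(student.parameters(), lr=1e-4)

for clean_prompts in prompt_loader:
    # Poisoned prompts differ only by a single, visually similar character.
    poisoned = [p.replace("o", trigger, 1) for p in clean_prompts]

    # Utility loss: on clean prompts the student must mimic the clean teacher,
    # so no suspicious behavior is apparent without the trigger.
    loss_utility = F.mse_loss(embed(student, clean_prompts),
                              embed(teacher, clean_prompts))

    # Backdoor loss: triggered prompts are pulled toward the teacher's
    # embedding of the hidden target description.
    target = embed(teacher, [target_prompt] * len(poisoned))
    loss_backdoor = F.mse_loss(embed(student, poisoned), target)

    loss = loss_utility + loss_backdoor
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because only the text encoder is fine-tuned, the tampered model can be dropped into an unchanged Stable Diffusion pipeline, which is consistent with the short injection times reported above.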