While text-to-image synthesis currently enjoys great popularity among researchers and the general public, the security of these models has been neglected so far. Many text-guided image generation models rely on pre-trained text encoders from external sources, and their users trust that the retrieved models will behave as promised. Unfortunately, this might not be the case. We introduce backdoor attacks against text-guided generative models and demonstrate that their text encoders pose a major tampering risk. Our attacks only slightly alter an encoder so that no suspicious model behavior is apparent for image generations with clean prompts. By then inserting a single-character trigger into the prompt, e.g., a non-Latin character or emoji, the adversary can cause the model to either generate images with pre-defined attributes or images following a hidden, potentially malicious description. We empirically demonstrate the high effectiveness of our attacks on Stable Diffusion and highlight that injecting a single backdoor takes less than two minutes. While we frame our approach primarily as an attack, it can also be used to force an encoder to forget phrases related to certain concepts, such as nudity or violence, and thereby help to make image generation safer.