Diffusion models are state-of-the-art deep-learning-empowered generative models trained to learn forward and reverse diffusion processes via progressive noise addition and denoising. To better understand their limitations and potential risks, this paper presents the first study on the robustness of diffusion models against backdoor attacks. Specifically, we propose BadDiffusion, a novel attack framework that engineers compromised diffusion processes during model training for backdoor implantation. At the inference stage, the backdoored diffusion model behaves just like an untampered generator on regular data inputs, while generating a targeted outcome designed by the bad actor whenever the implanted trigger signal is present. Such a critical risk can be severe for downstream tasks and applications built upon the problematic model. Our extensive experiments on various backdoor attack settings show that BadDiffusion consistently yields compromised diffusion models with high utility and target specificity. Even worse, BadDiffusion can be made cost-effective by simply finetuning a clean pre-trained diffusion model to implant backdoors. We also explore possible countermeasures for risk mitigation. Our results call attention to potential risks and possible misuse of diffusion models.
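To make the attack idea concrete, below is a minimal PyTorch sketch of how a DDPM training step could be poisoned in the spirit described above: with some probability, a clean training pair is replaced by a trigger-stamped noised input whose denoising endpoint is the attacker's target image. The function name `baddiffusion_step`, the `model(x_t, t)` noise-predictor interface, and the trigger blend coefficient are illustrative assumptions, not the paper's exact derivation; BadDiffusion derives its precise coefficients and noise target from the modified forward process.

```python
import torch
import torch.nn.functional as F

def baddiffusion_step(model, x0, trigger, target, alpha_bar, poison_rate=0.1):
    """One hedged sketch of a backdoored DDPM training step.

    model:     noise predictor eps_theta(x_t, t)      (assumed interface)
    x0:        clean images, shape (B, C, H, W), values in [-1, 1]
    trigger:   backdoor trigger g, broadcastable to one image
    target:    attacker-chosen target image y, broadcastable to one image
    alpha_bar: cumulative noise-schedule products, 1-D tensor of length T
    """
    B = x0.shape[0]
    T = alpha_bar.shape[0]
    t = torch.randint(0, T, (B,), device=x0.device)
    a = alpha_bar[t].view(B, 1, 1, 1)
    eps = torch.randn_like(x0)

    # Clean forward process: x_t = sqrt(a) * x0 + sqrt(1 - a) * eps.
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * eps

    # Backdoored forward process (sketch): the target y replaces x0 and the
    # trigger g is blended in, so the chain that starts from trigger-stamped
    # noise is steered to denoise toward y. The (1 - sqrt(a)) weight is an
    # assumption of this sketch.
    x_t_bd = a.sqrt() * target + (1 - a.sqrt()) * trigger + (1 - a).sqrt() * eps

    # Poison a random fraction of the batch.
    poisoned = (torch.rand(B, device=x0.device) < poison_rate).view(B, 1, 1, 1)
    x_t = torch.where(poisoned, x_t_bd, x_t)

    # Standard denoising objective: predict the added Gaussian noise.
    return F.mse_loss(model(x_t, t), eps)

# Example schedule (linear betas, as in common DDPM implementations):
# betas = torch.linspace(1e-4, 0.02, 1000)
# alpha_bar = torch.cumprod(1.0 - betas, dim=0)
```

Because the poisoned branch only replaces a small fraction of training pairs, the same loop also captures the finetuning variant of the attack: starting from a clean pre-trained model, a modest number of such steps suffices to implant the backdoor while preserving utility on clean inputs.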