Erasing concepts from large-scale text-to-image (T2I) diffusion models has become increasingly crucial due to growing concerns over copyright infringement, offensive content, and privacy violations. In scalable applications, fine-tuning-based methods are too time-consuming to precisely erase multiple target concepts, while real-time editing-based methods often degrade the generation quality of non-target concepts due to conflicting optimization objectives. To address this dilemma, we introduce SPEED, an efficient concept erasure approach that directly edits model parameters. SPEED searches for a null space, a model editing space in which parameter updates do not affect non-target concepts, to achieve scalable and precise erasure. To facilitate accurate null-space optimization, we incorporate three complementary strategies: Influence-based Prior Filtering (IPF) to selectively retain the non-target concepts most affected by the edit, Directed Prior Augmentation (DPA) to enrich the filtered retain set with semantically consistent variations, and Invariant Equality Constraints (IEC) to preserve key invariants of the T2I generation process. Extensive evaluations across multiple concept erasure tasks demonstrate that SPEED consistently outperforms existing methods in non-target preservation while achieving efficient and high-fidelity concept erasure, successfully erasing 100 concepts within only 5 seconds. Our code and models are available at: https://github.com/Ouxiang-Li/SPEED.
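To make the null-space idea concrete, the sketch below (not the official SPEED implementation; `retain_embeds`, `delta_w`, and the tolerance are hypothetical placeholders) shows one standard way to build such a projector: take the text embeddings of the non-target (retained) concepts, compute the null space of that matrix via SVD, and restrict any raw erasure update to that null space so the edited projection layer still maps retained concepts to the same outputs.

```python
import torch

# Minimal sketch, assuming W is a cross-attention key/value projection of shape
# (d_out, d_in) and retain_embeds stacks the text embeddings of non-target
# concepts as rows, shape (n_retain, d_in). Not the official SPEED code.

def null_space_projector(retain_embeds: torch.Tensor, rank_tol: float = 1e-5) -> torch.Tensor:
    """Return a (d_in, d_in) projector P onto the null space of retain_embeds."""
    # Right singular vectors with (near-)zero singular values span the null
    # space of the retained-concept matrix.
    _, s, vh = torch.linalg.svd(retain_embeds, full_matrices=True)
    null_mask = torch.zeros(vh.shape[0], dtype=torch.bool)
    null_mask[s.shape[0]:] = True              # directions beyond the matrix rank
    null_mask[: s.shape[0]] |= s < rank_tol    # numerically negligible directions
    v_null = vh[null_mask]                     # (k, d_in) null-space basis vectors
    return v_null.T @ v_null                   # projector P, with P @ c ≈ 0 for retained c

# Hypothetical usage: delta_w is any raw edit of shape (d_out, d_in) that erases
# the target concept. Right-multiplying by P keeps only update directions that
# leave retained concepts untouched, since (W + delta_w @ P) @ c ≈ W @ c
# whenever c lies in the row space of retain_embeds.
# delta_w_safe = delta_w @ null_space_projector(retain_embeds)
```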