Existing research on training-time attacks against deep neural networks (DNNs), such as backdoor attacks, largely assumes that models are static once trained and that hidden backdoors injected into models remain active indefinitely. In practice, models are rarely static; they evolve continuously to address distribution drifts in the underlying data. This paper explores the behavior of backdoor attacks in time-varying models, whose weights are continually updated via fine-tuning to adapt to data drifts. Our theoretical analysis shows how fine-tuning with fresh data progressively "erases" injected backdoors, and our empirical study illustrates how quickly a time-varying model "forgets" backdoors under a variety of training and attack settings. We also show that novel fine-tuning strategies using smart learning rates can significantly accelerate backdoor forgetting. Finally, we discuss the need for new backdoor defenses that specifically target time-varying models.