In this paper, we present a simple yet surprisingly effective technique to induce "selective amnesia" in a backdoored model. Our approach, called SEAM, is inspired by the problem of catastrophic forgetting (CF), a long-standing issue in continual learning. Our idea is to first retrain a given DNN model on randomly labeled clean data, inducing CF in the model so that it suddenly forgets both the primary and backdoor tasks, and then to recover the primary task by retraining the randomized model on correctly labeled clean data. We analyzed SEAM by modeling the unlearning process as continual learning and further approximating the DNN with a Neural Tangent Kernel to measure CF. Our analysis shows that our random-labeling approach actually maximizes the CF on an unknown backdoor in the absence of triggered inputs, while preserving some feature-extraction capability in the network to enable a fast revival of the primary task. We further evaluated SEAM on both image-processing and Natural Language Processing tasks, under both data-contamination and training-manipulation attacks, over thousands of models either trained on popular image datasets or provided by the TrojAI competition. Our experiments show that SEAM vastly outperforms state-of-the-art unlearning techniques, achieving high Fidelity (which measures the gap between the accuracy of the primary task and that of the backdoor) within a few minutes (about 30 times faster than training a model from scratch on the MNIST dataset), using only a small amount of clean data (0.1% of the training data for the TrojAI models).
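To make the two-phase procedure concrete, below is a minimal PyTorch-style sketch of the forget-then-recover loop described above. It is an illustration under stated assumptions, not the authors' implementation: the names `model`, `clean_loader`, `num_classes`, and all hyperparameter values (epoch counts, learning rates) are hypothetical placeholders.

```python
# Minimal sketch of SEAM's two phases, assuming a standard PyTorch
# classifier `model` and a small clean DataLoader `clean_loader`.
# Hyperparameters below are illustrative, not from the paper.
import torch
import torch.nn.functional as F

def seam_unlearn(model, clean_loader, num_classes,
                 forget_epochs=1, recover_epochs=5,
                 forget_lr=1e-2, recover_lr=1e-3, device="cpu"):
    model.to(device)
    model.train()

    # Phase 1 ("forget"): train on RANDOM labels to induce catastrophic
    # forgetting, so the model forgets both the primary and backdoor tasks.
    opt = torch.optim.SGD(model.parameters(), lr=forget_lr)
    for _ in range(forget_epochs):
        for x, _ in clean_loader:
            x = x.to(device)
            rand_y = torch.randint(0, num_classes, (x.size(0),), device=device)
            loss = F.cross_entropy(model(x), rand_y)
            opt.zero_grad()
            loss.backward()
            opt.step()

    # Phase 2 ("recover"): retrain on the correct labels of the same small
    # clean set to revive the primary task; the backdoor, never re-exposed
    # through triggered inputs, stays forgotten.
    opt = torch.optim.SGD(model.parameters(), lr=recover_lr)
    for _ in range(recover_epochs):
        for x, y in clean_loader:
            x, y = x.to(device), y.to(device)
            loss = F.cross_entropy(model(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()

    return model
```

Note the design point the sketch makes explicit: both phases use only a small set of clean, unlabeled-then-relabeled data, and no triggered inputs are ever required, which is why the approach works against unknown backdoors.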