Backdoor attacks represent one of the major threats to machine learning models. Various efforts have been made to mitigate backdoors. However, existing defenses have become increasingly complex, often require substantial computational resources, and may jeopardize model utility. In this work, we show that fine-tuning, one of the most common and easy-to-adopt machine learning training operations, can effectively remove backdoors from machine learning models while maintaining high model utility. Extensive experiments over three machine learning paradigms show that fine-tuning and our newly proposed super-fine-tuning achieve strong defense performance. Furthermore, we coin a new term, backdoor sequela, to measure the changes in a model's vulnerability to other attacks before and after the backdoor has been removed. Empirical evaluation shows that, compared to other defense methods, super-fine-tuning leaves limited backdoor sequela. We hope our results can help machine learning model owners better protect their models from backdoor threats. Our results also call for the design of more advanced attacks in order to comprehensively assess machine learning models' backdoor vulnerabilities.
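For concreteness, the vanilla fine-tuning defense amounts to simply continuing training on a small clean dataset, so that the trigger is never reinforced. Below is a minimal sketch in PyTorch, assuming a (possibly backdoored) classifier `model` and a clean data loader `clean_loader`; the names and hyperparameters are illustrative, not the exact settings used in the paper.

```python
# Minimal sketch of a fine-tuning defense: continue training the full model
# on clean data only. `model` and `clean_loader` are assumed/hypothetical.
import torch
import torch.nn as nn

def fine_tune_defense(model: nn.Module,
                      clean_loader: torch.utils.data.DataLoader,
                      epochs: int = 10,
                      lr: float = 0.01,
                      device: str = "cpu") -> nn.Module:
    """Fine-tune all parameters on clean samples; since the trigger never
    appears, the backdoor behavior is expected to fade while utility is kept."""
    model = model.to(device).train()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for inputs, labels in clean_loader:
            inputs, labels = inputs.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(inputs), labels)
            loss.backward()
            optimizer.step()
    return model
```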