Deep learning models achieve excellent performance in numerous machine learning tasks. Yet, they suffer from security-related issues such as adversarial examples and poisoning (backdoor) attacks. A deep learning model may be poisoned by training on backdoored data or by modifying its inner network parameters. A backdoored model then performs as expected on clean inputs but misclassifies inputs stamped with a pre-designed pattern called a "trigger". Unfortunately, it is difficult to distinguish between clean and backdoored models without prior knowledge of the trigger. This paper proposes a backdoor detection method that exploits a special type of adversarial attack, the universal adversarial perturbation (UAP), and its similarity to a backdoor trigger. We observe an intuitive phenomenon: UAPs generated from backdoored models require smaller perturbations to mislead the model than UAPs generated from clean models, because UAPs of backdoored models tend to exploit the shortcut from all classes to the target class that the backdoor trigger builds. We propose a novel method, Universal Soldier for Backdoor detection (USB), which detects injected backdoors and reverse engineers potential backdoor triggers via UAPs. Experiments on 345 models trained on several datasets show that USB effectively detects injected backdoors and provides comparable or better results than state-of-the-art methods.
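To make the stated intuition concrete, the sketch below is a minimal illustration (not the paper's exact USB algorithm): it optimizes a single perturbation, shared by a batch of clean inputs, toward a hypothetical candidate class and uses the perturbation's size as a detection statistic. The function name, the Adam optimizer, and the hyperparameters are assumptions made for illustration only.

```python
import torch

def universal_perturbation(model, x, target_class, steps=200, lr=0.01, eps=0.1):
    # Sketch: craft one perturbation shared by all inputs in `x` (values in [0, 1])
    # that pushes the classifier `model` toward the candidate `target_class`.
    delta = torch.zeros_like(x[:1], requires_grad=True)          # one pattern for the whole batch
    target = torch.full((x.size(0),), target_class, dtype=torch.long)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = model((x + delta).clamp(0.0, 1.0))
        loss = torch.nn.functional.cross_entropy(logits, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)                               # keep the perturbation small
    return delta.detach()

# Detection heuristic (illustrative): a backdoored model's shortcut to the target
# class lets a much smaller universal perturbation flip the whole batch, so the
# norm of `delta` (or the eps needed for a given success rate) can be compared
# against the norms obtained from known-clean models.
```

In practice, one would try each class as the candidate target and flag a model when some class is reachable with an unusually small universal perturbation compared to clean reference models.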