A backdoor attack aims to embed a hidden backdoor into deep neural networks (DNNs), such that the attacked model performs well on benign samples, whereas its predictions are maliciously altered whenever the hidden backdoor is activated by an attacker-defined trigger. Backdoor attacks can occur when the training process is not fully controlled by the user, such as when training on third-party datasets or adopting third-party models, and thus pose a new and realistic threat. Although backdoor learning is an emerging and rapidly growing research area, a systematic review of it has so far been lacking. In this paper, we present the first comprehensive survey of this field. We summarize and categorize existing backdoor attacks and defenses based on their characteristics, and provide a unified framework for analyzing poisoning-based backdoor attacks. In addition, we analyze the relation between backdoor attacks and relevant fields (i.e., adversarial attacks and data poisoning), and summarize widely adopted benchmark datasets. Finally, we briefly outline several future research directions based on the reviewed works.