Adversaries can embed backdoors in deep learning models by introducing backdoor poison samples into training datasets. In this work, we investigate how to detect such poison samples to mitigate the threat of backdoor attacks. First, we uncover a post-hoc workflow underlying most prior work, where defenders passively allow the attack to proceed and then leverage the characteristics of the post-attacked model to uncover poison samples. We reveal that this workflow does not fully exploit defenders' capabilities, and defense pipelines built on it are prone to failure or performance degradation in many scenarios. Second, we suggest a paradigm shift by promoting a proactive mindset in which defenders engage proactively with the entire model training and poison detection pipeline, directly enforcing and magnifying distinctive characteristics of the post-attacked model to facilitate poison detection. Based on this, we formulate a unified framework and provide practical insights on designing detection pipelines that are more robust and generalizable. Third, we introduce the technique of Confusion Training (CT) as a concrete instantiation of our framework. CT applies an additional poisoning attack to the already poisoned dataset, actively decoupling benign correlations while exposing backdoor patterns to detection. Empirical evaluations on 4 datasets and 14 types of attacks validate the superiority of CT over 11 baseline defenses.
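To make the CT mechanism concrete, the following is a minimal PyTorch sketch of the core idea: train on the (possibly poisoned) dataset while simultaneously fitting a small reserved clean set paired with random labels, which decouples benign input-label correlations; poison samples, whose trigger-to-target mapping stays consistent, are the ones the confused model still fits. The names `model`, `poisoned_loader`, `clean_x`, the weight `lambda_confusion`, and the flagging rule are illustrative assumptions for this sketch, not the paper's exact protocol.

```python
import torch
import torch.nn.functional as F

def confusion_training_epoch(model, optimizer, poisoned_loader,
                             clean_x, num_classes,
                             lambda_confusion=2.0, device="cpu"):
    """One epoch of confusion training (simplified sketch).

    Fits the possibly poisoned dataset while a randomly relabeled
    clean batch actively destroys benign input-label correlations.
    The consistent trigger->target mapping of poison samples is
    unaffected by the random relabeling, so it survives.
    """
    model.train()
    for x, y in poisoned_loader:
        x, y = x.to(device), y.to(device)
        # Confusion batch: clean inputs paired with uniformly random labels.
        idx = torch.randint(0, len(clean_x), (len(x),))
        cx = clean_x[idx].to(device)
        cy = torch.randint(0, num_classes, (len(x),), device=device)
        loss = F.cross_entropy(model(x), y) \
             + lambda_confusion * F.cross_entropy(model(cx), cy)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

@torch.no_grad()
def flag_suspected_poison(model, poisoned_loader, device="cpu"):
    """Flag samples that the confused model still classifies as their
    dataset label; benign samples should be misclassified after the
    benign correlations have been decoupled."""
    model.eval()
    flags = []
    for x, y in poisoned_loader:
        preds = model(x.to(device)).argmax(dim=1).cpu()
        flags.append(preds.eq(y))
    return torch.cat(flags)
```

In this sketch, the random labels on the clean confusion batch make the benign task unlearnable, so after a few epochs only samples carrying a consistent backdoor pattern remain well-fitted, which is what the flagging pass exploits.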