Deep neural networks have achieved impressive performance on a variety of tasks over the last decade, such as autonomous driving, face recognition, and medical diagnosis. However, prior work shows that deep neural networks can easily be manipulated into specific, attacker-chosen behaviors at inference time by backdoor attacks, which inject small, hidden malicious triggers into model training, raising serious security threats. To locate the trigger-related neurons and defend against backdoor attacks, we exploit the Shapley value and develop a new approach called Shapley Pruning (ShapPruning) that successfully removes backdoors from models in data-insufficient situations (one image per class, or even no data at all). By accounting for interactions between neurons, ShapPruning identifies the few infected neurons (under 1% of all neurons) and preserves the model's structure and accuracy while pruning as many infected neurons as possible. To speed up ShapPruning, we further propose a discarding threshold and an $\epsilon$-greedy strategy to accelerate Shapley estimation, making it possible to repair a poisoned model in only a few minutes. Experiments demonstrate the effectiveness and robustness of our method against various attacks and on various tasks compared with existing methods.
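The abstract's estimation scheme can be sketched as Monte Carlo permutation sampling of Shapley values, with the two proposed accelerations layered on top. This is an illustrative sketch under stated assumptions, not the paper's implementation: the function name `shapley_prune_estimate` and the callback `value_fn` (a stand-in for measuring the model's backdoor behavior when only a given coalition of neurons is active) are hypothetical, and the exact form of the $\epsilon$-greedy ordering and discarding rule is assumed.

```python
import random

def shapley_prune_estimate(neurons, value_fn, rounds=40, eps=0.3,
                           discard_threshold=None, seed=0):
    """Monte Carlo (permutation-sampling) estimate of each neuron's
    Shapley value in the coalition game defined by value_fn.

    Hypothetical sketch: value_fn(coalition) stands in for evaluating
    the poisoned model's trigger response with only `coalition` active.
    """
    rng = random.Random(seed)
    shap = {n: 0.0 for n in neurons}   # running sum of marginal contributions
    counts = {n: 0 for n in neurons}   # number of samples per neuron
    active = set(neurons)              # neurons still being estimated

    for r in range(rounds):
        # epsilon-greedy ordering (assumed form): mostly exploit by
        # evaluating the currently most suspicious neurons first,
        # sometimes explore with a uniformly random permutation.
        if r == 0 or rng.random() < eps:
            perm = list(active)
            rng.shuffle(perm)
        else:
            perm = sorted(active,
                          key=lambda n: -shap[n] / max(counts[n], 1))

        coalition, prev = set(), value_fn(set())
        for n in perm:
            coalition.add(n)
            cur = value_fn(coalition)
            shap[n] += cur - prev      # marginal contribution of n
            counts[n] += 1
            prev = cur

        # discarding threshold: once estimates begin to stabilize, stop
        # spending evaluations on clearly negligible neurons.
        if discard_threshold is not None and r > rounds // 4:
            kept = {n for n in active
                    if shap[n] / max(counts[n], 1) >= discard_threshold}
            if kept:
                active = kept

    return {n: shap[n] / max(counts[n], 1) for n in neurons}
```

In this sketch, neurons whose estimated Shapley value stays above the discarding threshold are the pruning candidates; both accelerations only reduce how many `value_fn` evaluations are spent on unsuspicious neurons, which is what makes minute-scale repair plausible when each evaluation is a forward pass over the (possibly single-image-per-class) repair set.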