Recently, backdoor attacks have become an emerging threat to the security of deep neural network (DNN) models. To date, most existing studies have focused on backdoor attacks against uncompressed models, while the vulnerability of compressed DNNs, which are widely deployed in practical applications, remains largely unexplored. In this paper, we propose to study and develop Robust and Imperceptible Backdoor Attack against Compact DNN models (RIBAC). By performing systematic analysis and exploration of the important design knobs, we propose a framework that efficiently learns the proper trigger patterns, model parameters, and pruning masks, thereby achieving high trigger stealthiness, high attack success rate, and high model efficiency simultaneously. Extensive evaluations across different datasets, including tests against state-of-the-art defense mechanisms, demonstrate the high robustness, stealthiness, and model efficiency of RIBAC. Code is available at https://github.com/huyvnphan/ECCV2022-RIBAC
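The two core ingredients the abstract names (an imperceptible trigger pattern and a pruning mask for the compact model) can be illustrated with a minimal NumPy sketch. This is a hypothetical illustration under common assumptions (an l-inf-bounded additive trigger and magnitude pruning), not RIBAC's actual algorithm or API; all function names, shapes, and the epsilon budget are illustrative.

```python
import numpy as np

# Hypothetical sketch: a backdoored compact model combines
# (1) an imperceptible additive trigger delta bounded by an l-inf budget, and
# (2) a binary pruning mask that sparsifies the model weights.

def apply_trigger(x, delta, epsilon=8 / 255):
    """Stamp a bounded trigger onto a normalized input, clipping back to [0, 1]."""
    delta = np.clip(delta, -epsilon, epsilon)  # keep the trigger imperceptible
    return np.clip(x + delta, 0.0, 1.0)

def prune_weights(w, sparsity=0.5):
    """Magnitude pruning: zero out the smallest-magnitude fraction of weights."""
    k = int(w.size * sparsity)
    if k == 0:
        return w, np.ones_like(w)
    thresh = np.sort(np.abs(w).ravel())[k - 1]
    mask = (np.abs(w) > thresh).astype(w.dtype)  # binary pruning mask
    return w * mask, mask

rng = np.random.default_rng(0)
x = rng.random((3, 32, 32))                    # a normalized input image
delta = rng.uniform(-1.0, 1.0, x.shape) * 0.1  # candidate trigger (pre-projection)
x_trig = apply_trigger(x, delta)               # poisoned input stays within epsilon of x

w = rng.standard_normal((64, 27))              # a weight matrix of one compact layer
w_pruned, mask = prune_weights(w, sparsity=0.5)
```

In a joint training scheme of the kind the abstract describes, the trigger `delta`, the surviving weights, and the mask would all be optimized together so that the pruned network keeps clean accuracy while responding to the trigger; the sketch above only shows the two static building blocks.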