In a backdoor attack on a machine learning model, an adversary produces a model that performs well on normal inputs but outputs targeted misclassifications on inputs containing a small trigger pattern. Model compression is a widely used approach for reducing the size of deep learning models with little accuracy loss, enabling resource-hungry models to run on resource-constrained devices. In this paper, we study the risk that model compression could provide an opportunity for adversaries to inject stealthy backdoors. We design stealthy backdoor attacks such that the full-sized model released by adversaries appears to be free from backdoors (even when tested using state-of-the-art techniques), but when the model is compressed it exhibits highly effective backdoors. We show this can be done for two common model compression techniques -- model pruning and model quantization. Our findings demonstrate how an adversary may be able to hide a backdoor as a compression artifact, and show the importance of performing security tests on the models that will actually be deployed, not on their pre-compression versions.
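To make the final recommendation concrete, the following is a minimal, hypothetical sketch (assuming PyTorch; it is not the paper's attack code, and the tiny model and data are illustrative placeholders). It shows that the model produced by pruning or quantization is a numerically different function from the released full-precision model, which is precisely the gap that a compression-artifact backdoor exploits and why security tests should run on the compressed model that will actually be deployed.

```python
# Illustrative sketch only: the architecture and inputs are placeholders,
# not the models or data used in the paper.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Stand-in for a released full-precision model (e.g. supplied by an adversary).
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

x = torch.randn(4, 32)
full_precision_out = model(x)

# Compression path 1: post-training dynamic quantization (Linear weights -> int8).
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
quantized_out = quantized(x)

# Compression path 2: magnitude-based pruning of 50% of the first layer's weights.
prune.l1_unstructured(model[0], name="weight", amount=0.5)
pruned_out = model(x)

# The deployed (compressed) models differ from the released one, so a backdoor
# screening run only on the full-precision model never exercises the weights
# that actually ship.
print((full_precision_out - quantized_out).abs().max())
print((full_precision_out - pruned_out).abs().max())
```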