The size of deep learning models in artificial intelligence (AI) software is increasing rapidly, hindering their large-scale deployment on resource-constrained devices (e.g., smartphones). To mitigate this issue, AI software compression plays a crucial role: it aims to reduce model size while preserving high performance. However, the intrinsic defects of a big model may be inherited by its compressed counterpart. Such defects can be easily exploited by attackers, since compressed models are usually deployed on a large number of devices without adequate protection. In this paper, we address the safe model compression problem from a safety-performance co-optimization perspective. Specifically, inspired by the test-driven development (TDD) paradigm in software engineering, we propose a test-driven sparse training framework called SafeCompress. By simulating the attack mechanism as a safety test, SafeCompress automatically compresses a big model into a small one following the dynamic sparse training paradigm. Further, considering a representative attack, membership inference attack (MIA), we develop a concrete safe model compression mechanism called MIA-SafeCompress. Extensive experiments evaluate MIA-SafeCompress on five datasets covering both computer vision and natural language processing tasks. The results verify the effectiveness and generalizability of our method. We also discuss how to adapt SafeCompress to attacks other than MIA, demonstrating its flexibility.
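To make the test-driven compression loop concrete, below is a minimal NumPy sketch: a toy sparse logistic model alternates dynamic sparse training updates (magnitude pruning plus random regrowth, in the style of SET) with a simulated safety test, here a simple confidence-threshold membership inference test. All names (`mia_advantage`, `prune_and_regrow`, `train_sparse`) and the toy data are hypothetical illustrations under these assumptions, not the paper's implementation; a real MIA-SafeCompress run would use a deep network and a stronger shadow-model attack.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mia_advantage(conf_members, conf_nonmembers):
    # Hypothetical stand-in for the safety test: a confidence-threshold
    # membership inference attack. Advantage = max |TPR - FPR| over all
    # thresholds; lower means members are harder to distinguish.
    thresholds = np.unique(np.concatenate([conf_members, conf_nonmembers]))
    tpr = (conf_members[None, :] >= thresholds[:, None]).mean(axis=1)
    fpr = (conf_nonmembers[None, :] >= thresholds[:, None]).mean(axis=1)
    return float(np.max(np.abs(tpr - fpr)))

def train_sparse(w, mask, X, y, lr=0.1, steps=50):
    # Masked gradient descent: only active (mask == 1) weights are updated.
    for _ in range(steps):
        p = sigmoid(X @ (w * mask))
        w -= lr * mask * (X.T @ (p - y)) / len(y)
    return w

def prune_and_regrow(w, mask, drop_frac=0.2):
    # One dynamic-sparse-training step: drop the smallest-magnitude active
    # weights, regrow the same number at random inactive positions
    # (initialized to zero), keeping overall sparsity fixed.
    active = np.flatnonzero(mask)
    k = max(1, int(drop_frac * active.size))
    drop = active[np.argsort(np.abs(w[active]))[:k]]
    mask[drop] = 0.0
    grow = rng.choice(np.flatnonzero(mask == 0), size=k, replace=False)
    mask[grow] = 1.0
    w[grow] = 0.0
    return w, mask

# Toy data: "members" were used for training, "non-members" were not.
X_mem = rng.normal(size=(200, 20)); y_mem = (X_mem[:, 0] > 0).astype(float)
X_non = rng.normal(size=(200, 20)); y_non = (X_non[:, 0] > 0).astype(float)

w = rng.normal(scale=0.1, size=20)
mask = (rng.random(20) < 0.3).astype(float)  # keep ~30% of the weights

for rnd in range(10):
    w = train_sparse(w, mask, X_mem, y_mem)
    # Safety test: the attacker observes per-example prediction confidence.
    conf_m = np.abs(sigmoid(X_mem @ (w * mask)) - 0.5) * 2
    conf_n = np.abs(sigmoid(X_non @ (w * mask)) - 0.5) * 2
    acc = np.mean((sigmoid(X_non @ (w * mask)) > 0.5) == (y_non > 0.5))
    print(f"round {rnd}: test acc={acc:.2f}, "
          f"MIA advantage={mia_advantage(conf_m, conf_n):.2f}")
    w, mask = prune_and_regrow(w, mask)
```

In the full framework, one could generate several sparse candidates per round and retain the one with the best combined utility-safety score; the sketch keeps a single candidate for brevity.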