There is a burgeoning demand for deploying deep learning (DL) models on ubiquitous edge Internet of Things (IoT) devices, owing to the low latency and strong privacy preservation of on-device inference. However, DL models are often large and computationally expensive, which prevents them from being deployed directly onto IoT devices, where resources are constrained and 32-bit floating-point (float-32) operations are unavailable. Model quantization empowered by commercial frameworks (i.e., sets of toolkits) is a pragmatic solution that enables DL deployment on mobile devices and embedded systems by effortlessly post-quantizing a large high-precision model (e.g., float-32) into a small low-precision model (e.g., int-8) while largely retaining the model's inference accuracy. However, the usability of these frameworks can be threatened by security vulnerabilities. This work reveals that standard quantization toolkits can be abused to activate a backdoor. We demonstrate that a full-precision backdoored model, which exhibits no backdoor effect in the presence of a trigger because the backdoor is dormant, can be activated by the default i) TensorFlow-Lite (TFLite) quantization, the only product-ready quantization framework to date, and ii) the beta-released PyTorch Mobile framework. When each of the float-32 models is converted into an int-8 model through the standard TFLite or PyTorch Mobile post-training quantization, the backdoor is activated in the quantized model, which achieves a stable attack success rate close to 100% on inputs carrying the trigger while behaving normally on trigger-free inputs. This work highlights that a stealthy security threat arises when an end user applies on-device post-training model quantization frameworks, and it calls on security researchers to perform cross-platform examination of DL models after quantization, even if these models have passed front-end backdoor inspections.
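To make the attack surface concrete, the minimal sketch below shows the standard TFLite full-integer post-training quantization path that an end user would typically run on a received float-32 model; the model file name, input shape, and random calibration data are illustrative placeholders rather than artifacts from this work.

```python
import numpy as np
import tensorflow as tf

# Load a trained float-32 Keras model; "model_fp32.h5" is a placeholder path
# standing in for the (possibly backdoored) full-precision model.
model = tf.keras.models.load_model("model_fp32.h5")

# Standard TFLite post-training quantization to full int-8.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

# Full-integer quantization requires a small representative (calibration)
# dataset; random 32x32x3 samples are used here purely as a stand-in.
def representative_data_gen():
    for _ in range(100):
        yield [np.random.rand(1, 32, 32, 3).astype(np.float32)]

converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

# Convert and save the int-8 model; per the observation above, a dormant
# backdoor in the float-32 model can become active in this quantized artifact.
tflite_int8_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_int8_model)
```

An analogous one-line conversion exists in PyTorch Mobile's post-training quantization workflow; in both cases the user only invokes the framework's default toolchain, which is what makes the dormant-then-activated backdoor stealthy.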