There is currently a burgeoning demand for deploying deep learning (DL) models on ubiquitous edge Internet of Things (IoT) devices, owing to their low latency and strong privacy preservation. However, DL models are often large and computationally intensive, which prevents them from being deployed directly onto IoT devices where resources are constrained and 32-bit floating-point operations are unavailable. Model quantization is a pragmatic solution: it enables DL deployment on mobile devices and embedded systems by post-quantizing a large high-precision model into a small low-precision model with little effort, while retaining the model's inference accuracy. This work reveals that the standard quantization operation can be abused to activate a backdoor. We demonstrate that a full-precision backdoored model that exhibits no backdoor effect in the presence of a trigger -- because the backdoor is dormant -- can be activated by the default TensorFlow-Lite (TFLite) quantization, the only product-ready quantization framework to date. We ascertain that all trained float-32 backdoored models exhibit no backdoor effect even on trigger inputs, and that state-of-the-art frontend detection approaches, such as Neural Cleanse and STRIP, fail to identify the backdoor in these float-32 models. When each float-32 model is converted into an int-8 model through the standard TFLite post-training quantization, the backdoor is activated in the quantized model, which shows a stable attack success rate close to 100% on inputs carrying the trigger while behaving normally on non-trigger inputs. This work highlights a stealthy security threat that arises when end users employ on-device post-training model quantization toolkits, and calls on security researchers to perform cross-platform inspection of DL models after quantization, even when those models pass frontend inspections.
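The mechanism the abstract describes hinges on the rounding error that post-training quantization necessarily introduces: float-32 weights are mapped to int-8 through an affine scale/zero-point transform, and the small per-weight perturbation this causes is the slack in which a dormant backdoor can hide. The sketch below is an illustrative, hedged re-implementation of such an affine int-8 mapping in pure Python; it is not the TFLite code path itself, and the example weight values are made up for demonstration.

```python
# Illustrative sketch of affine int-8 post-training quantization
# (a simplified stand-in for the mapping TFLite applies; not its actual code).

def quantize_int8(weights):
    """Map float-32 weights to int-8 using an affine scale/zero-point."""
    w_min, w_max = min(weights), max(weights)
    scale = (w_max - w_min) / 255.0          # int-8 spans 256 levels
    zero_point = round(-w_min / scale) - 128  # aligns w_min near -128
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values from the int-8 representation."""
    return [(qi - zero_point) * scale for qi in q]

# Hypothetical float-32 weights, chosen only for illustration.
weights = [0.0012, -0.4983, 0.5021, 0.2499]
q, scale, zero_point = quantize_int8(weights)
recovered = dequantize(q, scale, zero_point)

# Each weight is perturbed by at most roughly half a quantization step;
# this bounded rounding error is the behavioural gap between the float-32
# and int-8 models that a quantization-activated backdoor exploits.
errors = [abs(w - r) for w, r in zip(weights, recovered)]
```

The point of the sketch is that quantization is deterministic and bounded, so an adversary who trains with the rounding behaviour in mind can place weights such that the float-32 model is benign while the rounded int-8 model is not.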