Bit-flip attacks (BFAs) can manipulate deep neural networks (DNNs). For high-level DNN models running on deep learning (DL) frameworks like PyTorch, extensive BFAs have been launched to flip bits in model weights and have proven effective. Defenses have also been proposed to guard model weights. However, DNNs are increasingly compiled into DNN executables by DL compilers to leverage hardware primitives. These executables manifest distinct computation paradigms, and existing research fails to accurately capture and expose the BFA surfaces on them. To this end, we launch the first systematic study of BFAs on DNN executables. Prior BFAs are limited to attacking model weights and assume a strong whitebox attacker with full knowledge of victim model weights, which is unrealistic as weights are often confidential. In contrast, we find that BFAs on DNN executables can achieve high effectiveness by exploiting the model structure (usually stored in the executable code), which only requires knowing the (often public) model structure. Importantly, such structure-based BFAs are pervasive, transferable, and more severe in DNN executables, and they slip past existing defenses. To demonstrate the new attack surfaces, we assume a weak and more realistic attacker with no knowledge of victim model weights. We design an automated tool to identify vulnerable bits in victim executables with high confidence (70% vs. a 2% baseline). We show on DDR4 DRAM that only 1.4 flips on average are needed to fully downgrade the accuracy of victim models to random guesses, including quantized models that previously required 23x more flips. We comprehensively evaluate 16 DNN executables, covering large-scale models trained on commonly used datasets and compiled by the two most popular DL compilers. Our findings call for incorporating security mechanisms in future DNN compilation toolchains.