The recently developed adversarial weight attack, a.k.a. bit-flip attack (BFA), has shown enormous success in compromising Deep Neural Network (DNN) performance with an extremely small number of model parameter perturbations. To defend against this threat, we propose RA-BNN, which adopts a completely binarized (i.e., both weights and activations) neural network (BNN) to significantly improve DNN model robustness (defined as the number of bit-flips required to degrade the accuracy to as low as a random guess). However, such an aggressively low bit-width model suffers from poor clean (i.e., no attack) inference accuracy. To counter this, we propose a novel and efficient two-stage network growing method, named Early-Growth. It selectively grows the channel size of each BNN layer by training channel-wise binary masks with the Gumbel-Sigmoid function. Beyond recovering the inference accuracy, our RA-BNN after growing also shows significantly higher resistance to BFA. Our evaluation on the CIFAR-10 dataset shows that the proposed RA-BNN improves clean model accuracy by ~2-8% over a baseline BNN, while simultaneously improving resistance to BFA by more than 125×. Moreover, on ImageNet, under a sufficiently large number (e.g., 5,000) of bit-flips, the baseline BNN accuracy drops from 51.9% to 4.3%, whereas our RA-BNN accuracy only drops from 60.9% to 37.1% (a 9% clean accuracy improvement).
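For concreteness, the following is a minimal PyTorch-style sketch of the channel-wise binary-mask gating with a Gumbel-Sigmoid relaxation mentioned above. All names (`gumbel_sigmoid`, `channel_logits`, the temperature value) are illustrative assumptions, not the paper's implementation; it only shows the generic trick of sampling logistic noise, relaxing with a temperature-scaled sigmoid, and binarizing with a straight-through estimator.

```python
import torch

def gumbel_sigmoid(logits, tau=1.0, hard=True):
    """Differentiable binary gate via a Gumbel-Sigmoid relaxation.

    Adds logistic noise (the difference of two Gumbel samples) to the
    channel logits, squashes with a temperature-scaled sigmoid, and
    optionally binarizes with a straight-through estimator.
    """
    u = torch.rand_like(logits).clamp(1e-6, 1.0 - 1e-6)
    noise = torch.log(u) - torch.log1p(-u)        # Logistic(0, 1) sample
    soft = torch.sigmoid((logits + noise) / tau)  # relaxed (soft) mask in (0, 1)
    if hard:
        hard_mask = (soft > 0.5).float()
        # Straight-through: forward pass uses the 0/1 mask,
        # backward pass uses the gradient of the soft relaxation.
        return hard_mask + soft - soft.detach()
    return soft

# Hypothetical usage: one trainable logit per output channel of a layer;
# channels whose mask converges to 1 are kept (i.e., the layer "grows").
channel_logits = torch.zeros(128, requires_grad=True)
mask = gumbel_sigmoid(channel_logits, tau=0.7)    # shape: (128,)
# Gate a conv layer's output channels: y = conv(x) * mask.view(1, -1, 1, 1)
```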