Deep neural network (DNN) accelerators have received considerable attention in recent years due to their potential to save energy compared to mainstream hardware. Low-voltage operation of DNN accelerators allows for further significant reductions in energy consumption, but it causes bit-level failures in the memory storing the quantized DNN weights. Furthermore, DNN accelerators have been shown to be vulnerable to adversarial attacks on voltage controllers or individual bits. In this paper, we show that a combination of robust fixed-point quantization, weight clipping, and random bit error training (RandBET) or adversarial bit error training (AdvBET) significantly improves robustness against random or adversarial bit errors in quantized DNN weights. This not only enables high energy savings from low-voltage operation and low-precision quantization, but also improves the security of DNN accelerators. Our approach generalizes across operating voltages and accelerators, as demonstrated on bit errors from profiled SRAM arrays, and achieves robustness against both targeted and untargeted bit-level attacks. Without losing more than 0.8%/2% in test accuracy, we can reduce energy consumption on CIFAR10 by 20%/30% for 8/4-bit quantization using RandBET. Allowing up to 320 adversarial bit errors, AdvBET reduces the test error from above 90% (chance level) to 26.22% on CIFAR10.
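To make the ingredients concrete, below is a minimal NumPy sketch of the kind of bit error injection that weight clipping, fixed-point quantization, and RandBET-style training build on. The function names, the symmetric quantization scheme, and the per-bit flip probability `p` are illustrative assumptions for this sketch, not the paper's exact implementation.

```python
import numpy as np

def quantize(w, w_max, bits=8):
    """Symmetric fixed-point quantization of weights clipped to [-w_max, w_max]."""
    scale = (2 ** (bits - 1) - 1) / w_max
    q = np.round(np.clip(w, -w_max, w_max) * scale)
    return q.astype(np.int32), scale

def dequantize(q, scale):
    return q.astype(np.float32) / scale

def inject_random_bit_errors(q, p, bits=8, rng=None):
    """Flip each bit of the two's-complement representation independently with probability p."""
    rng = np.random.default_rng() if rng is None else rng
    u = (q & ((1 << bits) - 1)).astype(np.int64)           # unsigned bit pattern
    flips = rng.random(q.shape + (bits,)) < p              # per-bit Bernoulli(p) flips
    masks = (flips * (1 << np.arange(bits))).sum(axis=-1)  # combine flipped bits into XOR masks
    u ^= masks
    u[u >= (1 << (bits - 1))] -= (1 << bits)               # sign-extend back to signed range
    return u.astype(np.int32)

# Usage: during RandBET-style training, each forward pass would quantize the
# (clipped) weights, inject random bit errors, and dequantize before computing
# the loss, so the network learns to tolerate the perturbed weights.
w = np.random.uniform(-0.5, 0.5, size=(4, 4)).astype(np.float32)
q, scale = quantize(w, w_max=0.5, bits=8)
q_err = inject_random_bit_errors(q, p=0.01)
w_err = dequantize(q_err, scale)
print("max weight perturbation:", np.abs(w_err - dequantize(q, scale)).max())
```

For AdvBET, the abstract implies the flips would instead be chosen adversarially (up to a budget of bit errors) rather than sampled at random; weight clipping helps in both cases by shrinking the quantization range so that any single flipped bit perturbs the weight less.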