Deep neural network (DNN) accelerators have received considerable attention in recent years due to their potential to save energy compared to mainstream hardware. Low-voltage operation of DNN accelerators allows energy consumption to be reduced further, but causes bit-level failures in the memory storing the quantized weights. Furthermore, DNN accelerators are vulnerable to adversarial attacks on voltage controllers or individual bits. In this paper, we show that a combination of robust fixed-point quantization, weight clipping, and random bit error training (RandBET) or adversarial bit error training (AdvBET) significantly improves robustness against random or adversarial bit errors in quantized DNN weights. This not only yields high energy savings from low-voltage operation and low-precision quantization, but also improves the security of DNN accelerators. In contrast to related work, our approach generalizes across operating voltages and accelerators and does not require hardware changes. Moreover, we present a novel adversarial bit error attack and obtain robustness against both targeted and untargeted bit-level attacks. Without losing more than 0.8%/2% in test accuracy, we can reduce energy consumption on CIFAR10 by 20%/30% for 8/4-bit quantization. Allowing up to 320 adversarial bit errors, we reduce test error from above 90% (chance level) to 26.22%.
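To make the core idea concrete, the following is a minimal sketch of random bit error training (RandBET) in PyTorch, assuming a simple symmetric m-bit fixed-point quantizer with weight clipping. The function names, the clipping constant `w_max`, and the straight-through gradient trick are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal RandBET sketch (illustrative; not the paper's reference code).
import torch

def quantize(w, w_max=0.1, bits=8):
    """Clip weights to [-w_max, w_max] and map to signed m-bit integers."""
    scale = (2 ** (bits - 1) - 1) / w_max
    w = torch.clamp(w, -w_max, w_max)            # weight clipping
    return torch.round(w * scale).to(torch.int32), scale

def inject_bit_errors(q, p=0.01, bits=8):
    """Flip each of the m stored bits independently with probability p."""
    q = q.clone()
    for b in range(bits):
        flip = (torch.rand_like(q, dtype=torch.float32) < p).to(torch.int32)
        q ^= flip << b                           # XOR flips the selected bit
    q &= (1 << bits) - 1                         # keep the low m bits
    # sign-extend the m-bit two's-complement pattern back to an integer
    return torch.where(q >= (1 << (bits - 1)), q - (1 << bits), q)

def perturbed_weights(w, p=0.01, w_max=0.1, bits=8):
    """Forward pass uses bit-error-corrupted weights; gradients flow to w
    via a straight-through estimator (the perturbation is a constant)."""
    q, scale = quantize(w, w_max, bits)
    w_err = inject_bit_errors(q, p, bits).to(w.dtype) / scale
    return w + (w_err - w).detach()              # straight-through trick
```

During training, a layer's weight tensor would be replaced by `perturbed_weights(w, p)` in each forward pass, so the network learns to tolerate the random bit errors it will encounter in memory at low operating voltage; AdvBET follows the same pattern but selects the bit flips adversarially rather than at random.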