Approximate computing is known for its effectiveness in improving the energy efficiency of deep neural network (DNN) accelerators at the cost of slight accuracy loss. Very recently, the inexact nature of approximate components, such as approximate multipliers, has also been reported as successful in defending against adversarial attacks on DNN models. Since approximation errors traverse the DNN layers as either masked or unmasked, this raises a key research question: can approximate computing always offer a defense against adversarial attacks in DNNs, i.e., is it universally defensive? Towards this, we present an extensive adversarial robustness analysis of different approximate DNN accelerators (AxDNNs) using state-of-the-art approximate multipliers. In particular, we evaluate the impact of ten adversarial attacks on different AxDNNs using the MNIST and CIFAR-10 datasets. Our results demonstrate that an adversarial attack on an AxDNN can cause up to 53% accuracy loss, whereas the same attack may lead to almost no accuracy loss (as low as 0.06%) in the accurate DNN. Thus, approximate computing cannot be referred to as a universal defense strategy against adversarial attacks.