Systolic array-based deep neural network (DNN) accelerators have recently gained prominence for their low computational cost. However, their high energy consumption remains a bottleneck to their deployment in energy-constrained devices. To address this problem, approximate computing can be employed at the cost of some tolerable accuracy loss. However, such small accuracy variations may increase the sensitivity of DNNs to undesired subtle disturbances, such as permanent faults. The impact of permanent faults in accurate DNN accelerators has been thoroughly investigated in the literature. Conversely, the impact of permanent faults in approximate DNN accelerators (AxDNNs) remains under-explored. The impact of such faults may vary with the fault bit position, the activation function, and the approximation error in the AxDNN layers. Such dynamicity poses a considerable challenge to exploring the trade-off between energy efficiency and fault resilience in AxDNNs. To this end, we present an extensive layer-wise and bit-wise fault resilience and energy analysis of different AxDNNs built with the state-of-the-art Evoapprox8b signed multipliers. In particular, we vary the stuck-at-0 and stuck-at-1 fault bit positions and the activation functions to study their impact on the widely used MNIST and Fashion-MNIST datasets. Our quantitative analysis shows that permanent faults exacerbate the accuracy loss in AxDNNs compared to accurate DNN accelerators. For instance, a permanent fault in an AxDNN can lead to an accuracy loss of up to 66\%, whereas the same faulty bit causes only a 9\% accuracy loss in an accurate DNN accelerator. Our results demonstrate that fault resilience in AxDNNs is orthogonal to their energy efficiency.
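To make the studied fault model concrete, the following Python sketch shows how a single stuck-at-0 or stuck-at-1 fault at a chosen bit position of a signed multiplier product can be modeled in software. It is a minimal illustration only: the 16-bit product width, the function names, and the standalone demo are assumptions for exposition and do not reproduce the actual fault-injection framework or the Evoapprox8b multiplier implementations evaluated in this work.

\begin{verbatim}
# Minimal sketch of single stuck-at fault injection at one bit position of a
# signed multiplier product. The 16-bit two's-complement product width (as
# produced by an 8x8-bit signed multiplier) and all function names are
# illustrative assumptions, not the evaluation framework of this work.

def to_twos_complement(value, width):
    # Encode a signed integer as an unsigned two's-complement word.
    return value & ((1 << width) - 1)

def from_twos_complement(word, width):
    # Decode an unsigned two's-complement word back to a signed integer.
    sign_bit = 1 << (width - 1)
    return (word ^ sign_bit) - sign_bit

def inject_stuck_at(product, bit, stuck_at, width=16):
    # Force one output bit of the product to 0 (stuck-at-0) or 1 (stuck-at-1).
    word = to_twos_complement(product, width)
    if stuck_at == 1:
        word |= (1 << bit)      # the faulty bit always reads 1
    else:
        word &= ~(1 << bit)     # the faulty bit always reads 0
    return from_twos_complement(word, width)

if __name__ == "__main__":
    exact = -77 * 53            # exact signed product: -4081
    # A stuck-at-0 fault on the sign bit turns a negative product into a
    # large positive one, illustrating why the fault bit position strongly
    # influences the accuracy loss.
    print(exact, inject_stuck_at(exact, bit=15, stuck_at=0))
\end{verbatim}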