Binary memristive crossbars have attracted significant attention as energy-efficient hardware accelerators for deep learning. Nonetheless, they suffer from various sources of noise due to the analog nature of the crossbar. To overcome this limitation, most previous works train the weight parameters with noise data collected from a crossbar. These methods are, however, ineffective in high-volume manufacturing environments, where each crossbar exhibits large device- and circuit-level variation and collecting per-device noise data is impractical. Moreover, we argue that there is still room for improvement even though these methods improve accuracy to some extent. This paper explores a new, more general perspective on mitigating crossbar noise: manipulating the input binary bit encoding rather than training the network weights on noise data. We first show mathematically that, for the same amount of information, the noise decreases as the number of binary encoding pulses increases. Building on our in-depth analysis showing that each layer has a different level of noise sensitivity, we then propose Gradient-based Bit Encoding Optimization (GBO), which optimizes a different number of pulses at each layer. The proposed heterogeneous layer-wise bit encoding scheme achieves high noise robustness at low computational cost. Experimental results on public benchmark datasets show that GBO improves classification accuracy by ~5-40% under severe noise.
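The claim that noise shrinks as the pulse count grows can be illustrated with a standard averaging argument; the sketch below assumes i.i.d. additive per-pulse read noise, and the symbols $x$, $T$, $\epsilon_t$, and $\sigma$ are illustrative rather than the paper's notation.

```latex
% Sketch: encode a value x over T binary pulses; each read is corrupted
% by i.i.d. additive noise \epsilon_t with variance \sigma^2, then averaged:
\hat{x} \;=\; \frac{1}{T}\sum_{t=1}^{T}\bigl(x + \epsilon_t\bigr)
        \;=\; x + \frac{1}{T}\sum_{t=1}^{T}\epsilon_t,
\qquad
\operatorname{Var}(\hat{x}) \;=\; \frac{\sigma^2}{T}.
```

The effective noise standard deviation thus decays as $\sigma/\sqrt{T}$: spreading the same information over more pulses suppresses noise, at the price of additional read operations.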
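Since the abstract does not spell out how GBO is implemented, the following is only a minimal sketch of what a gradient-based, layer-wise pulse-count search could look like; the module name, the $\sigma/\sqrt{T}$ noise model, and the straight-through rounding are all assumptions, not the paper's actual method.

```python
import torch
import torch.nn as nn

class NoisyBinaryLinear(nn.Module):
    """Linear layer with a learnable per-layer pulse count (sketch only).

    Models the idea that reading an input over T binary pulses and
    averaging shrinks the effective crossbar read noise as sigma/sqrt(T).
    The noise model and parameterization are illustrative assumptions.
    """

    def __init__(self, in_features, out_features, max_pulses=8, sigma=0.1):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.sigma = sigma
        self.max_pulses = max_pulses
        # Continuous relaxation of the per-layer pulse count, optimized
        # by gradient descent alongside the network weights.
        self.pulse_logit = nn.Parameter(torch.zeros(1))

    def pulses(self):
        # Map the logit into [1, max_pulses]; round with a
        # straight-through estimator so gradients still flow.
        t = 1.0 + torch.sigmoid(self.pulse_logit) * (self.max_pulses - 1)
        return t + (t.round() - t).detach()

    def forward(self, x):
        t = self.pulses()
        # Effective read noise decays as sigma / sqrt(T) when the same
        # information is spread over T pulses and averaged.
        noise = torch.randn_like(x) * (self.sigma / t.sqrt())
        return self.linear(x + noise)
```

In such a setup, noise-sensitive layers would learn larger pulse counts, while robust layers keep theirs small; adding a penalty such as the sum of all layers' `pulses()` to the loss would trade accuracy against total read cost, matching the heterogeneous layer-wise encoding the abstract describes.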