Binary neural networks (BNNs) that use 1-bit weights and activations have garnered interest because such extreme quantization provides low power dissipation. By implementing BNNs as computing-in-memory (CIM), which performs multiplications and accumulations on memory arrays in an analog fashion (i.e., analog CIM), we can further improve the energy efficiency of processing neural networks. However, analog CIMs suffer from the potential problem that process variation degrades the accuracy of BNNs. Our Monte-Carlo simulations show that in an SRAM-based analog CIM running VGG-9, the CIFAR-10 classification accuracy degrades to below 20% under the process variation of 65nm CMOS. To overcome this problem, we present a variation-aware BNN framework. The proposed framework is developed for SRAM-based BNN CIMs, since SRAM is the most widely used on-chip memory; however, it is easily extensible to BNN CIMs based on other memories. Our extensive experimental results show that, under the process variation of 65nm CMOS, our framework significantly improves the CIFAR-10 accuracy of SRAM-based BNN CIMs from 10% to 87.76% for VGG-9 and from 10.1% to 77.74% for ResNet-18.
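To make the accuracy-degradation mechanism concrete, the following is a minimal sketch (not the paper's code or variation model) of a Monte-Carlo experiment on a single binary layer. It assumes, purely for illustration, that each CIM column accumulates the +/-1 multiply-accumulate result as an analog quantity and that process variation adds a zero-mean Gaussian error to that quantity before the sign activation; the layer sizes and the noise standard deviation are hypothetical parameters, not values from the paper.

import numpy as np

# Minimal sketch: Monte-Carlo estimate of how a per-column analog error
# (a stand-in for process variation) flips binary activations in a BNN layer.
rng = np.random.default_rng(0)

N_IN, N_OUT = 512, 128   # hypothetical fan-in and number of CIM columns
SIGMA = 8.0              # hypothetical std. dev. of the analog error, in unit MACs
N_TRIALS = 1000          # Monte-Carlo samples of process variation

# Random binary weights and one binary input vector (+1 / -1).
W = rng.choice([-1, 1], size=(N_OUT, N_IN))
x = rng.choice([-1, 1], size=N_IN)

ideal = W @ x                                   # ideal digital MAC per column
ideal_sign = np.where(ideal >= 0, 1, -1)        # binary (sign) activation

flip_rates = []
for _ in range(N_TRIALS):
    noise = rng.normal(0.0, SIGMA, size=N_OUT)  # one variation sample per column
    noisy_sign = np.where(ideal + noise >= 0, 1, -1)
    flip_rates.append(np.mean(noisy_sign != ideal_sign))

print(f"mean fraction of flipped binary activations: {np.mean(flip_rates):.3f}")

Even a modest per-column error flips a noticeable fraction of sign decisions, and such flips compound across layers, which is consistent with the severe CIFAR-10 accuracy drop reported above.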