Recent work on Binary Neural Networks (BNNs) has made promising progress in narrowing the accuracy gap between BNNs and their 32-bit counterparts. However, the accuracy gains are often based on specialized model designs using additional 32-bit components. Furthermore, almost all previous BNNs use 32-bit precision for feature maps and for the shortcuts enclosing the corresponding binary convolution blocks, which effectively helps to maintain accuracy, but is not friendly to hardware accelerators with limited memory, energy, and computing resources. Thus, we raise the following question: how can accuracy and energy consumption be balanced in a BNN design? We extensively study this fundamental problem and propose a novel BNN architecture without most of the commonly used 32-bit components: \textit{BoolNet}. Experimental results on ImageNet demonstrate that BoolNet achieves a 4.6$\times$ energy reduction coupled with 1.2\% higher accuracy than the commonly used BNN architecture Bi-RealNet. Code and trained models are available at: https://github.com/hpi-xnor/BoolNet.