Binary Neural Networks (BNNs) have shown tremendous success on realistic image classification tasks. Notably, their accuracy is similar to the state-of-the-art accuracy of full-precision models tailored to edge devices. BNNs are well suited to edge devices since they use a single bit to store each input and weight, so their storage requirements are low. Moreover, BNN computations consist mainly of XNOR and popcount operations, which can be implemented very efficiently with simple hardware structures. Nonetheless, supporting BNNs efficiently on mobile CPUs is far from trivial, since their benefits are hindered by frequent memory accesses to load weights and inputs. In a BNN, each weight or input is stored in one bit and, to increase storage and computation efficiency, several of them are packed together into a sequence of bits. In this work, we observe that the number of unique sequences representing a set of weights is typically low. We also observe that, during the evaluation of a BNN layer, a small group of unique sequences is used far more frequently than the others. Accordingly, we propose to exploit this observation by Huffman-encoding the bit sequences and decoding them through an indirection table during BNN evaluation. In addition, we propose a clustering scheme that identifies the most common bit sequences and replaces each uncommon sequence with a similar common one. As a result, we reduce both storage requirements and memory accesses, since common sequences are encoded with fewer bits. We extend a mobile CPU with a small hardware structure that efficiently caches and decodes the compressed bit sequences. We evaluate our scheme using the ReActNet model with the ImageNet dataset. Our experimental results show that our technique reduces memory requirements by 1.32x and improves performance by 1.35x.
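As a rough illustration of the two mechanisms the abstract describes, the sketch below shows an XNOR/popcount dot product over bit-packed ±1 vectors, and plain Huffman coding applied to a skewed stream of packed 4-bit weight sequences. This is a minimal software sketch under our own assumptions; all function names and the toy data are ours, not from the paper, and the paper's indirection-table decoder and clustering scheme are not modeled here.

```python
from collections import Counter
import heapq

def xnor_popcount_dot(a_bits, w_bits, n):
    """Dot product of two n-element {-1,+1} vectors packed as bits
    (bit=1 means +1). XNOR counts the matching positions; the dot
    product is then 2*matches - n."""
    matches = bin(~(a_bits ^ w_bits) & ((1 << n) - 1)).count("1")
    return 2 * matches - n

def huffman_codes(sequences):
    """Assign shorter prefix-free codes to more frequent packed bit
    sequences (textbook Huffman coding over observed frequencies)."""
    freq = Counter(sequences)
    if len(freq) == 1:                      # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    heap = [[f, [sym, ""]] for sym, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[1:]:                 # prepend one bit to every code
            pair[1] = "0" + pair[1]
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return {sym: code for sym, code in heap[0][1:]}

# A skewed stream of packed 4-bit weight sequences, mimicking the
# observation that a few unique sequences dominate a BNN layer.
stream = [0b1010] * 8 + [0b0001] * 2 + [0b1111] * 1
codes = huffman_codes(stream)

print(xnor_popcount_dot(0b1011, 0b1001, 4))  # -> 2
```

With these frequencies the dominant sequence `0b1010` receives a 1-bit code while the rare ones receive 2-bit codes, which is the source of the storage and memory-traffic savings the abstract claims.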