Computing the Hamming weight of long, sparse binary vectors is an important module in many scientific applications, particularly in the spiking neural networks of interest to us. To improve both the area and latency of their FPGA implementations, we propose a method, inspired by synaptic transmission failure, that exploits FPGA lookup tables to compress long input vectors. To evaluate the effectiveness of this approach, we count the number of `1's in the compressed vector using a simple linear adder. We classify the compressors into shallow ones, with up to two levels of lookup tables, and deep ones, with more than two levels. The architecture generated by this approach achieves up to 82% and 35% reductions in area and latency, respectively, for different configurations of shallow compressors. Moreover, our simulation results show that calculating the Hamming weight of a 1024-bit vector in a spiking neural network using only deep compressors preserves the chaotic behavior of the network while only slightly impacting learning performance.
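As a purely illustrative sketch, and not the synaptic-failure-inspired compressor itself, the following Python analogue shows the general LUT-plus-linear-adder structure referred to above: the input vector is split into LUT-sized chunks, each chunk is reduced through a precomputed lookup table, and a linear adder chain accumulates the partial counts. The 6-bit LUT width, the function names, and the example vector are assumptions made for this sketch, not details taken from the design.

```python
# Minimal software analogue of a LUT-based Hamming-weight unit (assumptions:
# 6-input LUTs and a simple linear accumulation of partial counts).

LUT_WIDTH = 6  # typical FPGA LUT6 input width (assumed for this sketch)

# Precompute the popcount of every possible LUT input pattern.
POPCOUNT_LUT = [bin(x).count("1") for x in range(1 << LUT_WIDTH)]

def hamming_weight(bits):
    """Count the 1's in a long binary vector (list of 0/1) chunk by chunk."""
    total = 0
    for i in range(0, len(bits), LUT_WIDTH):
        chunk = bits[i:i + LUT_WIDTH]
        # Pack the chunk into an integer index for the lookup table.
        index = 0
        for b in chunk:
            index = (index << 1) | b
        index <<= LUT_WIDTH - len(chunk)  # zero-pad the final, shorter chunk
        total += POPCOUNT_LUT[index]      # linear accumulation of partial sums
    return total

# Example: a sparse 1024-bit vector, the vector length discussed above.
vec = [0] * 1024
for pos in (3, 97, 511, 1000):
    vec[pos] = 1
assert hamming_weight(vec) == 4
```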