Deep neural networks dominate virtually all modern vision systems, providing high performance at the cost of increased computational complexity. Since these systems often have to operate both in real time and with minimal energy consumption (e.g., wearable devices, autonomous vehicles, edge Internet of Things (IoT) nodes, sensor networks), various network optimisation techniques are applied, such as quantisation, pruning, or dedicated lightweight architectures. Because the weights in neural network layers follow a roughly logarithmic distribution, Power-of-Two (PoT) quantisation -- itself logarithmic -- provides high performance even at significantly reduced computational precision (4-bit weights and below). This method also opens up the possibility of replacing the Multiply-and-ACcumulate (MAC) units typical of neural networks (performing, e.g., convolution operations) with more energy-efficient Bitshift-and-ACcumulate (BAC) units. In this paper, we show that a hardware neural network accelerator with PoT weights implemented on the Zynq UltraScale+ MPSoC ZCU104 SoC FPGA can be at least $1.4\times$ more energy efficient than its uniform-quantisation counterpart. To further reduce the actual power requirement by omitting part of the computation for zero weights, we also propose a new pruning method adapted to logarithmic quantisation.
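To illustrate the idea behind replacing MAC with BAC units, the following minimal Python sketch rounds weights to signed powers of two and performs the dot product with arithmetic shifts instead of multiplications. It is only an illustration under assumed parameters (the exponent range, the fixed-point format, and the function names are hypothetical and not taken from the paper's accelerator design).

```python
import numpy as np

def pot_quantise(weights, exp_min=-6, exp_max=0):
    """Round each weight to the nearest signed power of two, +/-2**e.
    exp_min/exp_max are assumed bounds, e.g. a sign bit plus a 3-bit exponent."""
    signs = np.where(weights < 0, -1, 1)
    mags = np.maximum(np.abs(weights), 2.0 ** exp_min)   # avoid log2(0)
    exps = np.clip(np.round(np.log2(mags)), exp_min, exp_max)
    return signs.astype(int), exps.astype(int)

def bac_accumulate(acts_fixed, signs, exps):
    """Bitshift-and-ACcumulate: each multiply by +/-2**e becomes a shift of the
    fixed-point activation; the result keeps the activations' fixed-point scale."""
    acc = 0
    for x, s, e in zip(acts_fixed, signs, exps):
        shifted = x << e if e >= 0 else x >> (-e)   # shift replaces the multiplier
        acc += s * shifted                           # a MAC would do acc += x * w
    return acc

# Usage: compare BAC on 8-bit fixed-point activations with a float dot product
# against the PoT-quantised weights.
w = np.array([0.4, -0.12, 0.9])
x = np.array([0.5, 1.0, 0.25])
s, e = pot_quantise(w)
x_fixed = np.round(x * 2 ** 8).astype(int)
approx = bac_accumulate(x_fixed, s, e) / 2 ** 8      # equals np.dot(x, s * 2.0 ** e)
```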