In today's era of smart cyber-physical systems, Deep Neural Networks (DNNs) have become ubiquitous due to their state-of-the-art performance in complex real-world applications. The high computational complexity of these networks, which translates to increased energy consumption, is the foremost obstacle to deploying large DNNs in resource-constrained systems. Fixed-Point (FP) implementations achieved through post-training quantization are commonly used to curtail the energy consumption of these networks. However, the uniform quantization intervals in FP force large bit-widths for data structures, since most numbers must be represented with sufficient resolution to avoid high quantization errors. In this paper, we leverage the key insight that, in most scenarios, DNN weights and activations are concentrated near zero and only a few have large magnitudes. We propose CoNLoCNN, a framework to enable energy-efficient low-precision deep convolutional neural network inference by exploiting: (1) non-uniform quantization of weights, enabling simplification of complex multiplication operations; and (2) correlation between activation values, enabling partial compensation of quantization errors at low cost without any run-time overhead. To fully benefit from non-uniform quantization, we also propose a novel data representation format, Encoded Low-Precision Binary Signed Digit, to compress the bit-width of weights while ensuring direct use of the encoded weights for processing through a novel multiply-and-accumulate (MAC) unit design.
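To make the core idea concrete, the following is a minimal sketch, in Python, of how signed-digit, power-of-two weight quantization turns multiplications into shift-and-add operations. It is not the paper's actual Encoded Low-Precision Binary Signed Digit format or MAC design; the function names, the greedy rounding scheme, and the exponent range are illustrative assumptions only.

```python
# Illustrative sketch (NOT the paper's ELP-BSD encoding): approximate each
# weight as a sum of at most k signed powers of two, so that multiplying an
# activation by the weight needs only shifts and adds -- the kind of
# simplification non-uniform quantization enables in a MAC unit.
import math

def quantize_signed_digits(w, k=2, exp_min=-6, exp_max=0):
    """Greedily approximate w by at most k terms s * 2^e, with s in {-1, +1}
    and e in [exp_min, exp_max]. Returns a list of (s, e) digit terms.
    Because weights cluster near zero, small negative exponents dominate."""
    terms, residual = [], w
    for _ in range(k):
        if residual == 0:
            break
        s = 1 if residual > 0 else -1
        # nearest power-of-two exponent to the residual's magnitude
        e = round(math.log2(abs(residual)))
        e = max(exp_min, min(exp_max, e))
        terms.append((s, e))
        residual -= s * (2.0 ** e)
    return terms

def shift_add_mul(x_int, terms):
    """Multiply a fixed-point integer activation by a signed-digit weight
    using only shifts and adds (negative exponents become right shifts)."""
    acc = 0
    for s, e in terms:
        acc += s * (x_int << e if e >= 0 else x_int >> -e)
    return acc

if __name__ == "__main__":
    w = 0.3                               # small-magnitude weight, the common case
    terms = quantize_signed_digits(w)     # e.g. [(+1, -2), (+1, -4)] = 0.3125
    approx = sum(s * 2.0 ** e for s, e in terms)
    print(f"w={w} -> digits {terms} (value {approx})")
    x = int(1.5 * 2 ** 8)                 # activation in 8-fractional-bit fixed point
    print("shift-add product:", shift_add_mul(x, terms) / 2 ** 8)
```

With only two nonzero signed digits per weight, each multiply collapses to two shifts and one add, which is the kind of hardware saving the proposed MAC unit is designed to exploit; the paper's actual encoding additionally compresses the digit positions into a low bit-width format.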