The posit format is a promising alternative to the IEEE-754 floating-point format for deep learning applications, owing to its better trade-off between dynamic range and accuracy. However, hardware implementation of posit arithmetic requires further exploration, especially for the dot-product operations that dominate deep neural networks (DNNs). Existing implementations rely on either a combination of multipliers and an adder tree or on cascaded fused multiply-add units, leading to poor computational efficiency and excessive hardware overhead. To address this issue, we propose an open-source posit dot-product unit, namely PDPU, that facilitates resource-efficient and high-throughput dot-product hardware implementation. PDPU not only features a fused and mixed-precision architecture that eliminates redundant latency and hardware resources, but also has a fine-grained 6-stage pipeline that improves computational efficiency. A configurable PDPU generator is further developed to meet the diverse accuracy requirements of various DNNs. Experimental results under a 28 nm CMOS process show that PDPU reduces area, latency, and power by up to 43%, 64%, and 70%, respectively, compared to existing implementations. Hence, PDPU has great potential as the computing core of posit-based accelerators for deep learning applications.
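To make the operation concrete, the following is a minimal software model of what a fused posit dot-product unit computes: decode posit operands, then multiply and accumulate all products before any rounding. The posit<8,1> format, the function names, and the use of native floats as the wide accumulator are illustrative assumptions for this sketch; they are not the PDPU hardware parameters.

```python
def decode_posit(bits: int, n: int = 8, es: int = 1) -> float:
    """Decode an n-bit posit (es exponent bits) into a Python float.

    Layout: sign | regime (run-length coded) | up to `es` exponent bits | fraction.
    value = (-1)^s * useed^k * 2^e * (1 + f), with useed = 2^(2^es).
    (Illustrative model; not the PDPU decoder itself.)
    """
    mask = (1 << n) - 1
    bits &= mask
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):
        return float("nan")                      # NaR (not-a-real)
    sign = -1.0 if bits >> (n - 1) else 1.0
    if sign < 0:
        bits = (-bits) & mask                    # negative posits are two's complement
    body = format(bits & ((1 << (n - 1)) - 1), f"0{n - 1}b")
    run = len(body) - len(body.lstrip(body[0]))  # regime run length
    k = run - 1 if body[0] == "1" else -run
    rest = body[run + 1:]                        # skip the regime terminator bit
    ebits = (rest + "0" * es)[:es]               # truncated exponent bits read as 0
    e = int(ebits, 2) if ebits else 0
    frac = rest[es:]
    f = int(frac, 2) / (1 << len(frac)) if frac else 0.0
    return sign * (2.0 ** (2 ** es)) ** k * 2.0 ** e * (1.0 + f)


def posit_dot(a, b, n: int = 8, es: int = 1) -> float:
    """Fused dot product model: every product is accumulated in one wide
    accumulator (here a native float) before any rounding step."""
    return sum(decode_posit(x, n, es) * decode_posit(y, n, es)
               for x, y in zip(a, b))
```

A fused unit like this rounds once at the end, whereas cascaded multiply-add units round after every step, which is one source of both the accuracy and latency differences the abstract refers to.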