The Posit Number System was introduced in 2017 as a replacement for floating-point numbers. Since then, the community has explored its application to neural-network tasks and has produced several unit designs, which are still far from competitive with their floating-point counterparts. This paper proposes a Posit Logarithm-Approximate Multiplication (PLAM) scheme to significantly reduce the complexity of posit multipliers, the most power-hungry units within Deep Neural Network architectures. When compared with state-of-the-art posit multipliers, experiments show that the proposed technique reduces the area, power, and delay of hardware multipliers by up to 72.86%, 81.79%, and 17.01%, respectively, without accuracy degradation.
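The abstract does not detail the internals of PLAM, but the underlying principle of logarithm-approximate multiplication is well known: replace the multiplication of significands by an addition in the (approximate) log domain, using Mitchell's approximation log2(1 + m) ≈ m. The sketch below is a minimal software emulation of that principle on ordinary floats, not the paper's posit hardware design; the function name and decomposition are illustrative assumptions.

```python
import math

def mitchell_approx_mul(a: float, b: float) -> float:
    """Approximate a*b in the log domain (Mitchell-style).

    Illustrative only: PLAM applies a similar idea to the posit
    fraction field in hardware; here the principle is emulated
    on Python floats.
    """
    sign = math.copysign(1.0, a) * math.copysign(1.0, b)
    a, b = abs(a), abs(b)
    if a == 0.0 or b == 0.0:
        return 0.0
    # Decompose each operand as 2**e * (1 + m), with 0 <= m < 1.
    ea = math.floor(math.log2(a))
    eb = math.floor(math.log2(b))
    ma = a / 2 ** ea - 1.0
    mb = b / 2 ** eb - 1.0
    # Mitchell: log2(1 + m) ~= m, so log2(a*b) ~= ea + eb + ma + mb.
    s = ea + eb + ma + mb
    # Convert back with the same approximation: 2**(e + f) ~= 2**e * (1 + f).
    e_int = math.floor(s)
    frac = s - e_int
    return sign * (2 ** e_int) * (1.0 + frac)

print(mitchell_approx_mul(3.0, 5.0))  # ~14.0 vs. exact 15.0
```

Mitchell's approximation turns the multiplier's partial-product array into an adder, which is the main source of the area and power savings; its relative error is bounded (roughly 11% in the worst case), a level of inexactness that DNN inference typically tolerates, consistent with the paper's claim of no accuracy degradation.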