Today, almost all computer systems use IEEE-754 floating point to represent real numbers. Recently, posit was proposed as an alternative to IEEE-754 floating point, as it offers better accuracy and a larger dynamic range. However, the configurable nature of posit, with a varying number of regime and exponent bits, has acted as a deterrent to its adoption. To overcome this shortcoming, we propose the fixed-posit representation, in which the numbers of regime and exponent bits are fixed, and present the design of a fixed-posit multiplier. We evaluate the fixed-posit multiplier on error-resilient applications from the AxBench and OpenBLAS benchmarks, as well as on neural networks. The proposed fixed-posit multiplier achieves power, area, and delay savings of 47%, 38.5%, and 22%, respectively, compared to posit multipliers, and up to 70%, 66%, and 26%, respectively, compared to a 32-bit IEEE-754 multiplier. These savings are accompanied by minimal output quality loss (1.2% average relative error) across the OpenBLAS and AxBench workloads. Further, for neural networks such as ResNet-18 on ImageNet, we observe a negligible accuracy loss (0.12%) when using the fixed-posit multiplier.
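To make the fixed-posit idea concrete, the following is a minimal decoding sketch, not the paper's exact encoding: it assumes an n-bit word with a sign bit, a fixed-width regime field stored as a two's-complement integer (standard posit instead run-length encodes the regime), a fixed-width exponent field, and the remaining bits as the fraction. The widths `regime_bits` and `exp_bits` and the sign-magnitude handling are illustrative assumptions.

```python
def decode_fixed_posit(bits: int, n: int = 32,
                       regime_bits: int = 3, exp_bits: int = 4) -> float:
    """Hypothetical fixed-posit decoder: all field widths are fixed,
    unlike standard posit, where the regime is run-length encoded."""
    sign = -1.0 if (bits >> (n - 1)) & 1 else 1.0
    frac_bits = n - 1 - regime_bits - exp_bits

    # Regime as a fixed-width two's-complement integer (an assumption).
    regime_raw = (bits >> (n - 1 - regime_bits)) & ((1 << regime_bits) - 1)
    regime = regime_raw - (1 << regime_bits) \
        if regime_raw >= (1 << (regime_bits - 1)) else regime_raw

    exponent = (bits >> frac_bits) & ((1 << exp_bits) - 1)
    fraction = bits & ((1 << frac_bits) - 1)

    # Combined scale, mirroring posit's useed^k * 2^e with useed = 2^(2^es).
    scale = regime * (1 << exp_bits) + exponent
    return sign * (2.0 ** scale) * (1.0 + fraction / (1 << frac_bits))


# With all fields zero the decoded value is 1.0; setting the exponent
# field to 1 doubles the scale.
print(decode_fixed_posit(0))        # 1.0
print(decode_fixed_posit(1 << 24))  # 2.0
```

Because the field boundaries are fixed, the extraction above is plain masking and shifting, which is what removes the variable-length regime decoding logic from the multiplier's critical path.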