In this research, we propose TENT, a new low-precision framework that leverages the benefits of a tapered fixed-point numerical format in TinyML models. We introduce a tapered fixed-point quantization algorithm that matches the numerical format's dynamic range and distribution to those of the deep neural network's parameter distribution at each layer. We also propose an accelerator architecture for tapered fixed-point within the TENT framework. Results show that classification accuracy improves by up to ~31 %, with an energy overhead of ~17-30 % compared to fixed-point, for ConvNet and ResNet-18 models.
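As a rough illustration of matching a fixed-point format's range to a layer's parameter distribution, the sketch below chooses the number of fraction bits per layer from the layer's maximum absolute weight. This is a simplified hypothetical quantizer for plain fixed-point, not the TENT tapered-precision algorithm itself; the function name and bit-width parameter are assumptions for the example.

```python
import numpy as np

def per_layer_fixed_point_quantize(w, total_bits=8):
    """Hypothetical per-layer fixed-point quantizer (illustrative only).

    Picks integer/fraction bit split so the representable range just
    covers the layer's max |w|, then rounds to the nearest grid point.
    """
    max_abs = float(np.max(np.abs(w)))
    # Integer bits needed to cover max_abs, including the sign bit.
    int_bits = max(0, int(np.ceil(np.log2(max_abs + 1e-12))) + 1)
    frac_bits = max(0, total_bits - int_bits)
    scale = 2.0 ** frac_bits
    qmin = -(2 ** (total_bits - 1))
    qmax = 2 ** (total_bits - 1) - 1
    q = np.clip(np.round(w * scale), qmin, qmax)
    return q / scale

# Example: a layer whose weights cluster near zero gets all 8 bits
# assigned to the fraction, giving a fine quantization step of 2^-8.
layer = np.random.default_rng(0).normal(0.0, 0.05, size=1000)
wq = per_layer_fixed_point_quantize(layer, total_bits=8)
err = float(np.max(np.abs(layer - wq)))
```

A tapered format like the one TENT targets goes further by concentrating precision where parameter values are densest, rather than using the uniform step shown here.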