Recently, accelerators for extremely quantized deep neural network (DNN) inference with operand widths as low as 1 bit have gained popularity due to their ability to drastically reduce the energy cost per inference. In this paper, a flexible SoC with mixed-precision support is presented. Contrary to the current trend of fixed-datapath accelerators, this architecture uses a flexible datapath based on a Transport-Triggered Architecture (TTA) and is fully programmable in C. The accelerator achieves a peak energy efficiency of 35/67/405 fJ/op (binary, ternary, and 8-bit precision, respectively) and a throughput of 614/307/77 GOPS, which is unprecedented for a programmable architecture.