The fast proliferation of extreme-edge applications based on Deep Learning (DL) algorithms requires dedicated hardware to satisfy their latency, throughput, and precision requirements. While inference is achievable in practical cases, online fine-tuning and adaptation of general DL models are still highly challenging. One of the key stumbling blocks is the need for parallel floating-point operations, which are considered unaffordable on sub-100 mW extreme-edge SoCs. We tackle this problem with RedMulE (Reduced-precision matrix Multiplication Engine), a parametric low-power hardware accelerator for FP16 matrix multiplications - the main kernel of DL training and inference - conceived for tight integration within a cluster of tiny RISC-V cores based on the PULP (Parallel Ultra-Low-Power) architecture. In 22 nm technology, a 32-FMA RedMulE instance occupies just 0.07 mm^2 (14% of an 8-core RISC-V cluster) and reaches a maximum operating frequency of 666 MHz, for a throughput of 31.6 MAC/cycle (98.8% utilization). We measure a cluster-level power consumption of 43.5 mW and a full-cluster energy efficiency of 688 16-bit GFLOPS/W. Overall, RedMulE achieves up to 4.65x higher energy efficiency and a 22x speedup over software execution on 8 RISC-V cores.
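To make the target workload concrete, the sketch below shows the class of FP16 matrix-multiplication kernels (GEMM-like, Z = X * W + Y) that an accelerator such as RedMulE offloads from the RISC-V cores. It is a minimal illustrative reference in C, assuming row-major matrices and a compiler with the _Float16 extension; the function name and signature are hypothetical and do not reflect the actual RedMulE programming interface.

```c
#include <stddef.h>

/* Reference FP16 GEMM-like kernel: Z = X * W + Y.
 * X is M x K, W is K x N, Y and Z are M x N, all row-major.
 * Assumes _Float16 support (e.g. GCC/Clang targeting RISC-V with Zfh).
 * Illustrative software baseline only, not the RedMulE API. */
void gemm_fp16(const _Float16 *X, const _Float16 *W, const _Float16 *Y,
               _Float16 *Z, size_t M, size_t N, size_t K)
{
    for (size_t m = 0; m < M; m++) {
        for (size_t n = 0; n < N; n++) {
            _Float16 acc = Y[m * N + n];              /* start from the Y accumulator */
            for (size_t k = 0; k < K; k++) {
                acc += X[m * K + k] * W[k * N + n];   /* fused multiply-accumulate */
            }
            Z[m * N + n] = acc;
        }
    }
}
```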