Recurrent neural networks such as Long Short-Term Memories (LSTMs) learn temporal dependencies by keeping an internal state, making them well suited to time-series problems such as speech recognition. However, the output-to-input feedback creates distinctive memory-bandwidth and scalability challenges when designing accelerators for RNNs. We present Muntaniala, an RNN accelerator architecture for LSTM inference with a silicon-measured energy efficiency of 3.25 $TOP/s/W$ and a performance of 30.53 $GOP/s$ in UMC 65 $nm$ technology. The scalable design of Muntaniala allows running large RNN models by combining multiple tiles in a systolic array. We keep all parameters stationary on every die in the array, drastically reducing the I/O communication to loading new features and sharing partial results with other dies. To quantify the overall system power, including I/O power, we built Vau da Muntanialas, to the best of our knowledge the first demonstration of a systolic multi-chip-on-PCB array of RNN accelerators. Our multi-die prototype performs LSTM inference with 192 hidden states in 330 $\mu s$ with a total system power of 9.0 $mW$ at 10 $MHz$, consuming 2.95 $\mu J$. Targeting the 8/16-bit quantization implemented in Muntaniala, we show a phoneme error rate (PER) drop of approximately 3% with respect to floating-point (FP) on a 3L-384NH-123NI LSTM network on the TIMIT dataset.
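To make the weight-stationary, tiled operation concrete, the following is a minimal NumPy sketch (not the authors' implementation) of one LSTM inference step partitioned across dies in the spirit of the Muntaniala array. The partitioning scheme, the int8-weight/int16-activation fixed-point format, and all names (`DieTile`, `quantize`, the sizes) are illustrative assumptions; only new input features and the per-die hidden-state slices cross die boundaries, while the gate weights and cell state stay resident on each die.

```python
# Hedged sketch of weight-stationary, multi-die LSTM inference (assumptions noted above).
import numpy as np

NH, NI, N_DIES = 192, 64, 4          # hidden size, input size, number of tiles (192 as in the prototype)
SLICE = NH // N_DIES                  # hidden rows owned by one die

def quantize(x, bits):
    """Symmetric fixed-point quantization to a signed integer grid (assumed scheme)."""
    scale = (2 ** (bits - 1) - 1) / max(np.abs(x).max(), 1e-8)
    return np.round(x * scale).astype(np.int32), scale

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class DieTile:
    """One tile: keeps its slice of the four gate weight matrices stationary on-die."""
    def __init__(self, rng, lo, hi):
        self.lo, self.hi = lo, hi
        # Gates (i, f, g, o), each with input weights W, recurrent weights U, and bias b
        W = rng.standard_normal((4, hi - lo, NI)) * 0.1
        U = rng.standard_normal((4, hi - lo, NH)) * 0.1
        self.Wq, self.w_scale = quantize(W, 8)    # 8-bit weights (assumed mapping of 8/16-bit scheme)
        self.Uq, self.u_scale = quantize(U, 8)
        self.b = np.zeros((4, hi - lo))
        self.c = np.zeros(hi - lo)                # cell state never leaves the die

    def step(self, x, h_full):
        """Compute this die's slice of the new hidden state.
        Only x (new features) and h_full (slices shared by all dies) cross the I/O boundary."""
        xq, x_scale = quantize(x, 16)             # 16-bit activations (assumed)
        hq, h_scale = quantize(h_full, 16)
        pre = (self.Wq @ xq) / (self.w_scale * x_scale) \
            + (self.Uq @ hq) / (self.u_scale * h_scale) + self.b
        i, f = sigmoid(pre[0]), sigmoid(pre[1])
        g, o = np.tanh(pre[2]), sigmoid(pre[3])
        self.c = f * self.c + i * g
        return o * np.tanh(self.c)                # partial result exchanged with the other dies

# One inference step across the array: each die updates its slice, then the slices
# are concatenated, standing in for the partial-result exchange over the PCB.
rng = np.random.default_rng(0)
dies = [DieTile(rng, d * SLICE, (d + 1) * SLICE) for d in range(N_DIES)]
h = np.zeros(NH)
x = rng.standard_normal(NI)
h = np.concatenate([die.step(x, h) for die in dies])
print(h.shape)  # (192,)
```

Under these assumptions, the I/O traffic per time step is limited to the input feature vector and the exchanged hidden-state slices, which is what allows the weights to remain stationary on each die.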