End-to-end neural network models achieve improved performance on a variety of automatic speech recognition (ASR) tasks. However, these models perform poorly on edge hardware due to their large memory and computation requirements. While quantizing model weights and/or activations to low precision is a promising solution, previous research on quantizing ASR models is limited. In particular, previous approaches use floating-point arithmetic during inference and thus cannot fully exploit efficient integer processing units. Moreover, they require training and/or validation data during quantization, which may not be available due to security or privacy concerns. To address these limitations, we propose an integer-only, zero-shot quantization scheme for ASR models. In particular, we generate synthetic data whose runtime statistics resemble the real data, and we use it to calibrate models during quantization. We apply our method to quantize QuartzNet, Jasper, and Conformer and show negligible word error rate (WER) degradation compared to the full-precision baseline models, even without using any data. Moreover, with INT8 quantization we achieve up to a 2.35x speedup on a T4 GPU and a 4x compression rate, with a modest WER degradation of less than 1%.
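The abstract does not spell out how the synthetic calibration data is produced. A common zero-shot approach (in the spirit of ZeroQ-style data distillation) is to optimize random inputs so that the batch statistics they induce match each BatchNorm layer's stored running statistics. The sketch below assumes a PyTorch model containing BatchNorm layers; `distill_synthetic_batch` and its parameters are illustrative names, not the paper's API.

```python
import torch
import torch.nn as nn

def distill_synthetic_batch(model: nn.Module, input_shape, steps: int = 200, lr: float = 0.1):
    """Hypothetical sketch: optimize a random input so that the activation
    statistics it induces match each BatchNorm layer's running mean/variance."""
    model.eval()
    for p in model.parameters():
        p.requires_grad_(False)  # only the synthetic input is trainable

    x = torch.randn(input_shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)

    # Capture per-layer batch statistics of the BatchNorm *inputs* via hooks.
    captured, hooks = [], []
    def make_hook(bn):
        def hook(_, inp, __):
            a = inp[0]
            dims = [d for d in range(a.dim()) if d != 1]  # reduce all but channels
            captured.append((a.mean(dim=dims), a.var(dim=dims), bn))
        return hook
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
            hooks.append(m.register_forward_hook(make_hook(m)))

    for _ in range(steps):
        captured.clear()
        opt.zero_grad()
        model(x)
        # Distance between induced statistics and the stored running statistics.
        loss = sum((mu - bn.running_mean).pow(2).mean() +
                   (var - bn.running_var).pow(2).mean()
                   for mu, var, bn in captured)
        loss.backward()
        opt.step()

    for h in hooks:
        h.remove()
    return x.detach()
```

For a QuartzNet-style convolutional ASR model, `input_shape` would be a (batch, mel_bins, frames) spectrogram shape such as `(32, 64, 256)`; the optimized tensor then serves as a stand-in calibration batch when no real data is available.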
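On the integer-only side, the abstract implies static calibration: scale factors are fixed from the synthetic data in advance, so no floating-point range estimation is needed at inference time. Below is a minimal per-tensor symmetric INT8 calibration sketch under that assumption; `calibrate_activation_scales` and the helper names are hypothetical, not the paper's implementation.

```python
import torch
import torch.nn as nn

def symmetric_int8_scale(t: torch.Tensor) -> float:
    # Map the largest observed magnitude to 127 (symmetric, per-tensor).
    return t.abs().max().clamp(min=1e-8).item() / 127.0

def quantize_int8(t: torch.Tensor, scale: float) -> torch.Tensor:
    return torch.clamp(torch.round(t / scale), -127, 127).to(torch.int8)

@torch.no_grad()
def calibrate_activation_scales(model: nn.Module, synthetic_batch: torch.Tensor) -> dict:
    """Hypothetical sketch: run the synthetic batch through the model and
    record a per-layer activation scale from the observed dynamic range."""
    scales, hooks = {}, []
    def make_hook(name):
        def hook(_, __, out):
            s = symmetric_int8_scale(out)
            scales[name] = max(scales.get(name, 0.0), s)
        return hook
    for name, m in model.named_modules():
        if isinstance(m, (nn.Conv1d, nn.Linear)):
            hooks.append(m.register_forward_hook(make_hook(name)))
    model.eval()
    model(synthetic_batch)
    for h in hooks:
        h.remove()
    return scales
```

With weight scales computed analogously from each layer's weight tensor, an integer-only backend can execute the matrix multiplications and convolutions entirely in INT8 and fold the scales into per-layer requantization steps, which is what allows the method to exploit integer processing units such as the T4's INT8 tensor cores.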