End-to-end neural network models achieve improved performance on various automatic speech recognition (ASR) tasks. However, these models perform poorly on edge hardware due to their large memory and computation requirements. While quantizing model weights and/or activations to low precision is a promising solution, previous research on quantizing ASR models is limited. In particular, previous approaches use floating-point arithmetic during inference and thus cannot fully exploit efficient integer processing units. Moreover, they require training/validation data during quantization, which may not be available due to security or privacy concerns. To address these limitations, we propose an integer-only, zero-shot quantization scheme for ASR models. In particular, we generate synthetic data whose runtime statistics resemble the real data, and we use it to calibrate models during quantization. We apply our method to quantize QuartzNet, Jasper, and Conformer and show negligible WER change compared to the full-precision baseline models, even without using any training data. Moreover, with INT8 quantization we achieve up to a 2.35x speedup on a T4 GPU and a 4x compression rate, with modest WER degradation of less than 1%.
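The zero-shot calibration step described above can be sketched concretely. Below is a minimal PyTorch sketch of the general idea, assuming a model with BatchNorm layers (as in QuartzNet and Jasper): synthetic inputs are optimized so that their batch statistics match the BatchNorm running statistics learned on real data, and the resulting batch is then used to calibrate per-layer activation ranges for INT8. All function names, hyperparameters, and the min-max calibration rule here are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

def generate_synthetic_batch(model, shape, steps=500, lr=0.1):
    """Optimize a random input so that the batch statistics observed at each
    BatchNorm layer match that layer's stored running mean/variance.
    Assumes the model contains at least one BatchNorm layer."""
    model.eval()
    for p in model.parameters():
        p.requires_grad_(False)  # only the synthetic input is trainable
    x = torch.randn(shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)

    stats = []  # (bn_module, batch_mean, batch_var) captured by forward hooks
    def capture(bn, inputs, output):
        inp = inputs[0]
        dims = [d for d in range(inp.dim()) if d != 1]  # reduce all but channel dim
        stats.append((bn, inp.mean(dim=dims), inp.var(dim=dims, unbiased=False)))

    handles = [m.register_forward_hook(capture)
               for m in model.modules()
               if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d))]
    try:
        for _ in range(steps):
            opt.zero_grad()
            stats.clear()
            model(x)
            # Distance between the synthetic batch statistics and the running
            # statistics accumulated on real data during training.
            loss = sum((mu - bn.running_mean).pow(2).mean()
                       + (var - bn.running_var).pow(2).mean()
                       for bn, mu, var in stats)
            loss.backward()
            opt.step()
    finally:
        for h in handles:
            h.remove()
    return x.detach()

@torch.no_grad()
def calibrate_int8_ranges(model, batch):
    """Record per-layer activation min/max on the synthetic batch; each range
    yields an asymmetric INT8 scale s = (max - min) / 255 and a zero point.
    (A simple min-max rule; the paper may use a different calibration.)"""
    ranges = {}
    def track(name):
        def fn(module, inputs, output):
            lo, hi = output.min().item(), output.max().item()
            old_lo, old_hi = ranges.get(name, (lo, hi))
            ranges[name] = (min(old_lo, lo), max(old_hi, hi))
        return fn
    handles = [m.register_forward_hook(track(n))
               for n, m in model.named_modules()
               if isinstance(m, (nn.Conv1d, nn.Linear))]
    model(batch)
    for h in handles:
        h.remove()
    return {n: {"scale": (hi - lo) / 255.0, "zero_point": lo}
            for n, (lo, hi) in ranges.items()}
```

In an integer-only deployment, the scales and zero points produced this way would be folded into integer kernels, so that inference never falls back to floating-point arithmetic and can run entirely on efficient integer processing units.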