Transformer-based models have recently achieved significant progress in end-to-end (E2E) automatic speech recognition (ASR), making it possible to deploy E2E ASR systems on smart devices. However, these models still have the disadvantage of requiring a large number of parameters. To overcome this drawback of universal Transformer models for ASR on edge devices, we propose a solution that reuses blocks in Transformer models for small-footprint ASR systems, meeting the objective of accommodating resource limitations without compromising recognition accuracy. Specifically, we design a novel block-reusing strategy for the speech Transformer (BRST) to enhance parameter efficiency, and we propose an adapter module (ADM) that produces a compact and adaptable model with only a few additional trainable parameters accompanying each reused block. We conducted experiments with the proposed method on the public AISHELL-1 corpus, and the results show that the proposed approach achieves character error rates (CER) of 9.3% and 6.63% with only 7.6M and 8.3M parameters, without and with the ADM, respectively. In addition, we provide a deeper analysis of the effect of the ADM in the general block-reusing method.
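To make the block-reusing idea concrete, the following is a minimal PyTorch sketch of an encoder that applies one shared Transformer block repeatedly, with a small per-reuse adapter adding the only depth-specific trainable parameters. The abstract does not specify the internals of BRST or the ADM, so the class names (`BlockReusedEncoder`, `Adapter`), the bottleneck-adapter layout, and all dimensions here are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual.
    A common adapter form; the paper's actual ADM design may differ."""

    def __init__(self, d_model: int, bottleneck: int = 32):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))


class BlockReusedEncoder(nn.Module):
    """One shared Transformer block applied n_reuse times; only the
    per-reuse adapters add parameters beyond the single shared block."""

    def __init__(self, d_model: int = 256, nhead: int = 4,
                 n_reuse: int = 12, use_adm: bool = True):
        super().__init__()
        self.shared_block = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, dim_feedforward=1024,
            batch_first=True)
        self.adapters = (nn.ModuleList(Adapter(d_model)
                                       for _ in range(n_reuse))
                         if use_adm else None)
        self.n_reuse = n_reuse

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for i in range(self.n_reuse):
            x = self.shared_block(x)       # same weights at every depth
            if self.adapters is not None:
                x = self.adapters[i](x)    # small depth-specific correction
        return x


# Quick shape check on dummy acoustic features: (batch, frames, d_model).
enc = BlockReusedEncoder()
out = enc(torch.randn(2, 100, 256))
print(out.shape)  # torch.Size([2, 100, 256])
```

Under these assumptions, the memory saving comes from storing one block's weights instead of `n_reuse` distinct blocks, while each adapter contributes only on the order of `2 * d_model * bottleneck` extra parameters per reuse.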