Modern non-autoregressive~(NAR) speech recognition systems aim to accelerate inference; however, they suffer from performance degradation compared with autoregressive~(AR) models, as well as from large model sizes. We propose a novel knowledge transfer and distillation architecture that leverages knowledge from AR models to improve NAR performance while reducing the model size. Frame-level and sequence-level objectives are carefully designed for transfer learning. To further boost NAR performance, a beam search method over Mask-CTC is developed to enlarge the search space during inference. Experiments show that the proposed NAR beam search yields a relative CER reduction of over 5% on the AISHELL-1 benchmark with a tolerable real-time-factor~(RTF) increase. With knowledge transfer, the NAR student of the same size as the AR teacher obtains relative CER reductions of 8%/16% on the AISHELL-1 dev/test sets, and over 25% relative WER reductions on the LibriSpeech test-clean/test-other sets. Moreover, the ~9x smaller NAR models achieve ~25% relative CER/WER reductions on both the AISHELL-1 and LibriSpeech benchmarks with the proposed knowledge transfer and distillation.
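The abstract does not spell out the frame- and sequence-level transfer objectives. The following is a minimal sketch, assuming a PyTorch setup, of what such a combined distillation loss could look like; the loss weights, temperature, and the use of a teacher beam-search hypothesis as a sequence-level pseudo-label are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def frame_level_kd_loss(student_logits, teacher_logits, temperature=1.0):
    """KL divergence between teacher and student per-frame posteriors.

    student_logits, teacher_logits: (batch, time, vocab)
    """
    t = temperature
    student_logp = F.log_softmax(student_logits / t, dim=-1)
    teacher_prob = F.softmax(teacher_logits / t, dim=-1)
    # Average KL over the batch; scale by t^2 as is common for temperature-scaled KD.
    return F.kl_div(student_logp, teacher_prob, reduction="batchmean") * (t * t)

def sequence_level_kd_loss(student_logits, teacher_hyp_ids, pad_id=0):
    """Cross-entropy of the student against the teacher's beam-search hypothesis,
    treated as a pseudo-label (one common form of sequence-level distillation).

    student_logits: (batch, length, vocab); teacher_hyp_ids: (batch, length)
    """
    return F.cross_entropy(
        student_logits.transpose(1, 2), teacher_hyp_ids, ignore_index=pad_id
    )

def total_loss(sup_loss, student_logits, teacher_logits, teacher_hyp_ids,
               alpha=0.5, beta=0.5):
    # Supervised ASR loss (e.g. CTC/CE) plus the two hypothetical transfer terms.
    return (sup_loss
            + alpha * frame_level_kd_loss(student_logits, teacher_logits)
            + beta * sequence_level_kd_loss(student_logits, teacher_hyp_ids))
```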