Transformer-based models have driven significant innovation in fields as varied as speech processing, natural language processing, and computer vision. Built on the Transformer, attention-based end-to-end automatic speech recognition (ASR) models have recently become popular. In particular, non-autoregressive modeling, which offers fast inference and performance comparable to conventional autoregressive methods, is an emerging research topic. In natural language processing, the bidirectional encoder representations from Transformers (BERT) model has received widespread attention, partly due to its ability to infer contextualized word representations and to deliver superior performance on downstream tasks with only simple fine-tuning. Motivated by this success, we view speech recognition as a downstream task of BERT, such that an ASR system can be obtained through fine-tuning. Consequently, to inherit the advantages of non-autoregressive ASR models while also enjoying the benefits of a pre-trained language model (e.g., BERT), we propose a non-autoregressive Transformer-based end-to-end ASR model built on BERT. A series of experiments on the AISHELL-1 dataset demonstrates that the model achieves competitive or superior results compared to state-of-the-art ASR systems.