The recent emergence of joint CTC-Attention models has brought significant improvements to automatic speech recognition (ASR). The improvement largely stems from the decoder's modeling of linguistic information. Jointly optimized with the acoustic encoder, the decoder learns a language model from ground-truth sequences in an auto-regressive manner during training. However, the decoder's training corpus is limited to speech transcriptions, which is far smaller than the corpus needed to train an acceptable language model. This leads to poor robustness of the decoder. To alleviate this problem, we propose a linguistic-enhanced Transformer, which introduces refined CTC information to the decoder during the training process so that the decoder becomes more robust. Our experiments on the AISHELL-1 speech corpus show that the character error rate (CER) is relatively reduced by up to 7%. We also find that, in the joint CTC-Attention ASR model, the decoder is more sensitive to linguistic information than to acoustic information.
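To illustrate the idea of feeding refined CTC information to the decoder during training, the following is a minimal sketch, assuming the "refined CTC information" takes the form of a collapsed greedy hypothesis from the CTC branch that occasionally replaces the ground-truth history used for teacher forcing. The function names (`collapse_ctc`, `decoder_history`) and the sampling probability are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def collapse_ctc(frame_ids: torch.Tensor, blank_id: int = 0) -> torch.Tensor:
    """Collapse a frame-level CTC path: merge repeated tokens, then drop blanks."""
    ids = frame_ids.tolist()
    merged = [ids[0]] if ids else []
    for t in ids[1:]:
        if t != merged[-1]:
            merged.append(t)
    return torch.tensor([t for t in merged if t != blank_id], dtype=torch.long)

def decoder_history(ctc_logprobs: torch.Tensor, ground_truth: torch.Tensor,
                    sample_prob: float = 0.5, blank_id: int = 0) -> torch.Tensor:
    """Choose the token history fed to the attention decoder for one utterance.

    ctc_logprobs: (T, V) frame-level log-probabilities from the CTC head.
    ground_truth: (U,) reference token ids.
    With probability `sample_prob`, the collapsed CTC greedy hypothesis is used,
    exposing the decoder to imperfect, ASR-like prefixes; otherwise standard
    teacher forcing on the ground truth is applied.
    """
    if torch.rand(1).item() < sample_prob:
        return collapse_ctc(ctc_logprobs.argmax(dim=-1), blank_id)
    return ground_truth

# Toy usage: 6 frames, vocabulary of 5 tokens (0 = blank).
logprobs = torch.log_softmax(torch.randn(6, 5), dim=-1)
ref = torch.tensor([3, 1, 4])
print(decoder_history(logprobs, ref))
```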