The network architectures of end-to-end (E2E) automatic speech recognition (ASR) can be classified into several models, including connectionist temporal classification (CTC), recurrent neural network transducer (RNN-T), attention mechanism, and non-autoregressive mask-predict models. Since each of these network architectures has pros and cons, a typical use case is to switch among these separate models depending on the application requirements, resulting in the increased overhead of maintaining all of them. Several methods for integrating two of these complementary models to mitigate this overhead have been proposed; however, if we integrate more models, we can benefit further from their complementary properties and realize broader applications with a single system. This paper proposes four-decoder joint modeling (4D) of CTC, attention, RNN-T, and mask-predict, which has the following three advantages: 1) The four decoders are jointly trained so that they can be easily switched depending on the application scenario. 2) Joint training may provide model regularization and improve robustness thanks to the decoders' complementary properties. 3) Novel one-pass joint decoding methods using CTC, attention, and RNN-T further improve the performance. The experimental results showed that the proposed model consistently reduced the word error rate (WER).