The unified streaming and non-streaming two-pass (U2) end-to-end model for speech recognition has shown great performance in terms of streaming capability, accuracy, real-time factor (RTF), and latency. In this paper, we present U2++, an enhanced version of U2 that further improves accuracy. The core idea of U2++ is to use both the forward and the backward information of the label sequences at training time to learn richer information, and to combine the forward and backward predictions at decoding time to give more accurate recognition results. We also propose a new data augmentation method called SpecSub that makes the U2++ model more accurate and robust. Our experiments show that, compared with U2, U2++ converges faster at training, is more robust to the choice of decoding method, and achieves a consistent 5\%--8\% relative word error rate reduction over U2. On AISHELL-1, U2++ achieves a 4.63\% character error rate (CER) with a non-streaming setup and 5.05\% with a streaming setup at 320 ms latency. To the best of our knowledge, 5.05\% is the best published streaming result on the AISHELL-1 test set.
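To make the forward-backward combination concrete, the sketch below shows one way the two decoder directions could be fused when rescoring CTC n-best hypotheses: a left-to-right decoder scores each hypothesis, a right-to-left decoder scores its reversal, and the two attention scores are interpolated with the CTC score. The function and parameter names (rescore, l2r_score, r2l_score, ctc_weight, reverse_weight) and the default weights are illustrative assumptions, not the exact U2++/WeNet implementation.
\begin{verbatim}
# Minimal sketch (not the actual U2++/WeNet code): combine a
# left-to-right (forward) and a right-to-left (backward) attention
# decoder with CTC when rescoring n-best hypotheses.
def rescore(hyps, ctc_scores, l2r_score, r2l_score,
            ctc_weight=0.5, reverse_weight=0.3):
    best_hyp, best = None, float("-inf")
    for hyp, ctc in zip(hyps, ctc_scores):
        forward = l2r_score(hyp)         # forward decoder log-prob
        backward = r2l_score(hyp[::-1])  # backward decoder scores the
                                         # reversed label sequence
        attention = ((1 - reverse_weight) * forward
                     + reverse_weight * backward)
        score = ctc_weight * ctc + (1 - ctc_weight) * attention
        if score > best:
            best_hyp, best = hyp, score
    return best_hyp, best
\end{verbatim}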
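The abstract does not detail how SpecSub works, so the following is a rough illustration only: it substitutes random time chunks of a (time, frequency) feature matrix with equally sized chunks copied from earlier frames of the same utterance, one plausible reading of a substitution-style augmentation. The function name, parameters, and defaults are assumptions.
\begin{verbatim}
import random
import numpy as np

def spec_sub(spec, max_width=30, num_chunks=3):
    """Illustrative SpecSub-style augmentation (assumed, not the
    paper's exact recipe): overwrite up to num_chunks random time
    spans of `spec` (time x freq) with spans copied from earlier
    in the same utterance."""
    out = spec.copy()
    T = out.shape[0]
    for _ in range(num_chunks):
        width = random.randint(1, max_width)
        if T < 2 * width:  # utterance too short for this chunk size
            continue
        start = random.randint(width, T - width)  # span to overwrite
        offset = random.randint(1, start)         # copy from earlier
        out[start:start + width] = \
            spec[start - offset:start - offset + width]
    return out
\end{verbatim}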