Recently, RNN-Transducers have achieved remarkable results on various automatic speech recognition tasks. However, lattice-free sequence discriminative training methods, which obtain superior performance in hybrid models, are rarely investigated for RNN-Transducers. In this work, we propose three lattice-free training objectives, namely lattice-free maximum mutual information, lattice-free segment-level minimum Bayes risk, and lattice-free minimum Bayes risk, which are applied to the final posterior output of a phoneme-based neural transducer with limited context dependency. Compared to criteria using N-best lists, lattice-free methods eliminate the decoding step for hypothesis generation during training, which leads to more efficient training. Experimental results show that lattice-free methods gain up to 6.5% relative improvement in word error rate compared to a sequence-level cross-entropy trained model. Compared to N-best-list-based minimum Bayes risk objectives, lattice-free methods achieve a 40% - 70% relative training time speedup with only a small degradation in performance.
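To make the lattice-free idea concrete, below is a minimal, illustrative sketch of a lattice-free MMI-style objective over framewise phoneme posteriors, assuming a phoneme bigram graph as the denominator; the function name, argument layout, and the bigram denominator are assumptions for illustration and not the paper's exact recipe.

```python
import torch

def lattice_free_mmi_loss(log_posteriors, ref_alignment, log_bigram_lm):
    """Illustrative lattice-free MMI sketch (hypothetical helper, not the paper's code).

    log_posteriors : (T, V) framewise log phoneme posteriors, e.g. the final
                     output of a time-synchronous phoneme transducer.
    ref_alignment  : (T,) reference phoneme label per frame (numerator path).
    log_bigram_lm  : (V, V) log phoneme bigram scores acting as the denominator
                     graph, with log_bigram_lm[u, v] = log p(v | u).
    """
    T, V = log_posteriors.shape

    # Numerator: log score of the reference alignment.
    num = log_posteriors[torch.arange(T), ref_alignment].sum()

    # Denominator: forward algorithm summing over all phoneme sequences
    # allowed by the bigram graph -- a full summation, no N-best decoding.
    alpha = log_posteriors[0].clone()
    for t in range(1, T):
        # alpha[v] = logsumexp_u(alpha[u] + lm[u, v]) + am[t, v]
        alpha = torch.logsumexp(alpha.unsqueeze(1) + log_bigram_lm, dim=0) \
                + log_posteriors[t]
    den = torch.logsumexp(alpha, dim=0)

    # MMI maximizes num - den; return the negation as a loss to minimize.
    return -(num - den)
```

Because the denominator sum is computed in closed form by the forward recursion, no hypotheses need to be decoded during training, which is the source of the reported training time speedup over N-best-list-based objectives.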