By predicting all target tokens in parallel, non-autoregressive models greatly improve the decoding efficiency of speech recognition compared with traditional autoregressive models. In this work, we present dynamic alignment Mask CTC, introducing two methods: (1) Aligned Cross Entropy (AXE), which finds the monotonic alignment that minimizes the cross-entropy loss through dynamic programming, and (2) Dynamic Rectification, which creates new training samples by replacing some masks with model-predicted tokens. AXE ignores the absolute position alignment between the prediction and the ground-truth sentence and instead focuses on matching tokens in relative order. The dynamic rectification method enables the model to simulate unmasked but possibly erroneous tokens, even those predicted with high confidence. Our experiments on the WSJ dataset demonstrate that both the AXE loss and the rectification method improve the WER performance of Mask CTC.
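The AXE alignment described above can be illustrated with a minimal dynamic-programming sketch. This is a simplified, hedged reconstruction rather than the paper's implementation: it assumes three standard moves (align a target token to a prediction, skip a prediction by scoring a special epsilon token, and skip a target by reusing the current prediction), and the function name `axe_loss`, the `eps_idx` parameter, and the list-of-lists log-probability layout are illustrative choices, not from the source.

```python
def axe_loss(log_probs, target, eps_idx):
    """Minimal AXE-style DP sketch (illustrative, not the paper's code).

    log_probs: per prediction position, a list mapping token id -> log-prob.
    target:    list of target token ids.
    eps_idx:   id of the special epsilon token used when skipping a prediction.
    Returns the minimal aligned cross-entropy over monotonic alignments.
    """
    n, m = len(target), len(log_probs)
    INF = float("inf")
    # a[i][j]: min cost of aligning the first i target tokens
    # with the first j prediction positions.
    a = [[INF] * (m + 1) for _ in range(n + 1)]
    a[0][0] = 0.0
    # First row: every prediction so far is skipped (aligned to epsilon).
    for j in range(1, m + 1):
        a[0][j] = a[0][j - 1] - log_probs[j - 1][eps_idx]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            y = target[i - 1]
            a[i][j] = min(
                a[i - 1][j - 1] - log_probs[j - 1][y],    # align
                a[i][j - 1] - log_probs[j - 1][eps_idx],  # skip prediction
                a[i - 1][j] - log_probs[j - 1][y],        # skip target
            )
    return a[n][m]
```

Because the recurrence only moves forward through both sequences, the recovered alignment is monotonic, matching the relative-order token matching described in the abstract; a correct prediction shifted by one position is charged a small epsilon cost instead of two full cross-entropy misses.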