Streaming automatic speech recognition (ASR) aims to emit each hypothesized word as quickly and accurately as possible. However, emitting words quickly without degrading quality, as measured by word error rate (WER), is highly challenging. Existing approaches, including Early and Late Penalties and Constrained Alignments, penalize emission delay by manipulating per-token or per-frame probability predictions in sequence transducer models. While successful in reducing delay, these approaches suffer from significant accuracy regression and also require additional word alignment information from an existing model. In this work, we propose a sequence-level emission regularization method, named FastEmit, that applies latency regularization directly on the per-sequence probability when training transducer models and does not require any alignment. We demonstrate that FastEmit is better suited to the sequence-level optimization of transducer models for streaming ASR by applying it to various end-to-end streaming ASR networks, including RNN-Transducer, Transformer-Transducer, ConvNet-Transducer and Conformer-Transducer. We achieve a 150-300 ms latency reduction with significantly better accuracy than previous techniques on a Voice Search test set. FastEmit also improves streaming ASR accuracy from 4.4%/8.9% to 3.1%/7.5% WER while reducing 90th-percentile latency from 210 ms to only 30 ms on LibriSpeech.
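To make the gradient-level view of such an emission regularizer concrete, the snippet below gives a minimal, hypothetical sketch rather than the paper's reference implementation. It assumes the transducer loss gradients with respect to the per-node label/blank predictions are already available as a tensor over the (T, U) lattice with the vocabulary in the last dimension, and it shows one common way a FastEmit-style regularizer is realized: scale the label-prediction gradients by (1 + lam) while leaving the blank gradients unchanged. The tensor layout, the blank index convention, and the function name fastemit_regularize are assumptions for illustration.

```python
import torch


def fastemit_regularize(node_grads: torch.Tensor, blank: int, lam: float) -> torch.Tensor:
    """Sketch of a FastEmit-style gradient modification (assumed layout).

    node_grads: transducer loss gradients at each lattice node,
                shape (T, U, V) with the blank symbol at index `blank`.
    lam:        regularization weight; larger values push the model to
                emit label tokens earlier instead of blanks.
    """
    scaled = node_grads * (1.0 + lam)            # boost every vocabulary entry ...
    scaled[..., blank] = node_grads[..., blank]  # ... then restore the blank column
    return scaled


if __name__ == "__main__":
    T, U, V = 4, 3, 6                  # toy lattice: 4 frames, 3 label steps, 5 labels + blank
    g = torch.randn(T, U, V)           # stand-in for per-node transducer gradients
    g_fast = fastemit_regularize(g, blank=0, lam=0.01)
    assert torch.allclose(g_fast[..., 0], g[..., 0])            # blank gradients untouched
    assert torch.allclose(g_fast[..., 1:], 1.01 * g[..., 1:])   # label gradients scaled by (1 + lam)
```

In this sketch, lam controls the strength of the latency penalty: larger values favor emitting labels over blanks earlier in the lattice, at some risk to accuracy, which mirrors the latency/WER trade-off described in the abstract.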