End-to-end (E2E) models have been shown to outperform state-of-the-art conventional models for streaming speech recognition [1] across many dimensions, including quality (as measured by word error rate (WER)) and endpointer latency [2]. However, the E2E model still tends to delay its predictions towards the end, and thus has much higher partial latency than a conventional ASR model. To address this issue, we encourage the E2E model to emit words early through an algorithm called FastEmit [3]. Naturally, improving latency results in a quality degradation. To address this, we explore replacing the LSTM layers in the encoder of our E2E model with Conformer layers [4], which have shown good improvements for ASR. Second, we also explore running a 2nd-pass beam search to improve quality. To ensure the 2nd pass completes quickly, we explore non-causal Conformer layers that feed into the same 1st-pass RNN-T decoder, an approach we call Cascaded Encoders. Overall, we find that the Conformer RNN-T with Cascaded Encoders offers a better quality and latency tradeoff for streaming ASR.
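To make the Cascaded Encoders dataflow concrete, the sketch below shows the structure described above: a causal encoder produces streaming features, optional non-causal layers refine those features with right context, and a single shared decoder serves both passes. This is a minimal illustration using NumPy stand-ins; all class names (CausalEncoder, NonCausalEncoder, SharedRNNTDecoder) and their internals are assumptions for exposition, not the paper's implementation.

```python
# Minimal sketch of the Cascaded Encoders dataflow (illustrative only).
import numpy as np

DIM = 8  # toy model dimension


class CausalEncoder:
    """Stand-in for the streaming (causal) Conformer encoder."""
    def __init__(self, rng):
        self.w = rng.standard_normal((DIM, DIM)) * 0.1

    def __call__(self, frames):
        # Each output frame depends only on current/past input frames.
        return np.tanh(frames @ self.w)


class NonCausalEncoder:
    """Stand-in for the additional non-causal Conformer layers: they may
    look at future (right) context, so they run once more audio is available."""
    def __init__(self, rng):
        self.w = rng.standard_normal((DIM, DIM)) * 0.1

    def __call__(self, causal_features):
        # Cheap stand-in for using right context: mix in the next frame.
        shifted = np.roll(causal_features, -1, axis=0)
        return np.tanh(0.5 * (causal_features + shifted) @ self.w)


class SharedRNNTDecoder:
    """Single decoder shared by both passes, as in the Cascaded Encoders setup."""
    def __init__(self, rng):
        self.w = rng.standard_normal((DIM, DIM)) * 0.1

    def __call__(self, encoder_features):
        # Returns dummy per-frame scores; a real RNN-T decoder would also
        # condition on previously emitted labels.
        return encoder_features @ self.w


rng = np.random.default_rng(0)
causal_enc = CausalEncoder(rng)
noncausal_enc = NonCausalEncoder(rng)
decoder = SharedRNNTDecoder(rng)

frames = rng.standard_normal((20, DIM))   # acoustic frames
causal_feats = causal_enc(frames)         # computed as audio streams in

# 1st pass: low-latency streaming hypotheses from the causal features.
first_pass_scores = decoder(causal_feats)

# 2nd pass: non-causal layers refine the same features, and the *same*
# decoder runs beam search over them for better quality.
second_pass_scores = decoder(noncausal_enc(causal_feats))
print(first_pass_scores.shape, second_pass_scores.shape)
```

Because the non-causal layers consume the already-computed causal features rather than re-encoding the audio, the 2nd pass adds relatively little work on top of the streaming 1st pass, which is what keeps it fast.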