Streaming automatic speech recognition (ASR) models are popular and well suited to voice-based applications. Non-streaming models, however, provide better accuracy because they attend to the entire audio context. To leverage the benefits of a non-streaming model in streaming applications such as voice search, it is commonly used in a second-pass re-scoring mode: the candidate hypotheses generated by the streaming model are re-scored with the non-streaming model. In this work, we evaluate non-streaming attention-based end-to-end ASR models on the Flipkart voice search task in both standalone and re-scoring modes. These models are based on the Listen-Attend-Spell (LAS) encoder-decoder architecture. We experiment with different encoder variants based on LSTM, Transformer, and Conformer layers, and compare their latency requirements along with their performance. Overall, we show that the Transformer model offers an acceptable word error rate (WER) with the lowest latency requirements. We report a relative WER improvement of around 16% with second-pass LAS re-scoring at a latency overhead of under 5 ms. We also highlight the importance of a CNN front-end with the Transformer architecture to achieve comparable WER. Moreover, we observe that in the second-pass re-scoring mode all the encoders provide similar benefits, whereas the difference in performance is prominent in the standalone text-generation mode.
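As a rough illustration of the second-pass setup described above, the sketch below re-ranks a streaming model's n-best hypotheses by interpolating the first-pass score with a non-streaming LAS score. The `las_model.score` call and the interpolation weight `lam` are hypothetical placeholders, not the paper's actual implementation.

```python
from dataclasses import dataclass


@dataclass
class Hypothesis:
    text: str
    first_pass_score: float  # log-probability from the streaming first-pass model


def las_log_likelihood(las_model, audio_features, text):
    """Score a candidate transcript with the non-streaming LAS decoder,
    i.e., its teacher-forced log-likelihood over the full audio context.
    `las_model.score` is a hypothetical API used only for illustration."""
    return las_model.score(audio_features, text)


def rescore(hypotheses, las_model, audio_features, lam=0.5):
    """Re-rank first-pass n-best hypotheses by linearly interpolating the
    streaming score with the second-pass LAS score (weight `lam` assumed)."""
    rescored = []
    for hyp in hypotheses:
        second_pass_score = las_log_likelihood(las_model, audio_features, hyp.text)
        combined = (1.0 - lam) * hyp.first_pass_score + lam * second_pass_score
        rescored.append((combined, hyp))
    # Higher combined log-score is better; return hypotheses best-first.
    rescored.sort(key=lambda pair: pair[0], reverse=True)
    return [hyp for _, hyp in rescored]
```

Because the second pass only scores a small n-best list rather than decoding from scratch, its latency overhead stays small, which is consistent with the sub-5 ms overhead reported above.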