In this paper, we present a novel two-pass approach to unify streaming and non-streaming end-to-end (E2E) speech recognition in a single model. Our model adopts the hybrid CTC/attention architecture, in which the conformer layers in the encoder are modified. We propose a dynamic chunk-based attention strategy to allow arbitrary right-context lengths. At inference time, the CTC decoder generates n-best hypotheses in a streaming fashion, and the inference latency can be controlled simply by changing the chunk size. The CTC hypotheses are then rescored by the attention decoder to obtain the final result. This efficient rescoring process introduces very little sentence-level latency. Our experiments on the open 170-hour AISHELL-1 dataset show that the proposed method unifies the streaming and non-streaming models simply and efficiently. On the AISHELL-1 test set, our unified model achieves a 5.60% relative character error rate (CER) reduction in non-streaming ASR compared with a standard non-streaming transformer. The same model achieves 5.42% CER with 640 ms latency in a streaming ASR system.
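The dynamic chunk-based attention described above can be pictured as a block-triangular self-attention mask: each frame attends to all earlier chunks plus its own chunk, and the chunk size is re-sampled per training batch so a single set of weights covers every latency setting. The following is a minimal PyTorch sketch of that idea, not the paper's implementation; `chunk_attention_mask`, `sample_chunk_size`, and the sampling constants (0.5 full-context probability, maximum chunk of 25) are illustrative assumptions.

```python
import random

import torch


def chunk_attention_mask(seq_len: int, chunk_size: int) -> torch.Tensor:
    """Build a (seq_len, seq_len) boolean self-attention mask where frame i
    may attend to every frame up to the end of its own chunk."""
    chunk_idx = torch.arange(seq_len) // chunk_size   # chunk id of each frame
    chunk_end = (chunk_idx + 1) * chunk_size          # exclusive end of that chunk
    cols = torch.arange(seq_len).unsqueeze(0)         # (1, seq_len) column indices
    return cols < chunk_end.unsqueeze(1)              # broadcast to (seq_len, seq_len)


def sample_chunk_size(seq_len: int,
                      full_context_prob: float = 0.5,
                      max_chunk: int = 25) -> int:
    """Per-batch dynamic chunk sampling: sometimes train with full context
    (non-streaming mode), otherwise with a random chunk size, so one model
    serves all latencies. Constants here are illustrative assumptions."""
    if random.random() < full_context_prob:
        return seq_len
    return random.randint(1, max_chunk)
```

For example, `chunk_attention_mask(6, 2)` lets frame 2 see frames 0 through 3 (its own chunk plus the previous one); at inference, choosing a smaller chunk size trades accuracy for lower latency without retraining.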
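The second pass can likewise be summarized in a few lines: a streaming CTC prefix beam search yields n-best hypotheses with their CTC scores, and the attention decoder scores each hypothesis in a single teacher-forced pass, which is why the added sentence-level latency is small. This sketch assumes an `attention_score` callable and an interpolation weight `ctc_weight`; both names are ours, following the usual hybrid CTC/attention rescoring recipe rather than the paper's exact code.

```python
from typing import Callable, List, Sequence, Tuple


def attention_rescore(
    nbest: List[Tuple[Sequence[int], float]],           # (token ids, CTC log-prob) pairs
    attention_score: Callable[[Sequence[int]], float],  # assumed decoder log-prob interface
    ctc_weight: float = 0.5,
) -> Sequence[int]:
    """Re-rank the streaming CTC n-best with the attention decoder:
    final score = attention log-prob + ctc_weight * CTC log-prob."""
    best_hyp, best_score = None, float("-inf")
    for tokens, ctc_logp in nbest:
        score = attention_score(tokens) + ctc_weight * ctc_logp
        if score > best_score:
            best_hyp, best_score = tokens, score
    return best_hyp
```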