In this paper, we propose an open-source, production-first, and production-ready speech recognition toolkit called WeNet, in which a new two-pass approach is implemented to unify streaming and non-streaming end-to-end (E2E) speech recognition in a single model. The main motivation of WeNet is to close the gap between the research and the production of E2E speech recognition models. WeNet provides an efficient way to ship ASR applications in several real-world scenarios, which is its main difference from, and advantage over, other open-source E2E speech recognition toolkits. In our toolkit, a new two-pass method is implemented. Our method proposes a dynamic chunk-based attention strategy for the transformer layers that allows arbitrary right-context lengths in the hybrid CTC/attention architecture. The inference latency can be easily controlled simply by changing the chunk size. The CTC hypotheses are then rescored by the attention decoder to obtain the final result. Our experiments on the AISHELL-1 dataset using WeNet show that our model achieves a 5.03\% relative character error rate (CER) reduction in non-streaming ASR compared to a standard non-streaming transformer. After model quantization, our model achieves reasonable RTF and latency.
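To illustrate the chunk-based attention idea described above, the following is a minimal sketch (not WeNet's actual implementation; the function name and pure-Python representation are our own) of how an attention mask could restrict each frame to its own chunk and all preceding chunks, so that the visible right context, and hence latency, is bounded by the chunk size:

```python
def chunk_attention_mask(size: int, chunk_size: int) -> list[list[bool]]:
    """Build a boolean attention mask for chunk-based attention.

    Frame i may attend to frame j iff j lies in the same chunk as i or in
    an earlier chunk, so the right context never extends past the current
    chunk boundary. mask[i][j] is True where attention is allowed.
    (Illustrative sketch only; a real toolkit would build this as a tensor.)
    """
    mask = [[False] * size for _ in range(size)]
    for i in range(size):
        # index (exclusive) of the last frame visible from position i:
        # the end of the chunk that contains i
        limit = min(size, ((i // chunk_size) + 1) * chunk_size)
        for j in range(limit):
            mask[i][j] = True
    return mask

# With 6 frames and chunk size 2: frame 0 sees frames 0-1,
# frame 2 sees frames 0-3, and the last frame sees everything.
m = chunk_attention_mask(6, 2)
```

A larger chunk size exposes more right context (better accuracy, higher latency); a full-length chunk recovers non-streaming full attention, which is what lets one model serve both modes.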