The transducer architecture is becoming increasingly popular in the field of speech recognition because it is naturally streaming and achieves high accuracy. One drawback of the transducer is that it is difficult to decode quickly and in parallel, since an unconstrained number of symbols can be emitted per time step. In this work, we introduce a constrained version of the transducer loss to learn strictly monotonic alignments between the sequences; we also improve the standard greedy search and beam search algorithms by limiting the number of symbols that can be emitted per time step during transducer decoding, making batched parallel decoding more efficient. Furthermore, we propose a finite-state automaton (FSA) based parallel beam search algorithm that can run efficiently with graphs on GPU. The experimental results show that we achieve a slight word error rate (WER) improvement as well as a significant speedup in decoding. Our work is open-sourced and publicly available\footnote{https://github.com/k2-fsa/icefall}.
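To illustrate the idea of capping emissions per frame, the following is a minimal sketch of transducer greedy search with a per-time-step symbol limit; it is not the paper's implementation, and the \texttt{decoder}, \texttt{joiner}, and \texttt{max\_sym\_per\_frame} names are illustrative placeholders rather than icefall or k2 APIs.

\begin{verbatim}
import numpy as np

def greedy_search(encoder_out, decoder, joiner,
                  blank_id=0, max_sym_per_frame=1):
    """Greedy transducer decoding for one utterance (sketch).

    encoder_out: (T, C) array of encoder frames.
    decoder:     maps the label history (list of ints) to a (C,) state.
    joiner:      maps (encoder_frame, decoder_state) to (V,) logits.
    """
    hyp = []
    dec_state = decoder(hyp)
    for t in range(encoder_out.shape[0]):
        emitted = 0
        # Cap the symbols emitted for this frame, so the number of
        # joiner calls per frame is bounded and batches stay in sync.
        while emitted < max_sym_per_frame:
            logits = joiner(encoder_out[t], dec_state)
            token = int(np.argmax(logits))
            if token == blank_id:
                break  # advance to the next frame
            hyp.append(token)
            dec_state = decoder(hyp)
            emitted += 1
    return hyp
\end{verbatim}

Because every utterance in a batch now performs at most \texttt{max\_sym\_per\_frame} joiner calls per frame, the decoding loop can be vectorized across the batch without per-utterance divergence.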