Error correction in automatic speech recognition (ASR) aims to correct incorrect words in the sentences generated by ASR models. Since recent ASR models usually achieve low word error rates (WER), error correction models should modify only the incorrect words so as to avoid affecting originally correct tokens; detecting incorrect words is therefore important for error correction. Previous works on error correction either implicitly detect error words through target-source attention or CTC (connectionist temporal classification) loss, or explicitly locate specific deletion/substitution/insertion errors. However, implicit error detection does not provide a clear signal about which tokens are incorrect, and explicit error detection suffers from low detection accuracy. In this paper, we propose SoftCorrect with a soft error detection mechanism that avoids the limitations of both explicit and implicit error detection. Specifically, we first detect whether a token is correct or not through a probability produced by a dedicatedly designed language model, and then design a constrained CTC loss that duplicates only the detected incorrect tokens, letting the decoder focus on correcting error tokens. Compared with implicit error detection with CTC loss, SoftCorrect provides an explicit signal about which words are incorrect and thus does not need to duplicate every token but only the incorrect ones; compared with explicit error detection, SoftCorrect does not detect specific deletion/substitution/insertion errors but simply leaves them to the CTC loss. Experiments on the AISHELL-1 and Aidatatang datasets show that SoftCorrect achieves 26.1% and 9.4% CER reduction respectively, outperforming previous works by a large margin while still enjoying the fast speed of parallel generation.
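To make the core idea concrete, here is a minimal sketch of the two-step pipeline described above: an error-detection pass that flags tokens whose correctness probability (from the language model) is low, followed by a decoder-input construction step that duplicates only the flagged tokens so a CTC-style decoder has room to substitute, insert, or delete at those positions. All function names, the threshold, and the duplication factor are illustrative assumptions, not the exact formulation in the paper.

```python
# Hedged sketch of soft error detection + selective token duplication.
# Names, threshold, and duplication factor are assumptions for illustration.

def detect_errors(token_probs, threshold=0.5):
    """Flag tokens whose LM-assigned probability of being correct
    falls below a threshold (soft detection via probability)."""
    return [p < threshold for p in token_probs]

def duplicate_incorrect(tokens, error_flags, n_dup=2):
    """Build the decoder input: keep detected-correct tokens as-is and
    duplicate only detected-incorrect tokens, so the CTC decoder can
    correct errors there without touching correct positions."""
    out = []
    for tok, is_err in zip(tokens, error_flags):
        out.extend([tok] * (n_dup if is_err else 1))
    return out

# ASR hypothesis with two wrong characters ("背京" should be "北京")
tokens = ["我", "爱", "背", "京"]
probs = [0.95, 0.90, 0.30, 0.40]   # correctness probabilities from the LM
flags = detect_errors(probs)        # [False, False, True, True]
decoder_input = duplicate_incorrect(tokens, flags)
# decoder_input == ["我", "爱", "背", "背", "京", "京"]
```

Note the contrast with plain CTC-based correction, which would duplicate every token: here only the two low-probability positions are expanded, so correct tokens are passed through unchanged.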