Measuring automatic speech recognition (ASR) system quality is critical for creating voice-driven applications that satisfy users. Word Error Rate (WER) has traditionally been used to evaluate ASR quality; however, it sometimes correlates poorly with users' perception of transcription quality. This is because WER weighs every word equally and does not account for semantic correctness, which has a greater impact on user perception. In this work, we propose evaluating the quality of ASR output hypotheses with SemDist, which measures semantic correctness as the distance between semantic vectors of the reference and the hypothesis, extracted from a pre-trained language model. Our experimental results on 71K and 36K user-annotated ASR output quality samples show that SemDist correlates better with user perception than WER. We also show that SemDist correlates better with downstream Natural Language Understanding (NLU) tasks than WER.
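As a concrete illustration of the metric described above, the sketch below computes a SemDist-style score as one minus the cosine similarity between sentence embeddings of the reference and the hypothesis. The encoder choice (a MiniLM sentence-transformer) and the helper name `semdist` are illustrative assumptions, not the paper's actual pre-trained language model or exact distance formulation.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Assumed encoder for illustration; the paper's pre-trained LM may differ.
model = SentenceTransformer("all-MiniLM-L6-v2")

def semdist(reference: str, hypothesis: str) -> float:
    """SemDist-style score: 1 - cosine similarity of sentence embeddings."""
    ref_vec, hyp_vec = model.encode([reference, hypothesis])
    cos_sim = np.dot(ref_vec, hyp_vec) / (
        np.linalg.norm(ref_vec) * np.linalg.norm(hyp_vec)
    )
    return 1.0 - float(cos_sim)

# Two hypotheses with the same WER can receive very different SemDist scores
# depending on how much the error changes the meaning.
print(semdist("play the next song", "play the next son"))  # meaning-changing error
print(semdist("play the next song", "play next song"))     # meaning-preserving error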