Word Error Rate (WER) has been the predominant metric used to evaluate the performance of automatic speech recognition (ASR) systems. However, WER is often a poor indicator of performance on downstream Natural Language Understanding (NLU) tasks, such as intent recognition, slot filling, and semantic parsing in task-oriented dialog systems. This is because WER accounts only for literal correctness rather than semantic correctness, and the latter is typically more important for these downstream tasks. In this study, we propose a novel Semantic Distance (SemDist) measure as an alternative evaluation metric for ASR systems to address this issue. We define SemDist as the distance between a reference and hypothesis pair in a sentence-level embedding space. To represent the reference and hypothesis as sentence embeddings, we exploit RoBERTa, a state-of-the-art pre-trained deep contextualized language model based on the transformer architecture. We demonstrate the effectiveness of our proposed metric on various downstream tasks, including intent recognition, semantic parsing, and named entity recognition.
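The abstract describes SemDist as the distance between a reference/hypothesis pair in a RoBERTa-based sentence-embedding space. The following is a minimal sketch of that idea, not the authors' implementation: mean pooling of final-layer token embeddings and cosine distance are illustrative assumptions, since the abstract does not specify the aggregation strategy or the distance function.

```python
# Sketch of a SemDist-style metric: embed the reference and hypothesis with
# RoBERTa, then measure the distance between the two sentence vectors.
# The pooling and distance choices below are assumptions for illustration.
import torch
from transformers import RobertaTokenizer, RobertaModel

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaModel.from_pretrained("roberta-base")
model.eval()

def sentence_embedding(text: str) -> torch.Tensor:
    """Encode a sentence into one vector by mean-pooling RoBERTa's
    final-layer token embeddings (an assumed aggregation strategy)."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # outputs.last_hidden_state has shape (1, seq_len, hidden_size)
    return outputs.last_hidden_state.mean(dim=1).squeeze(0)

def sem_dist(reference: str, hypothesis: str) -> float:
    """SemDist sketch: cosine distance between the two sentence embeddings."""
    ref_emb = sentence_embedding(reference)
    hyp_emb = sentence_embedding(hypothesis)
    cos_sim = torch.nn.functional.cosine_similarity(ref_emb, hyp_emb, dim=0)
    return 1.0 - cos_sim.item()

# A hypothesis with only a literal error but preserved meaning should yield
# a smaller SemDist than one that changes the intent, even at similar WER.
print(sem_dist("set an alarm for seven a m", "set an alarm for seven a.m."))
print(sem_dist("set an alarm for seven a m", "set an alarm for eleven p m"))
```

Under this sketch, two transcripts that differ only in surface form (punctuation, spelled-out numbers) land close together in embedding space, while a substitution that alters the user's intent moves the hypothesis away from the reference, which is the behavior the abstract argues WER fails to capture.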