With the wide adoption of automated speech recognition (ASR) systems, it is increasingly important to test and improve them. However, collecting and executing speech test cases is usually expensive and time-consuming, which motivates us to strategically prioritize speech test cases. A key question is: how do we determine the ideal order for collecting and executing speech test cases so that more errors are uncovered as early as possible? Each speech test case consists of a piece of audio and the corresponding reference text. In this work, we propose PROPHET (PRiOritizing sPeecH tEsT), a tool that predicts potentially error-uncovering speech test cases based only on their reference texts. PROPHET therefore analyzes and prioritizes test cases without running the ASR system, enabling it to analyze speech test cases at a large scale. We evaluate 6 different prioritization methods on 3 ASR systems and 12 datasets. Given the same testing budget, our approach uncovers 12.63% more wrongly recognized words than the state-of-the-art method. We select test cases from the prioritized list to fine-tune ASR systems and analyze how our approach can improve ASR system performance. Statistical tests show that our proposed method brings significantly larger performance improvements to ASR systems than the existing baseline methods. Furthermore, a correlation analysis confirms that fine-tuning an ASR system on a dataset on which it performs worse tends to improve its performance more.
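The prioritization workflow described above can be illustrated with a minimal sketch, assuming a hypothetical scorer `predict_error_score` that estimates error-proneness from a reference text alone; this is not the PROPHET implementation, only an illustration of reference-text-based prioritization without running the ASR system:

```python
# Illustrative sketch (not the PROPHET implementation): rank speech test
# cases by a predicted error-proneness score computed from reference texts
# alone, so the ASR system never has to be run during prioritization.
# `predict_error_score` is a hypothetical placeholder for the predictor.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class SpeechTestCase:
    audio_path: str      # path to the audio clip
    reference_text: str  # ground-truth transcript


def prioritize(test_cases: List[SpeechTestCase],
               predict_error_score: Callable[[str], float]) -> List[SpeechTestCase]:
    """Order test cases so those predicted to uncover more
    recognition errors are collected and executed first."""
    return sorted(test_cases,
                  key=lambda tc: predict_error_score(tc.reference_text),
                  reverse=True)


if __name__ == "__main__":
    # Toy scorer for demonstration only: treat longer references with
    # rarer (longer) words as more likely to trigger recognition errors.
    def toy_score(text: str) -> float:
        words = text.split()
        return len(words) + sum(len(w) > 8 for w in words)

    cases = [
        SpeechTestCase("a.wav", "turn on the light"),
        SpeechTestCase("b.wav", "schedule an otolaryngology appointment tomorrow"),
    ]
    for tc in prioritize(cases, toy_score):
        print(tc.audio_path, "->", tc.reference_text)
```

Under a fixed testing budget, only the top of this ordered list would be executed (or used for fine-tuning), which is the setting evaluated in the paper.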