Interacting with a speech interface to query a Question Answering (QA) system is becoming increasingly popular. Typically, QA systems rely on passage retrieval to select candidate contexts and on reading comprehension to extract the final answer. While some attention has been paid to making the reading comprehension component of QA systems robust to the errors that automatic speech recognition (ASR) models introduce, the passage retrieval component remains unexplored. Yet such errors can degrade passage retrieval performance, leading to inferior end-to-end performance. To address this gap, we augment two existing large-scale passage ranking and open-domain QA datasets with synthetic ASR noise and study the robustness of lexical and dense retrievers against questions with ASR noise. Furthermore, we study the generalizability of data augmentation techniques across domains, where each domain is a different dialect or accent. Finally, we create a new dataset of questions voiced by human users and use their transcriptions to show that retrieval performance degrades further under natural ASR noise than under synthetic ASR noise.
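As a rough illustration of the kind of synthetic ASR noise such augmentation injects, the sketch below corrupts written questions with word-level homophone substitutions and deletions at a configurable word error rate. This is a minimal sketch under stated assumptions: the confusion table, error rate, and function name are illustrative, not the authors' exact procedure, which could instead rely on, e.g., a TTS-to-ASR round trip or a phonetic perturbation model.

```python
import random

# Illustrative homophone/near-phone confusions (assumed for this sketch);
# a real pipeline would derive substitutions from actual ASR behavior.
CONFUSIONS = {
    "their": ["there", "they're"],
    "two": ["to", "too"],
    "won": ["one"],
    "right": ["write"],
}

def add_asr_noise(question: str, word_error_rate: float = 0.15, seed=None) -> str:
    """Corrupt a written question with ASR-style word errors.

    Each word is independently hit with probability `word_error_rate`;
    a hit becomes a homophone substitution when one is known, otherwise
    a deletion (substitutions and deletions are common ASR error types).
    """
    rng = random.Random(seed)
    noisy = []
    for word in question.split():
        if rng.random() < word_error_rate:
            key = word.lower()
            if key in CONFUSIONS:
                noisy.append(rng.choice(CONFUSIONS[key]))
            # else: drop the word entirely (simulated deletion)
        else:
            noisy.append(word)
    return " ".join(noisy)

print(add_asr_noise("who won the right to host the two games", seed=0))
```

Applying such a perturbation to every question in a passage ranking dataset yields a noisy counterpart on which retriever robustness can be measured against the clean originals.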