When Question-Answering (QA) systems are deployed in the real world, users query them through a variety of interfaces, such as speaking to voice assistants, typing questions into a search engine, or even translating questions to languages supported by the QA system. While there has been significant community attention devoted to identifying correct answers in passages assuming a perfectly formed question, we show that components in the pipeline that precede an answering engine can introduce varied and considerable sources of error, and performance can degrade substantially based on these upstream noise sources even for powerful pre-trained QA models. We conclude that there is substantial room for progress before QA systems can be effectively deployed, highlight the need for QA evaluation to expand to consider real-world use, and hope that our findings will spur greater community interest in the issues that arise when our systems actually need to be of utility to humans.