The use of language-model-based question-answering systems to aid humans in completing difficult tasks is limited, in part, by the unreliability of the text these systems generate. Using hard multiple-choice reading comprehension questions as a testbed, we assess whether presenting humans with arguments for two competing answer options, where one is correct and the other is incorrect, allows human judges to perform more accurately, even when one of the arguments is unreliable and deceptive. If this is helpful, we may be able to increase our justified trust in language-model-based systems by asking them to produce these arguments where needed. Previous research has shown that just a single turn of arguments in this format is not helpful to humans. However, as debate settings are characterized by a back-and-forth dialogue, we follow up on previous results to test whether adding a second round of counter-arguments is helpful to humans. We find that, regardless of whether they have access to arguments or not, humans perform similarly on our task. These findings suggest that, in the case of answering reading comprehension questions, debate is not a helpful format.