Recent work on transformer-based neural networks has led to impressive advances on multiple-choice natural language understanding (NLU) problems, such as Question Answering (QA) and abductive reasoning. Despite these advances, there is still limited work on whether these models respond to perturbed multiple-choice instances robustly enough to be trusted in real-world situations. We present four confusion probes, inspired by similar phenomena first identified in the behavioral science community, to test for problems such as prior bias and choice paralysis. Experimentally, we probe a widely used transformer-based multiple-choice NLU system using four established benchmark datasets. Here we show that the model exhibits significant prior bias and, to a lesser but still highly significant degree, choice paralysis, in addition to other problems. Our results suggest that stronger testing protocols and additional benchmarks may be necessary before such language models are used in front-facing systems or in decision making with real-world consequences.