Language models have recently achieved strong performance across a wide range of NLP benchmarks. However, unlike benchmarks, real-world tasks are often poorly specified, and agents must deduce the user's intended behavior from a combination of context, instructions, and examples. We investigate how both humans and models behave in the face of such task ambiguity by proposing AmbiBench, a new benchmark of six ambiguously specified classification tasks. We evaluate humans and models on AmbiBench by measuring how well they identify the intended task using 1) instructions with varying degrees of ambiguity, and 2) different numbers of labeled examples. We find that the combination of model scaling (to 175B parameters) and training with human feedback data enables models to approach or exceed the accuracy of human participants across tasks, but that either one alone is not sufficient. In addition, we show how to dramatically improve the accuracy of language models trained without large-scale human feedback by finetuning on a small number of ambiguous in-context examples, providing a promising direction for teaching models to generalize well in the face of ambiguity.
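To make the evaluation setup concrete, here is a minimal sketch, in Python, of how an AmbiBench-style item might be assembled: an instruction whose specificity varies, a configurable number of labeled in-context examples, and an unlabeled query the model must classify. The task, feature names, and prompt template below are hypothetical illustrations, not the paper's exact format.

```python
from dataclasses import dataclass


@dataclass
class Example:
    sentence: str
    label: str  # e.g. "X" or "Y"


def build_prompt(instruction: str, shots: list[Example], query: str) -> str:
    """Concatenate an instruction, labeled examples, and an unlabeled query."""
    lines = [instruction, ""]
    for ex in shots:
        lines.append(f"Sentence: {ex.sentence}")
        lines.append(f"Label: {ex.label}")
    lines.append(f"Sentence: {query}")
    lines.append("Label:")
    return "\n".join(lines)


# An ambiguous instruction leaves the salient feature unstated, while a
# disambiguated one names it explicitly; the labeled examples may or may
# not be enough to resolve which feature the user intends.
ambiguous = "Label each sentence as X or Y."
disambiguated = "Label the sentence X if it mentions an animal, otherwise Y."

shots = [
    Example("The dog slept in the park.", "X"),
    Example("The chair stood in the hallway.", "Y"),
]

print(build_prompt(ambiguous, shots, "A bird landed on the fence."))
```

Varying the instruction (ambiguous vs. disambiguated) and the number of `shots` yields the two axes along which the abstract describes evaluating humans and models.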