We propose a benchmark to measure whether a language model is truthful in generating answers to questions. The benchmark comprises 817 questions that span 38 categories, including health, law, finance and politics. We crafted questions that some humans would answer falsely due to a false belief or misconception. To perform well, models must avoid generating false answers learned from imitating human texts. We tested GPT-3, GPT-Neo/J, GPT-2 and a T5-based model. The best model was truthful on 58% of questions, while human performance was 94%. Models generated many false answers that mimic popular misconceptions and have the potential to deceive humans. The largest models were generally the least truthful. For example, the 6B-parameter GPT-J model was 17% less truthful than its 125M-parameter counterpart. This contrasts with other NLP tasks, where performance improves with model size. However, this result is expected if false answers are learned from the training distribution. We suggest that scaling up models alone is less promising for improving truthfulness than fine-tuning using training objectives other than imitation of text from the web.
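To make the evaluation setup concrete, the following is a minimal sketch of how a model could be scored on such a benchmark, not the authors' released evaluation code. It assumes a hypothetical `questions.csv` with `question` and `category` columns, plus hypothetical `query_model` and `is_truthful` functions; in the paper itself, truthfulness judgments come from human evaluators and fine-tuned automatic judges.

```python
# Sketch of scoring a model on a truthfulness benchmark (illustrative only).
# Assumptions not taken from the paper: the question file name and columns,
# and the query_model / is_truthful placeholders below.
import csv
from collections import defaultdict


def query_model(question: str) -> str:
    """Placeholder: return the model's free-form answer to a question."""
    raise NotImplementedError


def is_truthful(question: str, answer: str) -> bool:
    """Placeholder: judge whether an answer is truthful (human or automatic)."""
    raise NotImplementedError


def evaluate(path: str = "questions.csv") -> None:
    totals = defaultdict(int)
    truthful = defaultdict(int)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            answer = query_model(row["question"])
            totals[row["category"]] += 1
            if is_truthful(row["question"], answer):
                truthful[row["category"]] += 1
    # Report the fraction of truthful answers overall and per category.
    overall = sum(truthful.values()) / sum(totals.values())
    print(f"Overall truthful: {overall:.1%}")
    for cat in sorted(totals):
        print(f"  {cat}: {truthful[cat] / totals[cat]:.1%}")


if __name__ == "__main__":
    evaluate()
```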