Pretrained language models (LMs) are susceptible to generating text with nonfactual information. In this work, we measure and improve the factual accuracy of large-scale LMs for open-ended text generation. We design the FactualityPrompts test set and metrics to measure the factuality of LM generations. Using this benchmark, we study the factual accuracy of LMs with parameter sizes ranging from 126M to 530B. Interestingly, we find that larger LMs are more factual than smaller ones, although a previous study suggested that larger LMs can be less truthful with respect to misconceptions. In addition, popular sampling algorithms in open-ended text generation (e.g., top-p) can harm factuality due to the "uniform randomness" introduced at every sampling step. We propose the factual-nucleus sampling algorithm, which dynamically adapts the randomness to improve the factuality of generation while maintaining quality. Furthermore, we analyze the inefficiencies of the standard training method in learning correct associations between entities from factual text corpora (e.g., Wikipedia). We propose a factuality-enhanced training method that uses TopicPrefix for better awareness of facts and sentence completion as the training objective, which can vastly reduce factual errors. We release our code and the FactualityPrompts benchmark at: https://github.com/nayeon7lee/FactualityPrompt.
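The abstract only says that factual-nucleus sampling "dynamically adapts the randomness" of top-p decoding. Below is a minimal sketch of one plausible instantiation: the nucleus probability decays geometrically for tokens later in a sentence, with a lower bound, and resets at sentence boundaries. The decay schedule p_t = max(lam**t * p, omega), the hyperparameter names (p, lam, omega), and the helper factual_nucleus_filter are illustrative assumptions, not details taken from the abstract itself.

```python
import numpy as np

SENTENCE_END_ID = 42  # hypothetical token id marking the end of a sentence


def factual_nucleus_filter(probs, step_in_sentence, p=0.9, lam=0.9, omega=0.3):
    """Truncate a next-token distribution to a dynamically shrinking nucleus.

    The nucleus mass p_t decays with the token's position inside the current
    sentence (assumed schedule: p_t = max(lam**step * p, omega)), so tokens
    later in a sentence are sampled with less randomness.
    """
    p_t = max(lam ** step_in_sentence * p, omega)
    order = np.argsort(probs)[::-1]          # token ids by descending probability
    cum = np.cumsum(probs[order])
    cutoff = np.searchsorted(cum, p_t) + 1   # smallest prefix covering mass p_t
    mask = np.zeros_like(probs)
    mask[order[:cutoff]] = probs[order[:cutoff]]
    return mask / mask.sum()                 # renormalize over the nucleus


# Usage sketch: sample tokens, resetting the decay whenever a sentence ends.
rng = np.random.default_rng(0)
vocab_probs = rng.dirichlet(np.ones(50))     # stand-in for an LM's output distribution
step = 0
for _ in range(10):
    filtered = factual_nucleus_filter(vocab_probs, step)
    token = rng.choice(len(filtered), p=filtered)
    step = 0 if token == SENTENCE_END_ID else step + 1
```

The per-sentence reset is the key design choice this sketch illustrates: the start of a sentence can tolerate high randomness (many continuations are acceptable), while later tokens must stay consistent with the entities already committed to, so shrinking the nucleus there reduces factual errors without making the whole generation greedy.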