Large language models have shown impressive few-shot results on a wide range of tasks. However, when knowledge is key for such results, as is the case for tasks such as question answering and fact checking, massive parameter counts seem to be needed to store that knowledge. Retrieval-augmented models are known to excel at knowledge-intensive tasks without requiring as many parameters, but it is unclear whether they work in few-shot settings. In this work we present Atlas, a carefully designed and pre-trained retrieval-augmented language model able to learn knowledge-intensive tasks with very few training examples. We perform evaluations on a wide range of tasks, including MMLU, KILT and Natural Questions, and study the impact of the content of the document index, showing that it can easily be updated. Notably, Atlas reaches over 42\% accuracy on Natural Questions using only 64 examples, outperforming a 540B-parameter model by 3\% despite having 50x fewer parameters.
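To make the retrieve-then-read pattern concrete, below is a minimal sketch of how a retrieval-augmented model separates knowledge (a document index) from parameters (a reader). The `embed`, `retrieve`, and `read` functions are toy placeholders of our own, not Atlas's actual Contriever retriever or T5 reader; the point is only that swapping the contents of `index` changes the model's knowledge without any retraining.

\begin{verbatim}
import numpy as np

# Toy retrieve-then-read loop. Real systems replace `embed` with a dense
# retriever and `read` with a seq2seq language model; these stubs are
# hypothetical stand-ins for illustration only.

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy embedding standing in for a dense retriever encoder."""
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def retrieve(query: str, index: list[str], k: int = 2) -> list[str]:
    """Score every indexed document against the query; keep the top k."""
    q = embed(query)
    scores = np.array([q @ embed(doc) for doc in index])
    return [index[i] for i in np.argsort(scores)[::-1][:k]]

def read(query: str, passages: list[str]) -> str:
    """Placeholder reader: a real system would condition a language
    model on the query plus the retrieved passages here."""
    context = " ".join(passages)
    return f"answer to {query!r} conditioned on: {context!r}"

# Knowledge lives in the index, not in model weights, so the index can
# be updated or replaced without retraining the reader.
index = [
    "Paris is the capital of France.",
    "The Atlas Mountains run through Morocco and Algeria.",
]
query = "What is the capital of France?"
print(read(query, retrieve(query, index)))
\end{verbatim}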