Despite their impressive performance on diverse tasks, large language models (LMs) still struggle with tasks requiring rich world knowledge, implying the limitations of encoding a wealth of world knowledge solely in their parameters. This paper aims to understand LMs' strengths and limitations in memorizing factual knowledge, by conducting large-scale knowledge probing experiments with 10 models and 4 augmentation methods on PopQA, our new open-domain QA dataset with 14k questions. We find that LMs struggle with less popular factual knowledge, and that scaling fails to appreciably improve memorization of factual knowledge in the tail. We then show that retrieval-augmented LMs substantially outperform LMs that are orders of magnitude larger, while unassisted LMs remain competitive on questions about high-popularity entities. Based on these findings, we devise a simple yet effective method for powerful and efficient retrieval-augmented LMs, which retrieves non-parametric memories only when necessary. Experimental results show that this significantly improves models' performance while reducing inference costs.
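To make the adaptive retrieval idea concrete, the sketch below illustrates one plausible reading of "retrieves non-parametric memories only when necessary": consult a retriever only when the question's subject entity looks long-tail, and otherwise rely on the LM's parametric memory. This is a minimal illustration, not the authors' released implementation; `popularity`, `retrieve`, `generate`, and the `threshold` value are all hypothetical stand-ins (e.g., a popularity signal such as Wikipedia page views, a passage retriever, and an LM call).

```python
# Minimal sketch of popularity-gated (adaptive) retrieval augmentation.
# All callables and the threshold are assumptions for illustration only.
from typing import Callable, List


def answer(
    question: str,
    entity: str,
    popularity: Callable[[str], int],      # e.g., monthly Wikipedia page views (assumed signal)
    retrieve: Callable[[str], List[str]],  # returns supporting passages for the question
    generate: Callable[[str], str],        # runs the LM on a prompt and returns its answer
    threshold: int = 10_000,               # popularity cutoff; an assumed, tunable value
) -> str:
    """Answer with retrieval only when the subject entity looks long-tail."""
    if popularity(entity) < threshold:
        # Low-popularity entity: parametric memory is unreliable here,
        # so augment the prompt with retrieved passages.
        context = "\n".join(retrieve(question))
        prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    else:
        # High-popularity entity: rely on parametric knowledge alone,
        # saving the latency and cost of a retrieval call.
        prompt = f"Question: {question}\nAnswer:"
    return generate(prompt)
```

Under this reading, the gate is what yields both gains reported in the abstract: accuracy improves because retrieval is applied where parametric memory is weakest, and inference cost drops because retrieval is skipped for popular entities where the unassisted LM is already competitive.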