We propose RECITation-augmented gEneration (RECITE), a new paradigm that helps Large Language Models (LLMs) generate more accurate factual knowledge without retrieving from an external corpus. Unlike retrieval-augmented language models, which retrieve relevant documents before generating an output, RECITE first recites one or several relevant passages from the LLM's own memory via sampling, and then produces the final answer. We show that RECITE is a powerful paradigm for knowledge-intensive NLP tasks. Specifically, by using recitation as an intermediate step, a recite-and-answer scheme achieves new state-of-the-art performance on a variety of closed-book question answering (CBQA) tasks. In experiments, we verify the effectiveness of RECITE on four pre-trained models (PaLM, UL2, OPT, and Codex) and three CBQA tasks (Natural Questions, TriviaQA, and HotpotQA). Our code is available at https://github.com/Edward-Sun/RECITE.
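The recite-and-answer scheme described above can be sketched as a two-step prompting pipeline: sample several recitations from the model, condition an answer on each, and aggregate by majority vote. The sketch below is illustrative only; `sample_from_lm` is a hypothetical stub standing in for a real LLM sampling API (e.g., a call to PaLM or Codex), and the prompt templates are assumptions, not the paper's exact prompts.

```python
from collections import Counter

def sample_from_lm(prompt, n=1):
    # Hypothetical stub standing in for an LLM sampling API.
    # A real implementation would sample n completions from a model.
    canned_passage = ("The Eiffel Tower, completed in 1889, "
                      "is located in Paris, France.")
    if prompt.startswith("Recite"):
        return [canned_passage] * n
    return ["Paris"] * n

def recite_and_answer(question, num_recitations=3):
    """Recite-and-answer sketch: first sample passages from the model's
    own memory, then condition the final answer on each recitation and
    aggregate the sampled answers by majority vote."""
    # Step 1: recite relevant passages from the model's memory.
    recitations = sample_from_lm(
        f"Recite a passage relevant to: {question}", n=num_recitations)
    # Step 2: answer conditioned on each recited passage.
    answers = []
    for passage in recitations:
        prompt = f"Passage: {passage}\nQuestion: {question}\nAnswer:"
        answers.append(sample_from_lm(prompt, n=1)[0])
    # Aggregate: majority vote over the sampled answers.
    return Counter(answers).most_common(1)[0][0]

print(recite_and_answer("In which city is the Eiffel Tower?"))
```

Sampling multiple recitations and voting makes the final answer robust to any single low-quality recitation, analogous to self-consistency decoding.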