Retrieval-augmented language models (LMs) use non-parametric memory to substantially outperform their non-retrieval counterparts on perplexity-based evaluations, but it is an open question whether they achieve similar gains in few- and zero-shot end-task accuracy. We extensively study one such model, the k-nearest neighbor LM (kNN-LM), showing that the gains marginally transfer. The main challenge is to achieve coverage of the verbalizer tokens that define the different end-task class labels. To address this challenge, we also introduce kNN-Prompt, a simple and effective kNN-LM with automatically expanded fuzzy verbalizers (e.g., expanding "terrible" to also include "silly" and other task-specific synonyms for sentiment classification). Across nine diverse end-tasks, using kNN-Prompt with GPT-2 large yields significant performance boosts over strong zero-shot baselines (13.4% absolute improvement over the base LM on average). We also show that other advantages of non-parametric augmentation hold for end tasks: kNN-Prompt is effective for domain adaptation with no further training, and gains increase with the size of the retrieval model.
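The following is a minimal sketch, not the authors' released implementation, of the two ideas named above: interpolating the base LM's next-token distribution with a retrieval (kNN) distribution, and scoring each class label by the probability mass its expanded fuzzy-verbalizer tokens receive. The toy vocabulary, probabilities, interpolation weight, and verbalizer expansions are illustrative assumptions.

```python
import numpy as np

# Toy stand-ins for a real tokenizer, the base LM's next-token probabilities,
# and the distribution induced by retrieved nearest neighbors.
vocab = {"terrible": 0, "silly": 1, "awful": 2, "great": 3, "wonderful": 4, "good": 5}
p_lm = np.array([0.20, 0.05, 0.10, 0.30, 0.15, 0.20])   # parametric LM
p_knn = np.array([0.40, 0.20, 0.15, 0.10, 0.05, 0.10])  # retrieval-based

def interpolate(p_lm, p_knn, lam=0.25):
    """kNN-LM next-token distribution: mix the parametric and retrieval distributions."""
    return (1.0 - lam) * p_lm + lam * p_knn

def classify(p_next, fuzzy_verbalizers):
    """Return the label whose expanded verbalizer tokens get the most probability mass."""
    scores = {
        label: sum(p_next[vocab[w]] for w in words if w in vocab)
        for label, words in fuzzy_verbalizers.items()
    }
    return max(scores, key=scores.get)

# Fuzzy verbalizers for binary sentiment: "terrible" is expanded with
# task-specific synonyms such as "silly", as described above.
fuzzy_verbalizers = {
    "negative": ["terrible", "silly", "awful"],
    "positive": ["great", "wonderful", "good"],
}

print(classify(interpolate(p_lm, p_knn), fuzzy_verbalizers))  # prints the higher-scoring label
```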